File size: 69,768 Bytes
421fea8
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
275
276
277
278
279
280
281
282
283
284
285
286
287
288
289
290
291
292
293
294
295
296
297
298
299
300
301
302
303
304
305
306
307
308
309
310
311
312
313
314
315
316
317
318
319
320
321
322
323
324
325
326
327
328
329
330
331
332
333
334
335
336
337
338
339
340
341
342
343
344
345
346
347
348
349
350
351
352
353
354
355
356
357
358
359
360
361
362
363
364
365
366
367
368
369
370
371
372
373
374
375
376
377
378
379
380
381
382
383
384
385
386
387
388
389
390
391
392
393
394
395
396
397
398
399
400
401
402
403
404
405
406
407
408
409
410
411
412
413
414
415
416
417
418
419
420
421
422
423
424
425
426
427
428
429
430
431
432
433
434
435
436
437
438
439
440
441
442
443
444
445
446
447
448
449
450
451
452
453
454
455
456
457
458
459
460
461
462
463
464
465
466
467
468
469
470
471
472
473
474
475
476
477
478
479
480
481
482
483
484
485
486
487
488
489
490
491
492
493
494
495
496
497
498
499
500
501
502
503
504
505
506
507
508
509
510
511
512
513
514
515
516
517
518
519
520
521
522
523
524
525
526
527
528
529
530
531
532
533
534
535
536
537
538
539
540
541
542
543
544
545
546
547
548
549
550
551
552
553
554
555
556
557
558
559
560
561
562
563
564
565
566
567
568
569
570
571
572
573
574
575
576
577
578
579
580
581
582
583
584
585
586
587
588
589
590
591
592
593
594
595
596
597
598
599
600
601
602
603
604
605
606
607
608
609
610
611
612
613
614
615
616
617
618
619
620
621
622
623
624
625
626
627
628
629
630
631
632
633
634
635
636
637
638
639
640
641
642
643
644
645
646
647
648
649
650
651
652
653
654
655
656
657
658
659
660
661
662
663
664
665
666
667
668
669
670
671
672
673
674
675
676
677
678
679
680
681
682
683
684
685
686
687
688
689
690
691
692
693
694
695
696
697
698
699
700
701
702
703
704
705
706
707
708
709
710
711
712
713
714
715
716
717
718
719
720
721
722
723
724
725
726
727
728
729
730
731
732
733
734
735
736
737
738
739
740
741
742
743
744
745
746
747
748
749
750
751
752
753
754
755
756
757
758
759
760
761
762
763
764
765
766
767
768
769
770
771
772
773
774
775
776
777
778
779
780
781
782
783
784
785
786
787
788
789
790
791
792
793
794
795
796
797
798
799
800
801
802
803
804
805
806
807
808
809
810
811
812
813
814
815
816
817
818
819
820
821
822
823
824
825
826
827
828
829
830
831
832
833
834
835
836
837
838
839
840
841
842
843
844
845
846
847
848
849
850
851
852
853
854
855
856
857
858
859
860
861
862
863
864
865
866
867
868
869
870
871
872
873
874
875
876
877
878
879
880
881
882
883
884
885
886
887
888
889
890
891
892
893
894
895
896
897
898
899
900
901
902
903
904
905
906
907
908
909
910
911
912
913
914
915
916
917
918
919
920
921
922
923
924
925
926
927
928
929
930
931
932
933
934
935
936
937
938
939
940
941
942
943
944
945
946
947
948
949
950
951
952
953
954
955
956
957
958
959
960
961
962
963
964
965
966
967
968
969
970
971
972
973
974
975
976
977
978
979
980
981
982
983
984
985
986
987
988
989
990
991
992
993
994
995
996
997
998
999
1000
1001
1002
1003
1004
1005
1006
1007
1008
1009
1010
1011
1012
1013
1014
1015
1016
1017
1018
1019
1020
1021
1022
1023
1024
1025
1026
1027
1028
1029
1030
1031
1032
1033
1034
1035
1036
1037
1038
1039
1040
1041
1042
1043
1044
1045
1046
1047
1048
1049
1050
1051
1052
1053
1054
1055
1056
1057
1058
1059
1060
1061
1062
1063
1064
1065
1066
1067
1068
1069
1070
1071
1072
1073
1074
1075
1076
1077
1078
1079
1080
1081
1082
1083
1084
1085
1086
1087
1088
1089
1090
1091
1092
1093
1094
1095
1096
1097
1098
1099
1100
1101
1102
1103
1104
1105
1106
1107
1108
1109
1110
1111
1112
1113
1114
1115
1116
1117
1118
1119
1120
1121
1122
1123
1124
1125
1126
1127
1128
1129
1130
1131
1132
1133
1134
1135
1136
1137
1138
1139
1140
1141
1142
1143
1144
1145
1146
1147
1148
1149
1150
1151
1152
1153
1154
1155
1156
1157
1158
1159
1160
1161
1162
1163
1164
1165
1166
1167
1168
1169
1170
1171
1172
1173
1174
1175
1176
1177
1178
1179
1180
1181
1182
1183
1184
1185
1186
1187
1188
1189
1190
1191
1192
1193
1194
1195
1196
1197
1198
1199
1200
1201
1202
1203
1204
1205
1206
1207
1208
1209
1210
1211
1212
1213
1214
1215
1216
1217
1218
1219
1220
1221
1222
1223
1224
1225
1226
1227
1228
1229
1230
1231
1232
1233
1234
1235
1236
1237
1238
1239
1240
1241
1242
1243
1244
1245
1246
1247
1248
1249
1250
1251
1252
1253
1254
1255
1256
1257
1258
1259
1260
1261
1262
1263
1264
1265
1266
1267
1268
1269
1270
1271
1272
1273
1274
1275
1276
1277
1278
1279
1280
1281
1282
1283
1284
1285
1286
1287
1288
1289
1290
1291
1292
1293
1294
1295
1296
1297
1298
1299
1300
1301
1302
1303
1304
1305
1306
1307
1308
1309
1310
1311
1312
1313
1314
1315
1316
1317
1318
1319
1320
1321
1322
1323
1324
1325
1326
1327
1328
1329
1330
1331
1332
1333
1334
1335
1336
1337
1338
1339
1340
1341
1342
1343
1344
1345
1346
1347
1348
1349
1350
1351
1352
1353
1354
1355
1356
1357
1358
1359
1360
1361
1362
1363
1364
1365
1366
1367
1368
1369
1370
1371
1372
1373
1374
1375
1376
1377
1378
1379
1380
1381
1382
1383
1384
1385
1386
1387
1388
1389
1390
1391
1392
1393
1394
1395
1396
1397
1398
1399
1400
1401
1402
1403
1404
1405
1406
1407
1408
1409
1410
1411
1412
1413
1414
1415
1416
1417
1418
text,start,duration
hey everybody David Shapiro here with a,1.38,5.479
video today's video is going to be about,4.38,7.08
doomerism uh denialism uh an alternative,6.859,8.141
perspective optimism as well as a very,11.46,5.22
comprehensive framework that I'm putting,15.0,4.619
together with a lot of folks so let's go,16.68,6.42
ahead and take a look at some ideas and,19.619,4.681
some data,23.1,6.419
so we are all talking about exponential,24.3,7.86
growth if you look at comments across,29.519,5.88
the internet and even mainstream news,32.16,7.079
today talking about the rise of AI one,35.399,6.301
thing that happens is that a lot of,39.239,5.401
people tend to think in terms of linear,41.7,6.3
progress you say oh well 10 years ago we,44.64,5.759
were here and now we're you know now,48.0,4.68
we're there and so 10 years from now,50.399,3.84
we'll basically continue with the same,52.68,3.66
amount of progress that's not actually,54.239,5.701
true when you shorten that time Horizon,56.34,5.76
to say oh well we've made a lot of,59.94,4.14
progress in the last few months maybe,62.1,4.32
that's the new rate of progress that's,64.08,4.74
still not actually true,66.42,4.8
with exponential growth which is what we,68.82,4.86
are seeing right now the actual uh,71.22,5.7
correct assumption to make is that uh is,73.68,5.7
that you know the X amount of time from,76.92,5.64
from now will actually have continued to,79.38,4.44
accelerate,82.56,4.62
now this is a nice lovely handmade graph,83.82,6.659
that is shown in perfect clear data but,87.18,5.34
let me show you some actual some real,90.479,5.1
data about parameter counts in neural,92.52,4.68
networks,95.579,4.22
so here you can see it growing,97.2,5.76
exponentially and then the exponential,99.799,5.32
curve accelerates and it starts growing,102.96,3.78
logarithmically,105.119,5.221
so we are at the knee of the curve,106.74,6.059
already so the knee of the curve is this,110.34,4.02
part right here,112.799,4.68
where the acceleration really starts to,114.36,5.039
take off but the thing is is when you're,117.479,3.541
in the middle of it it's kind of like,119.399,3.241
boiled frog syndrome which we'll talk,121.02,3.9
about a little bit more in just a minute,122.64,6.599
so with this data in mind let's jump,124.92,6.42
into the rest of the video,129.239,5.341
so I mentioned doomerism and denialism,131.34,5.64
and then finally optimism these are kind,134.58,5.34
of the three main categories that people,136.98,5.82
by and large fall into there's also,139.92,5.52
people that are apathetic uh which I,142.8,4.079
didn't include that just because it's a,145.44,4.2
waste of screen space but so doomerism,146.879,6.961
is uh the is the the belief that uh,149.64,8.04
decline collapse Calamity is inevitable,153.84,6.3
that we are going to end up in some sort,157.68,4.62
of Extinction scenario or dystopian,160.14,3.84
outcome and that there's not really,162.3,4.5
anything that we can do to change it so,163.98,4.2
this is why there's been lots of,166.8,3.6
comments around like Malik which the,168.18,4.08
idea of Malik will get into that a,170.4,3.96
little bit as well,172.26,5.4
um so then there's denialism so the,174.36,5.22
denialists basically say there's nothing,177.66,5.46
to see here uh AGI is not even possible,179.58,6.84
or it's still decades away hard takeoff,183.12,6.54
is not possible or it's decades away and,186.42,5.7
uh then finally optimism techno,189.66,5.159
optimists is people like myself who are,192.12,4.8
just like yeah like we can do this these,194.819,4.381
problems are all solvable and it will,196.92,6.42
ultimately end up in the better so what,199.2,6.78
I want to say is is that this is I'm not,203.34,4.679
talking about individuals don't take it,205.98,3.539
personally if you identify with these,208.019,3.061
what I'm talking about here is thought,209.519,4.201
leaders uh people like content creators,211.08,5.34
like myself leading scientists people on,213.72,6.48
Twitter uh basically famous people or,216.42,6.78
respected people in The Establishment in,220.2,7.02
the industry who take these mindsets uh,223.2,6.959
so again not not calling on any,227.22,6.0
particular commenter or fan or people on,230.159,5.881
Reddit or Twitter this is talking about,233.22,4.98
basically like people at my level or,236.04,3.419
above,238.2,2.459
um,239.459,4.14
and also this is like it obviously,240.659,4.621
doesn't fall into this symbol of,243.599,3.241
categories I'm just kind of talking,245.28,3.539
about the the kind of extreme ends most,246.84,3.78
people fall somewhere in the middle like,248.819,3.06
if you were to draw this out on a,250.62,2.88
triangle most people are somewhere in,251.879,3.06
the middle there's a few people at the,253.5,3.54
extreme points I'm an extreme optimist,254.939,3.14
so,257.04,3.84
in in the yellow corner is the extreme,258.079,4.481
optimists in the red and green Corners,260.88,4.259
are the dumerous and denialists,262.56,5.88
um okay so but as promised by the,265.139,4.681
opening title I want to take a,268.44,4.08
sympathetic look at these other uh other,269.82,4.92
uh uh dispositions,272.52,6.06
so Sympathy for the Doomer one it is,274.74,5.76
good to acknowledge the existential,278.58,4.98
risks of AI this has been true of all,280.5,5.22
new technologies whether it's Medical,283.56,4.5
Technology nuclear technology pretty,285.72,4.68
much every new technology today carries,288.06,4.38
with it some level of existential risk,290.4,4.859
right you know the whether it's the,292.44,5.1
ability to do Gene engineering or,295.259,5.16
engineer new uh strains of flu or,297.54,5.099
coronavirus or whatever there's always,300.419,4.5
risks,302.639,5.041
um the doomers understand the potential,304.919,5.821
risk of uncontrolled AGI right the sky,307.68,5.4
is the limit right as people are,310.74,4.08
learning more about what AI is capable,313.08,4.44
of the idea of an Extinction scenario,314.82,5.58
like Skynet is actually not entirely,317.52,4.92
impossible and when you look at the fact,320.4,4.2
that Congress right now is working on,322.44,4.5
passing legislation so that AI will,324.6,4.439
never have uh control over nuclear,326.94,4.02
weapons like they're taking it seriously,329.039,4.38
too right so like there's something here,330.96,5.76
uh there's it's not nothing,333.419,5.101
um so then there's also the recognition,336.72,4.8
for safeguards and regulations and then,338.52,4.56
finally when you just look at the,341.52,3.72
current trends,343.08,3.839
um like stagnant wages and wealth,345.24,3.42
inequality and other evidence of the,346.919,5.701
Malik problem like it doesn't take a you,348.66,6.599
know a a great leap of faith or logic to,352.62,4.139
say what if these Trends continue and,355.259,3.541
get worse which there's no evidence of,356.759,4.201
some of these Trends reversing,358.8,3.66
um then it's like okay well then we are,360.96,3.9
all going to end up in a cyberpunk Hell,362.46,4.98
and then finally these problems are all,364.86,4.32
very large and complex they are global,367.44,4.68
scale problems so what I want to say is,369.18,4.799
I want to acknowledge that these are the,372.12,3.96
primary as far as I can tell the primary,373.979,4.201
concerns of doomers,376.08,5.1
um and uh like there is some legitimacy,378.18,4.44
to this position I'm not saying oh,381.18,3.18
doomers are just flat out wrong you know,382.62,3.359
to ignore them like no these are real,384.36,3.6
things I need to acknowledge that but,385.979,4.201
what I'll get to is like why I'm still,387.96,5.239
optimistic despite all this,390.18,6.959
now to play Devil's Advocate there are,393.199,6.461
some flaws with tumorism which is people,397.139,5.06
that just stick in this corner,399.66,4.979
one is over emphasis on worst case,402.199,3.821
scenarios,404.639,3.301
yes we can think about worst case,406.02,4.14
scenarios but it does not help for us to,407.94,4.08
dwell on worst case scenarios and only,410.16,3.9
worst case scenarios we need to think,412.02,3.48
about the entire spectrum of,414.06,2.699
possibilities,415.5,3.18
another thing that is that is common,416.759,4.261
with some doomers is that they're very,418.68,4.38
dogmatic in their thinking they have,421.02,3.959
come to believe for their own reasons,423.06,3.24
with their own logic and their own,424.979,3.261
research and minds and whatever else,426.3,4.679
that catastrophe is a foregone,428.24,5.079
conclusion they think that it is totally,430.979,5.101
inevitable which results in dogmatic and,433.319,4.1
rigid thinking,436.08,4.92
this mentality discourages Innovation,437.419,5.441
and collaboration they're like ah,441.0,4.38
we're doomed who cares give up just,442.86,4.08
throw your hands up and just let it,445.38,3.24
happen,446.94,3.18
um which creates a distraction from,448.62,4.139
finding real solutions and the ultimate,450.12,4.68
result of this is from an emotional and,452.759,3.72
psychological perspective is that it,454.8,4.14
leads to a sense of nihilism or fatalism,456.479,4.381
so nihilism is the belief that nothing,458.94,4.979
matters anyways uh which this kind of,460.86,6.0
forms a vicious cycle where if you,463.919,4.741
already have a nihilistic attitude and,466.86,3.66
then you believe that between climate,468.66,4.74
change and geopolitics and economics and,470.52,4.92
AI that we're all doomed anyways you,473.4,3.419
might as well give up while you're ahead,475.44,4.379
and that is fatalism so the fatalism and,476.819,5.761
nihilism play off of each other really,479.819,4.32
powerfully,482.58,4.2
um and it just leads to giving up,484.139,4.381
and that is the hopelessness and,486.78,3.3
inaction,488.52,4.26
um so again I do want to sympathize with,490.08,4.5
the doomers and say yes these are really,492.78,5.52
difficult problems and uh the our,494.58,6.0
success and survival as a species is not,498.3,4.38
guaranteed it is not a foregone,500.58,4.44
conclusion even for us optimists that we,502.68,3.859
will come out in a better place,505.02,3.959
generally speaking over the last century,506.539,4.6
we have come out in a better place in,508.979,4.44
the long run it can get pretty awful in,511.139,3.601
the short term,513.419,2.941
um and then but it's also not evenly,514.74,3.359
distributed life gets better for some,516.36,3.359
people worse for others,518.099,4.261
so you know it is important to raise the,519.719,4.141
alarm but,522.36,3.479
you know we can't we can't just dwell,523.86,2.94
right,525.839,3.421
all right so Sympathy for the denier so,526.8,4.2
the deniers and again I'm not trying to,529.26,3.48
call out anyone by name I'm not trying,531.0,3.54
to start Twitter beefs and YouTube beefs,532.74,4.38
I'm just giving my perspective so,534.54,4.979
Sympathy for the denier these are the,537.12,3.899
people that have said yeah we've been,539.519,4.681
promised AGI for like 60 years right I,541.019,4.38
remember what was it there was a,544.2,2.759
Consortium that was launched in like,545.399,3.721
Stanford or something back in the 60s or,546.959,3.901
70s and they're like oh yeah with a,549.12,3.12
summer of work we should be able to,550.86,4.14
figure out you know uh artificial,552.24,4.38
intelligence and then here we are like,555.0,5.459
40 or 60 years later and no,556.62,5.58
um so yeah you know it's just like,560.459,3.661
nuclear fusion right it's always 10,562.2,3.78
years away or 20 years away,564.12,4.62
so progress up to this point has been,565.98,7.14
slow that is true uh there's a um on on,568.74,7.98
the deniers side there is an emphasis,573.12,5.82
more on like yeah it's you know AI is,576.72,3.96
helpful and it could have some potential,578.94,4.74
benefits but we shouldn't rely on this,580.68,5.46
it's not a Magic Bullet right and that's,583.68,4.92
that's always true like AI will change,586.14,4.68
everything just the same way that steel,588.6,4.14
and coal and steam power and internal,590.82,3.72
combustion engines changed everything,592.74,3.599
but it didn't solve The World's problems,594.54,4.919
it it solved a bunch of problems created,596.339,6.721
new problems and changed a lot of stuff,599.459,7.681
um another uh uh benefit for for the,603.06,6.779
deniers is that they're like hang on you,607.14,4.86
know tap the brakes like let's not,609.839,4.56
overreact let's not over uh like,612.0,4.8
regulate or you know fear monger and I I,614.399,3.901
do appreciate some of those comments,616.8,3.42
actually it's like some of the deniers,618.3,3.24
out there are like enough with the,620.22,3.84
fear-mongering like I don't care,621.54,5.76
um and then you know just we we have,624.06,5.279
survived 100 of everything that has come,627.3,4.62
our way so far and so like nothing has,629.339,4.981
exploded yet the where's the fire right,631.92,5.34
so there is some validity to the,634.32,4.86
perspective of deniers out there which,637.26,3.36
you know one of the things that they say,639.18,2.64
is there's nothing to see here right,640.62,3.779
nobody panic which you always need that,641.82,4.62
kind of energy too like you in any,644.399,3.721
society you want people raising the,646.44,3.54
alarm and other people tapping the,648.12,4.08
brakes we all have our purpose just like,649.98,5.34
us optimists also have our role to play,652.2,5.34
now there are some flaws with the,655.32,3.36
denialism,657.54,5.28
so one is from my perspective deniers,658.68,6.659
seem to underestimate the potential,662.82,4.8
risks especially when they say AGI is,665.339,4.44
not possible hard takeoff isn't possible,667.62,5.159
or these things are decades away,669.779,5.941
um another possibility is just like not,672.779,4.8
not fully thinking through like okay,675.72,3.42
even if there's a five percent chance,677.579,3.961
that AGI is happening within the next,679.14,4.8
five years only a five percent chance,681.54,4.5
what like still think through the cost,683.94,4.079
of that right like look at Bayes theorem,686.04,3.479
like okay there's a five percent chance,688.019,3.601
that AGI is going to happen and if we,689.519,3.781
don't do it right there is a very high,691.62,3.48
likelihood that like we're all gonna die,693.3,5.7
or end up in worse uh situation so in,695.1,6.06
action the cost the potential cost of,699.0,4.74
inaction or under action under reaction,701.16,5.28
is still pretty high,703.74,6.42
um another thing is is two uh or two two,706.44,5.459
things are exponential growth and,710.16,4.38
saltatory leaps so exponential growth,711.899,4.081
which I provided some evidence for at,714.54,2.52
the beginning of this video that's,715.98,3.24
happening that is a fact,717.06,3.959
and then actually let me go back to this,719.22,4.619
so this here where you have this Gap up,721.019,5.461
this is actually a mathematical evidence,723.839,5.221
of a what's called a saltatory leap so a,726.48,4.5
saltatory leap is when some breakthrough,729.06,4.98
or some uh compounding returns of,730.98,5.94
incremental progress result in sudden,734.04,4.739
breakthroughs that you could not have,736.92,3.659
predicted because if you just look at,738.779,3.721
this trend line you'd predict like okay,740.579,3.601
we wouldn't be here for another couple,742.5,4.32
years but we're here now right so that,744.18,4.92
that's a saltatory leap so you have to,746.82,5.28
acknowledge that saltatory leaps not,749.1,4.979
just do happen sometimes have happened,752.1,4.679
recently and if it's happened recently,754.079,4.981
it might happen again,756.779,5.281
um the lack of urgency by saying eh it's,759.06,5.7
decades away again you know you got to,762.06,3.959
think through it like okay but what if,764.76,2.699
it's not,766.019,5.041
um and the the the taking a big step,767.459,5.521
back the nothing to see here messaging,771.06,5.16
might lead to boiled frog syndrome the,772.98,5.039
the temperature is rising quickly this,776.22,3.54
year I think a lot of us agree on that,778.019,3.841
and so well you get used to it right,779.76,5.16
okay it's warmer than it was but it's,781.86,5.88
not hot yet thing is the time between it,784.92,4.44
gets warm and it gets hot and it starts,787.74,5.039
boiling that time could be shortening,789.36,6.719
so the social impacts of these of of,792.779,6.18
when thought leaders adopt more extreme,796.079,6.06
uh stances such as doomerism or deny uh,798.959,6.601
denialism is uh basically just a quick,802.139,6.421
recap the doomers create nihilism and,805.56,6.779
fatalism which discourages uh proactive,808.56,5.219
Solutions because they say that there is,812.339,3.421
no solution right that is one of the,813.779,3.601
underpinning assumptions of doomers is,815.76,4.079
that it's inevitable it's unavoidable,817.38,5.459
there is no solution don't even try,819.839,4.56
um and this promotes fear and anxiety,822.839,4.261
right which yes fear and anxiety are,824.399,5.041
evolutionarily there for a reason to,827.1,3.96
motivate us to do something about a,829.44,3.18
problem that we perceive,831.06,4.26
but too much of a of of an important,832.62,4.32
thing can still be bad,835.32,5.04
finally our next the impact of denialism,836.94,5.22
or denialism,840.36,3.599
um is that there's a false sense of,842.16,3.66
security right and we don't have to,843.959,4.021
worry about it eh it's not coming for a,845.82,4.74
long time right that's complacency and,847.98,5.88
act and inaction which undermines some,850.56,4.44
of the rest of us who are working and,853.86,3.0
saying actually this might be something,855.0,3.3
that we need to think a little bit,856.86,3.36
further ahead on,858.3,4.02
um because think about last year right,860.22,5.46
how um AI images exploded onto the scene,862.32,5.94
and nobody was ready for it right what,865.68,5.58
if the next thing that happens is not,868.26,5.22
just AI images or AI music or something,871.26,4.44
like that but something a little bit,873.48,3.9
more profound a little bit more,875.7,3.18
significant that we just weren't ready,877.38,3.3
for which means that the time to start,878.88,3.84
preparing for those things that we know,880.68,4.44
are coming eventually and we don't know,882.72,4.26
when that Jack-in-the-Box is gonna pop,885.12,4.079
the time to prepare is now,886.98,4.44
so some of the some of the consequences,889.199,4.981
that occur because of these the the,891.42,5.46
messaging is one polarization of of,894.18,5.279
public opinion some people are bending,896.88,4.62
over backwards to say more jobs are,899.459,4.38
coming let's not even think about you,901.5,5.519
know AI based war or the control problem,903.839,5.461
or anything like that uh what meanwhile,907.019,5.341
others are like ah no we're you know,909.3,4.92
lead the leaders around the world are,912.36,4.14
not even addressing these risks they're,914.22,4.679
just sitting on their on their hands so,916.5,4.86
we're doomed right because if the adults,918.899,3.901
in the room don't care or don't think,921.36,3.719
it's a problem but all of the kids are,922.8,3.779
like hey do you see that the house is on,925.079,3.301
fire like maybe we should put that out,926.579,4.26
first right that leads to more nihilism,928.38,5.699
more fatalism a lot of polarization,930.839,5.641
and then of course uh the Overton window,934.079,4.2
is still too narrow so the Overton,936.48,3.719
window is the concept of what is allowed,938.279,3.841
to be talked about in political,940.199,5.521
discourse so you know if you if you,942.12,5.159
follow my channel every now and then,945.72,3.179
I'll post links to videos like hey look,947.279,3.661
you know the conversation is Shifting,948.899,5.661
right just yesterday I posted a a a a a,950.94,6.48
video from DW news which is in Germany,954.56,5.74
where they try to address like hey,957.42,5.64
there's actual anxiety about like,960.3,5.58
everyone's jobs are going away right and,963.06,5.76
they bent over real God I listened to it,965.88,4.92
again they bent over backwards to try,968.82,4.319
and say well yeah a lot of low-paid jobs,970.8,3.96
are going away but there's a few high,973.139,3.901
paid jobs coming in and it's like okay,974.76,4.199
but still the point is is that most of,977.04,3.479
the medium and low-paid jobs are going,978.959,3.781
away and being replaced by a few,980.519,4.62
high-paying jobs that's not the promise,982.74,4.62
of like techno Revolution where AI,985.139,4.5
creates a whole bunch of new jobs,987.36,5.4
um and then I think it was Amazon uh or,989.639,5.101
Facebook one of them just an announced,992.76,4.68
even more layoffs and they explicitly,994.74,4.8
said that the reason for the layoffs is,997.44,3.12
that they're going to replace as many,999.54,3.599
people with AI as possible I called it,1000.56,6.12
I've been saying it so it's happening,1003.139,5.64
um so the Overton window is Shifting now,1006.68,5.599
okay why is this a big problem why like,1008.779,6.3
fundamentally why is it that some people,1012.279,5.92
are dimerous and denialists and what,1015.079,5.401
what is left in the wash what is missing,1018.199,4.981
from the conversation so one thing,1020.48,4.68
that's missing is there is not a,1023.18,4.44
coherent Global strategy,1025.16,5.22
and so what I mean by that is everyone's,1027.62,5.4
busy arguing you know in this little,1030.38,5.039
domain or this little domain uh you know,1033.02,5.279
about corporate governance or academic,1035.419,5.701
Integrity or should we have a moratorium,1038.299,5.581
right there's not really a global,1041.12,4.439
strategy no one has even proposed,1043.88,3.24
anything,1045.559,5.761
um then on top of that is as I mentioned,1047.12,5.64
just a moment ago,1051.32,3.66
uh calling for moratoriums is not a,1052.76,3.539
solution,1054.98,2.699
um that's not even that's not even a,1056.299,2.941
stopgap measure,1057.679,3.24
um and so when all the thought leaders,1059.24,3.6
in the world when none of them are,1060.919,3.841
really offering Solutions of course,1062.84,4.44
you're going to end up with a lot of uh,1064.76,4.98
bickering and arguing and also a lot of,1067.28,5.54
anxiety right we are humans and we love,1069.74,6.48
love when there are adults in the room,1072.82,5.979
that we trust to help make good,1076.22,4.5
decisions and to make sure that we're,1078.799,4.681
going to be okay right and right now on,1080.72,5.579
the topic of AI there's nobody really,1083.48,4.74
out there saying we're gonna be okay,1086.299,4.321
I've got a plan,1088.22,5.52
um and then uh on top of that Global,1090.62,4.799
strategy is a comprehensive roadmap,1093.74,4.819
right kind of the same thing,1095.419,3.14
said a lot of this stuff but really what,1098.78,4.259
we need is a is that Global,1100.34,4.5
comprehensive roadmap and a,1103.039,4.441
multi-layered approach to solving all,1104.84,4.92
these problems at all these different uh,1107.48,3.66
levels,1109.76,3.24
so I've already alluded to some of these,1111.14,3.36
things there's quite a bunch of stuff,1113.0,3.299
that doesn't work right calling for,1114.5,4.14
moratoriums just simply does not work,1116.299,4.62
we'll get into more detail about why,1118.64,6.06
moratoriums don't work and and and uh,1120.919,5.401
and all the incentives against it in,1124.7,3.359
just a moment another thing that doesn't,1126.32,4.14
work is bombing data centers sorry that,1128.059,4.921
is a really bone-headed suggestion,1130.46,5.099
uh complaining on Twitter writing op-eds,1132.98,4.98
writing mean comments on YouTube none of,1135.559,4.98
these things are actually helpful and,1137.96,3.839
another thing that's not helpful is,1140.539,2.88
actually just trusting corporations or,1141.799,3.061
the establishment to figure it out on,1143.419,2.281
their own,1144.86,4.02
we are all all humans Global,1145.7,5.96
stakeholders in AI,1148.88,5.82
so all these these the this list of,1151.66,4.36
stuff that I've just have that that,1154.7,2.88
doesn't work they're all molecule,1156.02,3.779
reactions and molecule Solutions which,1157.58,3.3
basically means that they will,1159.799,3.181
inevitably lead to those lose-lose,1160.88,4.02
outcomes that the doomers are are,1162.98,4.079
warning us against right again I'm not,1164.9,4.2
saying that the doomers are wrong if,1167.059,3.721
things keep going as they are the,1169.1,4.5
doomers are right I just I personally,1170.78,6.18
don't ascribe to constantly yelling fire,1173.6,6.12
and then claiming you know we're all,1176.96,4.44
gonna die,1179.72,5.579
okay so I outlined the big problems now,1181.4,5.82
what,1185.299,4.26
this video the entire purpose is to,1187.22,4.92
introduce kind of the crowning,1189.559,4.5
achievement so far of What Not Just I'm,1192.14,4.62
working on but the the rapidly growing,1194.059,5.401
community that I'm building,1196.76,4.74
um uh what started around the years to,1199.46,5.04
comparatives my research on alignment,1201.5,6.059
for individual models and agents it has,1204.5,6.78
quickly expanded so this gato framework,1207.559,7.62
Global alignment taxonomy Omnibus is,1211.28,6.42
that comprehensive strategy that I just,1215.179,4.74
mentioned that is missing it is not just,1217.7,4.56
for responsible AI development but is a,1219.919,5.461
coherent road map that everyone on the,1222.26,5.58
planet can participate in at various,1225.38,4.86
levels whatever level makes the most,1227.84,3.66
sense to you,1230.24,5.4
this framework has seven layers on ways,1231.5,8.16
to implement uh models AI systems and,1235.64,6.539
also alignment uh alignment-based,1239.66,4.8
regulations and we'll get into all the,1242.179,4.201
layers in just a moment,1244.46,5.579
uh but basically the the whole point of,1246.38,5.22
this gato framework that we're working,1250.039,4.861
on is that it will unite all,1251.6,5.28
stakeholders give us a common framework,1254.9,5.04
with which to have these discussions to,1256.88,5.28
broaden the Overton window to open the,1259.94,4.38
Overton window a little bit more so,1262.16,4.5
whatever part of the spectrum you're on,1264.32,4.859
whether you're saying eh it's not really,1266.66,4.379
an issue yet or we're all going to die,1269.179,4.261
or you don't care or you're an optimist,1271.039,4.981
whatever this is a framework that we can,1273.44,4.56
all participate in,1276.02,4.86
um just in a decentralized distributed,1278.0,5.64
and open source manner,1280.88,5.52
so as promised here are the seven layers,1283.64,5.1
of the gato framework and in the,1286.4,4.08
community we started saying that it's,1288.74,3.6
like a seven layer burrito so we use,1290.48,5.1
like taco cat as our little Avatar so,1292.34,5.88
layer one the lowest layer is model,1295.58,5.64
alignment so model alignment has to do,1298.22,6.0
with individual neural networks so that,1301.22,8.52
means gpt2 gpt3 gpt4 Bert vicuna uh,1304.22,8.28
stable LM all of these right large,1309.74,5.34
language models are proliferating like,1312.5,4.559
well I don't know just like locusts,1315.08,3.66
whatever,1317.059,4.441
it's happening right data sets are,1318.74,5.16
growing models are growing they're all,1321.5,4.799
coming out uh the cat's out of the bag,1323.9,5.04
right language technology multimodal,1326.299,4.801
technology it's all coming you can't,1328.94,3.719
stop it,1331.1,3.48
um so rather than stop it rather than,1332.659,4.801
call for moratoriums what we're doing is,1334.58,5.459
we're focusing on okay let's ride this,1337.46,5.06
wave I all have already proposed,1340.039,4.741
reinforcement learning with heuristic,1342.52,3.519
imperatives which is different from,1344.78,2.34
reinforcement learning with human,1346.039,3.661
feedback because human feedback aligns,1347.12,5.28
models to what humans want which what,1349.7,4.56
humans want and what humans need often,1352.4,4.259
very very different here is to,1354.26,3.84
comparatives is not just what humans,1356.659,4.621
want but what all life needs we're also,1358.1,4.74
talking about data set curation and,1361.28,3.06
inner alignment problems like Mesa,1362.84,3.12
optimization,1364.34,6.06
Layer Two is sorry autonomous systems so,1365.96,6.18
these are cognitive architectures and,1370.4,3.38
autonomous agents,1372.14,4.919
this is this is recently exploded on the,1373.78,4.48
scene with,1377.059,4.381
um you know Jarvis and baby AGI and,1378.26,5.76
agent GPT and all that fun stuff so you,1381.44,4.38
guys know what that is and it's coming,1384.02,3.12
and it's only going to get more,1385.82,3.359
sophisticated we're on the ground floor,1387.14,4.62
of autonomous systems this is year zero,1389.179,6.061
year two three four five like you can't,1391.76,5.58
on you cannot imagine how powerful,1395.24,4.679
autonomous systems are going to be in,1397.34,5.4
the coming years so at the at the the,1399.919,4.981
low level the engine level right the,1402.74,4.2
components under the hood that's the,1404.9,4.74
models the autonomous systems are the,1406.94,4.979
software architectures that use those,1409.64,3.96
systems including memory systems and,1411.919,4.021
apis and other stuff to create those,1413.6,5.52
autonomous cognitive entities right,1415.94,6.66
layer 3 is the decentralized network so,1419.12,5.16
you might have seen some of my recent,1422.6,2.6
videos where I've talked about,1424.28,3.3
blockchain decentralized autonomous,1425.2,4.959
organizations and also another component,1427.58,4.5
of that is what's called a federation so,1430.159,4.741
a federation is where you have either,1432.08,4.44
independent nodes or independent,1434.9,3.48
networks that can communicate and,1436.52,5.039
collaborate through Federated systems so,1438.38,8.039
these are the the network layer is how,1441.559,7.86
do we create networked intelligent,1446.419,6.961
entities that are also aligned and this,1449.419,5.701
is a tough nut to crack we've had lots,1453.38,3.419
of discussions in the group talking,1455.12,3.72
about can you implement Heroes to,1456.799,5.941
comparatives as a consensus mechanism at,1458.84,5.459
what level do you process it do you,1462.74,5.64
process it at every llm inference or do,1464.299,6.301
you wait for the decisions how do you,1468.38,4.679
make decisions around this kind of thing,1470.6,5.52
excuse me real tough nut to crack number,1473.059,6.0
four is where we jump from the technical,1476.12,5.34
implementation and research to more of,1479.059,5.221
the social political and economic uh,1481.46,4.5
layer of the stack,1484.28,3.36
and for all of you technologists out,1485.96,4.32
there you can probably see,1487.64,5.34
um my influence as a as a technologist,1490.28,5.04
because this is it's not modeled on the,1492.98,5.04
osm OSI model it's actually more closely,1495.32,4.92
modeled on the defense and depth model,1498.02,5.58
but it is a layered hierarchical stack,1500.24,6.6
or onion of uh of Concepts so corporate,1503.6,4.199
adoption,1506.84,2.339
here's the thing,1507.799,3.841
you cannot just tell a corporation you,1509.179,4.201
know what stop with the AI we don't we,1511.64,3.72
don't like where AI is going sure you,1513.38,4.74
can try to with regulation uh but you,1515.36,4.5
know like Italy tried to do that and,1518.12,3.9
then they reverse course right,1519.86,4.26
there's just way too much economic,1522.02,4.86
incentive the bottom line you know that,1524.12,4.76
is if you're if you're a corporation,1526.88,4.5
shareholders and the bottom line that's,1528.88,4.659
where the power is so rather than fight,1531.38,4.38
that part of what this framework does is,1533.539,4.321
say let's how let's figure out how we,1535.76,4.56
can align those heuristic imperatives,1537.86,4.26
reduce suffering increase prosperity and,1540.32,4.38
increase understanding how can we align,1542.12,5.22
those fundamental human needs the,1544.7,4.26
fundamental needs of all living things,1547.34,3.66
with corporate interest,1548.96,4.68
and so one story that I like to share is,1551.0,4.5
that I've had a few patreon uh,1553.64,3.659
supporters reach out to me and they're,1555.5,3.36
like hey I've got this autonomous system,1557.299,3.0
that I'm working on but it's like it's,1558.86,3.96
getting stuck or I need help or whatever,1560.299,5.041
um or even without asking for my help uh,1562.82,4.2
they said like hey I implemented the,1565.34,3.24
heroes to comparatives in my autonomous,1567.02,3.0
Business Systems and they work better,1568.58,5.699
and I'm like thanks share so like if you,1570.02,5.82
have any of those examples please post,1574.279,2.941
them on Reddit on the heroes to,1575.84,3.48
comparative subreddit because we need we,1577.22,3.839
need more of those stories about how,1579.32,3.78
aligned AI systems are actually good for,1581.059,4.381
business it's that simple the bottom,1583.1,4.86
line like I I will always say that,1585.44,4.56
corporations are intrinsically amoral,1587.96,5.219
however what I will say is that is that,1590.0,4.919
their profit motive their primary,1593.179,3.12
incentive structure which is to make,1594.919,4.62
more money will benefit from adopting,1596.299,5.101
heuristic comparative aligned systems,1599.539,4.081
services and products which I also we,1601.4,3.54
also have some members of the community,1603.62,3.12
who are working on spinning this out,1604.94,3.18
into either for-profit or not,1606.74,4.679
not-for-profit services and of course,1608.12,4.799
we're going to be publishing open source,1611.419,3.601
data sets reference architectures that,1612.919,3.661
sort of stuff to make it as easy as,1615.02,3.899
possible for corporations all over the,1616.58,5.459
world to adopt aligned AI,1618.919,5.041
uh and we're going to work on convincing,1622.039,3.24
them that this is the way to go too,1623.96,3.86
number five National regulations,1625.279,5.041
obviously as I just mentioned you know,1627.82,5.08
corporations can or sorry Nations can do,1630.32,4.28
some stuff like people pointed out like,1632.9,5.159
gdpr uh European unions like you know,1634.6,6.459
big package about like a uh data privacy,1638.059,5.161
and stuff and certainly as an I.T,1641.059,3.261
professional,1643.22,3.9
people on the technology side are,1644.32,5.14
terrified of gdpr right that's got some,1647.12,4.02
teeth right you know right to be,1649.46,3.78
forgotten where the data is owned and,1651.14,4.44
housed and data governance okay great,1653.24,6.179
that's all fine but see the thing is is,1655.58,5.82
Nations have their own incentive,1659.419,4.14
structure where it comes to Ai and what,1661.4,4.56
I mean by that is uh the the national,1663.559,4.441
interests of companies has to do with,1665.96,4.92
their own GDP as a whole so this is a,1668.0,6.539
big difference gdpr was about uh like,1670.88,5.64
data privacy for Citizens and social,1674.539,5.341
media it wasn't as directly tied to like,1676.52,6.899
the national growth of their G ADP it,1679.88,5.52
wasn't necessarily directly tied to,1683.419,3.661
their geopolitical influence or their,1685.4,4.68
military or their National Security,1687.08,4.92
AI today though,1690.08,4.199
is all of those things and more,1692.0,5.46
because GDP growth geopolitical,1694.279,5.101
influence National Security border,1697.46,4.38
security whatever all of that has to do,1699.38,4.44
those are the national interests that we,1701.84,4.62
are going to be working on aligning AI,1703.82,6.0
with and basically the long story short,1706.46,6.36
is at a national level We're not gonna,1709.82,4.92
we're not going to say hey Nations maybe,1712.82,4.02
you shouldn't adopt AI maybe you should,1714.74,3.179
slow it down maybe you should just,1716.84,3.12
regulate it we're going to be actually,1717.919,3.481
more I'm not going to say that like,1719.96,3.42
we're accelerationists because like you,1721.4,3.779
don't need to push the to go any faster,1723.38,3.48
right I'm not advocating for,1725.179,3.721
accelerationism I'm just observing that,1726.86,4.26
acceleration is happening so how do we,1728.9,4.639
steer it right and the idea is,1731.12,5.34
encouraging Nations to adopt Heroes,1733.539,5.921
comparative aligned uh models services,1736.46,5.699
and systems because at every level of,1739.46,4.92
government that will help steer the,1742.159,4.081
nation in a better Direction and their,1744.38,4.5
implementations will be safer more,1746.24,4.799
reliable more trustworthy so on and so,1748.88,4.26
forth and of course stability is good,1751.039,3.24
for business it's good for the account,1753.14,3.32
economy it's good for National Security,1754.279,5.52
and all that other fun stuff next up is,1756.46,5.5
number six uh layer six International,1759.799,4.681
treaties so I actually did wasn't the,1761.96,4.02
first one to come up with this idea but,1764.48,3.48
basically we're going to be advocating,1765.98,4.679
for an international Consortium like,1767.96,4.92
CERN but for AI because here's the other,1770.659,4.02
thing and a lot of people pointed this,1772.88,4.14
out is that a lot of Nations,1774.679,5.22
cannot even afford to participate in AI,1777.02,5.7
research right AI research is carried,1779.899,4.561
out largely by the wealthiest companies,1782.72,3.0
on the planet and the wealthiest,1784.46,2.76
countries on the planet,1785.72,3.42
that's going to intrinsically leave a,1787.22,4.439
lot of other nations uh behind in the,1789.14,4.98
dust right and that's just not fair that,1791.659,4.201
is a malarchy outcome where there's a,1794.12,3.419
few wealthy bastions and the rest are,1795.86,4.98
poor and they end up basically like,1797.539,6.26
tossed on the on the rough Seas of an AI,1800.84,5.64
saturated world so what we're going to,1803.799,5.081
do is we're going to advocate for a,1806.48,5.64
global international Consortium where uh,1808.88,6.899
people people Nations pool resources,1812.12,5.88
share their scientists share their,1815.779,5.101
research share their data so that we can,1818.0,5.46
all benefit equally across the whole,1820.88,6.539
globe which that also uh has uh knock-on,1823.46,6.079
benefits with in terms of alliances,1827.419,4.681
economic benefits because you look at,1829.539,4.721
like everyone's going to benefits from,1832.1,4.319
from like CERN and the collaborations,1834.26,4.44
between like NASA and Esa and and that,1836.419,3.721
sort of stuff so International,1838.7,3.959
scientific treaties generally one,1840.14,3.72
they've got a pretty good track record,1842.659,3.24
and two we've got a good model for them,1843.86,3.6
so we're just basically saying let's,1845.899,4.861
copy the success of NASA Issa of CERN,1847.46,6.24
and let's do it for AI again that's not,1850.76,5.46
like you know we're not this is nothing,1853.7,3.839
Earth shattering right it's been done,1856.22,2.88
before we're just saying maybe it is,1857.539,3.781
time to do this with AI and finally,1859.1,5.1
layer 7 of the gato framework is global,1861.32,5.76
consensus so Global consensus has to do,1864.2,5.219
with messaging,1867.08,5.459
um uh working with universities academic,1869.419,5.401
institutions uh industrial sectors,1872.539,5.701
National sectors uh social media right,1874.82,6.3
or all media really because if we can,1878.24,5.819
build consensus in every sector in every,1881.12,5.059
domain and at every level of society,1884.059,6.961
then consensus around how to uh align AI,1886.179,7.0
so that we all end up in a more utopian,1891.02,3.06
state,1893.179,3.48
the utopian attractor State rather than,1894.08,5.219
dystopia or Extinction then we're going,1896.659,4.561
to have a lot more energy right that,1899.299,3.961
Overton window is going to be aligned in,1901.22,4.14
the correct direction rather than you,1903.26,3.06
know because right now the Overton,1905.36,3.24
window is highly highly centered over,1906.32,4.979
we're all going to die or nothing is,1908.6,6.179
happening but really the truth is well,1911.299,5.461
those are possibilities but the Overton,1914.779,4.081
window needs to be broadened and that is,1916.76,4.139
one of the key components of global,1918.86,3.78
consensus,1920.899,4.38
so I just threw a lot at you and this,1922.64,5.82
all sounds really good Pie in the Sky uh,1925.279,5.461
you know blah blah right there's,1928.46,3.66
probably some skepticism so let's,1930.74,4.08
address that this all started as a very,1932.12,5.039
small Discord Community where I just,1934.82,3.9
wanted to bring some people together to,1937.159,3.061
help me do Heroes to comparatives,1938.72,4.92
research and it quickly very quickly,1940.22,5.22
scaled up,1943.64,4.74
um we to as of this recording we have I,1945.44,5.04
think right around just shy of 70 people,1948.38,5.22
involved and more people coming all the,1950.48,4.62
time we're actually having to work on,1953.6,4.14
figuring out ways of automating the,1955.1,4.74
recruiting the applications and the,1957.74,3.419
onboarding which we haven't figured out,1959.84,4.079
yet but we need to,1961.159,4.681
um we're organizing teams and projects,1963.919,4.38
around each layer of gato that I just uh,1965.84,4.02
outlined and so you can see those here,1968.299,4.5
on the right hand side so if you're a,1969.86,4.62
reinforcement learning researcher or an,1972.799,4.26
ml researcher or a data scientist we,1974.48,5.1
need your help with layer one if you're,1977.059,4.081
a software architect or a cloud,1979.58,4.079
architect or someone or devops someone,1981.14,5.34
who understands Automation and complex,1983.659,4.681
systems we need your help in Layer Two,1986.48,4.319
autonomous systems we've got a whole,1988.34,5.4
bunch of blockchain endow people working,1990.799,5.1
with us on layer three which is such a,1993.74,3.72
cool topic because where this is like,1995.899,3.541
super Cutting Edge also we're going to,1997.46,3.42
eat our own dog food we're already,1999.44,5.88
working on using Dows to help voting and,2000.88,6.0
decision making and allocation of,2005.32,4.32
resources within this project obviously,2006.88,4.98
as I've said in many of my videos a lot,2009.64,4.139
of blockchain and DOW technology is not,2011.86,4.319
ready but we are going to eat our own,2013.779,4.38
dog food and make sure that we are,2016.179,4.021
testing these things so that they'll do,2018.159,3.36
the things that we say that they need to,2020.2,2.819
do right we're going to figure it out as,2021.519,4.441
as we go number four corporate adoption,2023.019,5.821
we have a few entrepreneurs and Business,2025.96,5.819
Leaders we've got uh several ctOS in the,2028.84,4.8
group we need more connections to,2031.779,3.9
business and industry this means,2033.64,4.2
conferences this means,2035.679,4.38
um meetups this means,2037.84,4.8
um people on boards right A lot of my,2040.059,4.261
patreon supporters are business people,2042.64,3.659
and so like I work with them directly,2044.32,4.039
but we need more of that we need people,2046.299,5.401
uh working to evangelize,2048.359,5.081
um not just not just like saying hey,2051.7,3.659
Corporation you should adopt your,2053.44,3.239
heuristic imperatives and then leaving,2055.359,3.54
it at that we have startups that we're,2056.679,4.801
working with because the the companies,2058.899,4.801
offering aligned Services don't exist,2061.48,4.439
yet so we're helping incubate those,2063.7,3.36
things and I don't mean from a financial,2065.919,3.18
perspective but from a consultation,2067.06,5.039
perspective and so because if the if hi,2069.099,5.401
aligned Services goods and services,2072.099,5.401
exist companies can adopt them but until,2074.5,5.399
they exist they can't be adopted really,2077.5,5.159
number five National regulation we're,2079.899,5.7
just starting to have this conversation,2082.659,4.561
um actually just a conversation I had,2085.599,3.901
just a little while ago had to do with,2087.22,5.34
uh talking with some of the uh policy,2089.5,6.3
makers and lawyers and legislators that,2092.56,5.039
are concerned about this kind of stuff,2095.8,3.24
so for instance,2097.599,3.541
um the vice president uh I don't know if,2099.04,4.2
it's today but soon we'll be talking,2101.14,4.86
with all of the big Tech Giants right so,2103.24,3.66
we need to have more of those,2106.0,3.3
conversations and we need to add some of,2106.9,4.92
uh some of our perspective from the gato,2109.3,3.9
framework,2111.82,4.32
um into those National conversations but,2113.2,4.7
not just from it not just from a,2116.14,3.719
regulatory standpoint of the nation,2117.9,4.36
looking down into the nation the,2119.859,4.081
nation's looking up and out to the rest,2122.26,3.24
of the world because as I mentioned,2123.94,4.32
National Security that is a huge thing,2125.5,4.8
GDP growth that is a big thing in,2128.26,4.5
geopolitical influence AI is going to,2130.3,5.039
affect all of these domains number six,2132.76,5.7
uh the international treaty again we,2135.339,5.821
need we need people that are connected,2138.46,5.159
to the UN,2141.16,3.24
um,2143.619,4.321
uh maybe NATO I don't know oecd all,2144.4,6.48
kinds of stuff uh UNESCO there's all,2147.94,4.5
kinds of international organizations,2150.88,4.02
that we would like to be connected with,2152.44,5.34
and work with and talk to in order to,2154.9,5.4
have these conversations and,2157.78,4.02
by and large just make the right,2160.3,3.299
connections so that these conversations,2161.8,4.14
are happening and we can articulate the,2163.599,5.161
gato framework and get it published and,2165.94,5.58
then finally layer 7 Global consensus we,2168.76,4.14
have writers we have graphic,2171.52,3.0
communicators we've got editors we've,2172.9,3.66
got audio Engineers,2174.52,4.76
um we're working with uh people all over,2176.56,6.12
even more influencers have excuse me,2179.28,6.22
reached out to me so I'm uh I'm going to,2182.68,4.74
be having conversations with them so,2185.5,6.18
that we can all align on this consensus,2187.42,6.6
and then here's our uh here's our our,2191.68,4.38
mascot it's our own version of taco cat,2194.02,5.16
so again you know gato cat and then you,2196.06,7.26
know seven layered Taco you get the idea,2199.18,6.48
um okay so you're probably glazing over,2203.32,4.14
at this point but you've got the meat of,2205.66,3.3
it so if you're really really super,2207.46,4.02
interested in the layers let's take a,2208.96,4.56
look at the layers of Gato in a little,2211.48,5.22
bit bigger depth so number one of uh,2213.52,5.22
layer one model alignment fine tuning,2216.7,3.72
the very first experiment that I,2218.74,4.02
published was on fine tuning large,2220.42,4.14
language models so that they are aligned,2222.76,4.38
number two reinforcement learning again,2224.56,4.44
that is the goal is how do you create,2227.14,4.08
the data sets in the systems and the,2229.0,6.0
signals in order to have uh models that,2231.22,6.06
not only are initially aligned to,2235.0,4.32
heuristic imperatives and human needs,2237.28,4.2
and the needs of all life but how do you,2239.32,3.72
make sure that they get better at that,2241.48,3.3
over time right that is the entire,2243.04,4.2
purpose of heuristics heuristics uh,2244.78,3.839
heuristic imperatives and reinforcement,2247.24,4.56
learning basically the same thing,2248.619,5.281
um at least here's the comparatives are,2251.8,3.84
reinforcement learning on a specific,2253.9,2.939
trajectory,2255.64,4.74
model bias so there's uh there's a lot,2256.839,5.821
of intrinsic bias in models there's been,2260.38,4.56
uh some really interesting studies even,2262.66,4.02
chat GPT with reinforcement learning,2264.94,3.72
with human feedback is still pretty,2266.68,4.439
sexist it's also pretty racist depending,2268.66,3.9
on the kinds of prompts that you use,2271.119,3.661
there's a lot of implicit bias then,2272.56,4.68
there's also um Mesa optimization which,2274.78,4.74
I'm not sure I'm not entirely sure that,2277.24,4.74
Mesa optimization is a problem for,2279.52,5.16
language models but it could be,2281.98,4.68
um so we'll see but we need to be aware,2284.68,3.659
of that and we need to study it and if,2286.66,3.54
it is there we need to address it but,2288.339,4.321
Mesa optimization is like a tiny,2290.2,4.44
component of this whole framework,2292.66,5.699
open source data sets so one of the,2294.64,6.3
things that I mentioned is open source,2298.359,4.98
open source Open Source by by creating,2300.94,4.26
and Publishing open source data sets,2303.339,5.28
that can uh one they're transparent but,2305.2,6.06
two that can foster collaboration and,2308.619,4.261
ultimately one of the things that I hope,2311.26,4.26
to achieve is what I call axiomatic,2312.88,5.219
alignment so axiomatic alignment is what,2315.52,4.8
happens when through conversation,2318.099,4.98
through experimentation through repeated,2320.32,5.64
augmentation of Open Source data sets,2323.079,5.341
practically every data set out there,2325.96,5.76
that AI is trained on intrinsically has,2328.42,5.88
some alignment baked into it and if,2331.72,5.1
every data set or all or if enough data,2334.3,5.94
sets are aligned then you can end up,2336.82,6.18
with a a virtuous cycle or a positive,2340.24,5.46
feedback loop where every subsequent,2343.0,5.339
data set is also more and more aligned,2345.7,5.46
so from a model perspective the,2348.339,5.041
overarching goal is to arrive at a place,2351.16,4.56
of axiomatic alignment,2353.38,4.14
so this will require us to solve,2355.72,4.2
problems around training model,2357.52,5.04
architecture and then finally the data,2359.92,5.1
ecosystem that we build and when I say,2362.56,4.559
we I don't mean just those of us in gato,2365.02,4.74
the gato framework project but everyone,2367.119,4.74
everyone participating this whether,2369.76,4.2
they're academic researchers corporate,2371.859,4.321
government military so on and so forth,2373.96,5.28
now Layer Two autonomous systems I,2376.18,3.96
already talked a little bit about,2379.24,3.66
cognitive architecture we don't need to,2380.14,4.92
um you know beat the dead horse there,2382.9,4.439
but one of the things that we want to,2385.06,4.62
talk about and and publish is an open,2387.339,4.26
source reference architecture that's,2389.68,3.72
really the primary one of the primary,2391.599,3.961
goals here is what are the components,2393.4,3.84
what are the system components that you,2395.56,4.559
need in order to have a fully aligned,2397.24,5.52
and fully autonomous system so this,2400.119,3.901
includes some of these things like,2402.76,3.78
self-evaluation and stability we are,2404.02,5.339
working on how do you how do you design,2406.54,4.92
tasks how do you evaluate past,2409.359,3.661
performance how do you automatically,2411.46,3.78
label data and how do you create modular,2413.02,4.98
design patterns that allow for anyone,2415.24,6.18
and everyone to create their own fully,2418.0,5.82
autonomous systems that are also aligned,2421.42,5.22
to the heuristic imperatives and,2423.82,6.0
therefore should be benevolent so by,2426.64,5.1
getting by having the ultimate goal of,2429.82,3.84
publishing these open source reference,2431.74,3.72
architectures that'll make it really,2433.66,4.02
easy for all corporations out there and,2435.46,3.78
all private individuals and all,2437.68,4.86
governments to adopt these uh these,2439.24,4.859
patterns these software architecture,2442.54,4.62
patterns which again just by providing,2444.099,4.621
that answer and making it as easy as,2447.16,5.3
possible will be a one component in,2448.72,5.82
solving alignment and the control,2452.46,4.24
problem globally,2454.54,5.52
so decentralized networks this is not,2456.7,5.52
just blockchain not just Dows but also,2460.06,5.34
federations so keep that in mind,2462.22,4.28
um,2465.4,4.08
there's two primary components here one,2466.5,5.02
first we just have to figure out how to,2469.48,3.72
do these Technologies because by and,2471.52,3.72
large these are still highly,2473.2,4.32
experimental Technologies,2475.24,3.839
um and I will be the first to admit that,2477.52,3.18
maybe blockchain and DOW is not the,2479.079,4.381
correct way but in principle some kind,2480.7,4.8
of Federated system or decentralized,2483.46,4.68
network is probably the way to go in,2485.5,4.74
order to have some of these things such,2488.14,4.68
as algorithmic consensus when we're in a,2490.24,3.96
world where we have billions upon,2492.82,3.6
billions of autonomous agents all,2494.2,4.62
working on their own we need a way for,2496.42,4.8
them to work with each other and with us,2498.82,6.06
to come up with a consensus mechanisms,2501.22,7.02
that will slow slow the roll basically,2504.88,5.04
so there's a couple components that can,2508.24,3.96
go into that one is trust and reputation,2509.92,6.179
mechanisms so if you have you know some,2512.2,6.659
arbitrary AI agent operating out on the,2516.099,4.26
net on its own,2518.859,4.561
if it is an untrusted agent then maybe,2520.359,4.74
you don't want to give it resources or,2523.42,3.199
you don't want to give it any Credence,2525.099,3.541
that's what I mean by trust and,2526.619,4.0
reputation mechanisms resource control,2528.64,4.8
and allocation is another aspect of,2530.619,4.861
using blockchain or Dao or Federated,2533.44,4.86
Technologies which basically means if an,2535.48,5.16
agent is behaving in a way that is not,2538.3,5.4
aligned if the consensus of all agents,2540.64,4.74
says hey that's a little bit destructive,2543.7,3.84
maybe you shouldn't do it you revoke its,2545.38,4.979
access to computational resources data,2547.54,5.22
that sort of thing which can be a way to,2550.359,6.301
allow and Empower uh autonomous agents,2552.76,5.7
to police each other,2556.66,5.16
and then finally incentivizing alignment,2558.46,5.1
um so one of the things uh that people,2561.82,3.48
are concerned about is instrumental,2563.56,3.84
convergence so instrumental convergence,2565.3,6.72
is the idea that um AI uh no matter what,2567.4,6.78
goals you give it will be incentivized,2572.02,4.92
to pursue basic similar things like,2574.18,5.22
control of power more data that sort of,2576.94,5.639
stuff but so that's that's based on its,2579.4,5.699
in intrinsic motivations right an AI,2582.579,5.401
needs electricity to run so therefore it,2585.099,4.081
will always have some intrinsic,2587.98,3.9
motivation to do that now through these,2589.18,4.32
Network systems whether it's Federated,2591.88,3.9
decentralized however the network,2593.5,5.579
architecture is ultimately designed if,2595.78,6.18
you incentivize their behavior to get,2599.079,4.801
the behavior that you want so that they,2601.96,4.139
can get what they want then that is the,2603.88,5.52
way to go so for instance if you use,2606.099,5.581
resource tokens or cryptocurrency or,2609.4,3.78
whatever to say hey,2611.68,3.72
everything that you do that is aligned,2613.18,4.8
the the the the the the rest of the,2615.4,4.38
network says we agree with that behavior,2617.98,3.66
we agree with that decision we'll give,2619.78,3.6
you a little bit more data or a little,2621.64,4.679
bit more computational horsepower that,2623.38,5.1
sort of stuff so you incentivize the,2626.319,4.5
behavior that you want to see,2628.48,5.22
number four corporate adoption so again,2630.819,4.981
like I said for everyone that's talked,2633.7,3.6
to me about it implementing Heroes to,2635.8,3.779
comparatives ultimately just creates,2637.3,4.799
better Solutions so if the best AI,2639.579,4.981
services and products are aligned,2642.099,4.861
the solution sells itself that's that,2644.56,4.08
can that could literally be the end of,2646.96,4.2
the conversation is that working with,2648.64,3.9
corporations whether it's the tech,2651.16,3.84
Giants providing these services or,2652.54,4.22
everyone else consuming those services,2655.0,5.579
to realize and develop those services so,2656.76,6.4
that all AI services are intrinsically,2660.579,4.74
aligned and of course open AI has done,2663.16,4.26
their best you know they have,2665.319,3.661
um they have their own internal research,2667.42,3.24
one problem though is that they're not,2668.98,3.42
sharing that research,2670.66,3.959
um so their their work on alignment is a,2672.4,3.959
total black box which means nobody else,2674.619,3.061
can,2676.359,3.781
um Can can duplicate it so we need an,2677.68,4.8
open source way so that everyone can,2680.14,4.679
duplicate alignment research and make,2682.48,4.44
sure that all their apis all their AIS,2684.819,4.921
are aligned and then corporations don't,2686.92,4.08
even need to think about it right,2689.74,3.06
because again corporations are like okay,2691.0,3.0
whatever whatever is going to make us,2692.8,3.36
the most money will do that and that's,2694.0,5.579
yeah so if we if we create a corporate,2696.16,6.54
ecosystem if an economic ecosystem in,2699.579,5.221
which the best option Finance actually,2702.7,4.68
is also the most aligned option problem,2704.8,4.74
solved now that's a big if,2707.38,4.5
there's a few other reasons though that,2709.54,4.86
adopting aligned AI services and systems,2711.88,4.5
would be good for corporations one,2714.4,3.48
public relations,2716.38,3.479
you know whatever whatever is popular in,2717.88,4.86
Vogue so for instance like LGBT rights,2719.859,4.98
super popular right now all the rage so,2722.74,4.079
guess what a lot of corporations are,2724.839,4.081
jumping on that bandwagon bandwagon,2726.819,4.081
mentality is good as long as it aligns,2728.92,4.74
on also something that is good employee,2730.9,5.1
satisfaction now obviously I think that,2733.66,4.14
employment conventional employment is,2736.0,2.76
going to be going the way of the,2737.8,2.819
dinosaurs by and large but for the,2738.76,3.78
employees that are there it really feels,2740.619,4.681
good to know that your company as part,2742.54,4.86
of a higher mission to make the world,2745.3,4.62
better for everyone so just gonna throw,2747.4,3.9
that out there and then finally,2749.92,3.179
stakeholder capitalism,2751.3,4.559
stakeholder capitalism is an is a a,2753.099,6.061
paradigm whereby it's not just you you,2755.859,5.281
the corporation and your customers it's,2759.16,3.78
everyone as a stakeholder so that's,2761.14,3.959
employees customer suppliers environment,2762.94,5.04
the rest of society so by adopting,2765.099,4.921
aligned AI that can also bring,2767.98,3.9
corporations in a line with stakeholder,2770.02,4.68
capitalism as that idea continues to,2771.88,4.14
develop,2774.7,3.899
oh this is a long video uh number five,2776.02,4.5
National regulations I already mentioned,2778.599,6.0
GDP GDP growth obviously AI is a gonna,2780.52,7.26
be a huge powerful economic engine for,2784.599,5.401
the foreseeable future so we need to,2787.78,4.44
make sure that as Nations you know try,2790.0,4.5
to maximize their GDP which they are all,2792.22,3.96
incentivized to do so that's fine I'm,2794.5,2.64
not going to tell them that they're,2796.18,1.919
wrong,2797.14,2.28
um I don't think that it's necessarily,2798.099,3.24
the best thing to optimize for but,2799.42,4.14
that's how the world works right now you,2801.339,4.201
can wish in one hand and you know you,2803.56,3.18
know what you can do on the other hand,2805.54,3.48
guess which one fills up,2806.74,4.02
um National Security so this is the,2809.02,4.02
biggest thing right the US's uh chips,2810.76,4.319
act where we you know did the the AI,2813.04,4.319
chips embargo against China right that's,2815.079,5.341
an example of the geopolitical game of,2817.359,4.381
chess that is going to be playing out,2820.42,4.86
for the foreseeable future around Ai and,2821.74,6.96
adversarial uses of AI so by working,2825.28,6.539
with nations in in line in alignment,2828.7,5.639
with their their national interests we,2831.819,5.401
can also work with them to adopt more,2834.339,5.421
aligned AI solicit Solutions and systems,2837.22,5.76
Democratic institutions so uh voter,2839.76,7.059
rights electric transparency Judicial,2842.98,7.08
Systems AI is going to impact every,2846.819,7.321
element every aspect of uh liberal,2850.06,6.6
Democratic societies including the,2854.14,4.38
agencies that the that those governments,2856.66,4.14
run on so by working with them to say,2858.52,5.04
here's how you can Implement AI to both,2860.8,5.46
save money and be a better Society to,2863.56,4.259
strengthen your Democratic institutions,2866.26,3.839
that will benefit everyone,2867.819,4.8
geopolitical influence ditto there's,2870.099,5.881
going to be things about trade for,2872.619,6.24
instance alliances all of those things,2875.98,4.98
are going to be impacted by AI which we,2878.859,4.621
need to study and we need to become the,2880.96,4.26
world experts on so that we can advise,2883.48,4.02
and consult properly and then finally,2885.22,4.139
sustainability which comes down to,2887.5,3.78
environmental challenges in the grand,2889.359,4.021
scheme of things I think that if we,2891.28,4.38
solve these other problems then by,2893.38,4.68
virtue of solving those problems around,2895.66,5.179
consensus we'll probably also figure out,2898.06,5.1
environmental control,2900.839,5.141
layer 6 International treaty I already,2903.16,5.159
mentioned um basically CERN but for AI,2905.98,4.56
so just a really quick recap of the,2908.319,4.621
benefits one membership and governance,2910.54,5.16
where all uh all nations are,2912.94,4.98
stakeholders and so they can join and,2915.7,3.48
make decisions collectively,2917.92,4.38
collaborative research again same exact,2919.18,4.2
thing that we already see with CERN,2922.3,3.5
shared resources and infrastructure,2923.38,4.5
Education and Training so this is,2925.8,3.94
another thing is there's probably going,2927.88,4.199
to be a shortfall of qualified Ai,2929.74,4.619
blockchain and and cognitive Architects,2932.079,4.921
for a while so by working together to,2934.359,4.201
make sure that we train up the people,2937.0,3.96
that we need to solve this problem that,2938.56,3.779
is something that International,2940.96,4.08
cooperation could do a lot for open,2942.339,4.5
science and knowledge sharing again that,2945.04,4.02
has been well established with,2946.839,3.541
um with some of these existing things,2949.06,4.98
International cooperation ditto huh see,2950.38,6.479
above statements and then finally uh,2954.04,4.74
Global consensus I already mentioned uh,2956.859,3.48
pretty much all of these,2958.78,3.42
um academic institutions we've got we've,2960.339,3.78
already got a few professors and,2962.2,4.08
students in the group so we've got a few,2964.119,3.901
lines in you know we've got feelers and,2966.28,4.92
fingers into um into the academic,2968.02,4.5
establishment,2971.2,2.879
um I've actually personally had probably,2972.52,3.18
a hundred different students reach out,2974.079,2.76
to me,2975.7,4.02
um either on patreon Discord or LinkedIn,2976.839,5.101
or Twitter and every time they ask me,2979.72,3.599
like Dave what should I what should I do,2981.94,4.08
and I'm like AI man like it's going that,2983.319,5.04
way if you care about the future like,2986.02,4.2
take a look at like some of the stuff,2988.359,3.96
that I've written and advocate for yours,2990.22,3.66
to comparatives research and they're,2992.319,3.841
like cool that's what I'll do,2993.88,5.64
um so you know because education is the,2996.16,6.9
future as as as many criticisms as I,2999.52,5.22
have of particularly American,3003.06,4.14
institutions universities are here,3004.74,4.079
they're here to stay they're important,3007.2,4.58
stakeholders in this entire conversation,3008.819,5.941
media engagement so this is this has to,3011.78,5.44
do with mainstream media this has to do,3014.76,5.16
with social media uh all of the above,3017.22,4.2
one of the things that we're working on,3019.92,3.24
is we're working on producing materials,3021.42,4.08
to make all this as accessible and,3023.16,4.74
shareable as possible so we're creating,3025.5,3.96
graphical slide decks we're creating,3027.9,4.62
educational materials I've got my videos,3029.46,4.74
um that sort of stuff because the more,3032.52,3.54
information that we get out there the,3034.2,4.2
easier it is to consume the more widely,3036.06,4.08
it's shared the better off we're all,3038.4,3.02
going to be,3040.14,4.26
next up is industry Partnerships again,3041.42,5.199
as I mentioned just a minute ago one of,3044.4,3.419
the things that we're that we're working,3046.619,3.48
on is publishing those open source,3047.819,5.28
standards advocating for startups and,3050.099,4.861
other companies to build and adopt,3053.099,5.161
aligned AI services and Pro products and,3054.96,5.46
just by working with them to say hey we,3058.26,4.559
recognize that your bottom line is the,3060.42,4.139
most important thing to companies let's,3062.819,3.78
make sure that that that that you,3064.559,4.681
implement and deploy these things in a,3066.599,3.72
way that doesn't have unintended,3069.24,3.599
negative consequences and then finally,3070.319,5.401
policy advocacy so this has to do with,3072.839,5.041
back going back every layer which is,3075.72,5.78
working with legislators lawyers,3077.88,5.939
and other groups you know whether it's,3081.5,4.9
think tanks whoever in order to better,3083.819,4.981
understand this stuff so an example of,3086.4,4.26
this is I've got a few meetings coming,3088.8,4.62
up later in May where I'll be meeting,3090.66,5.52
with people to help bring them up to,3093.42,4.919
speed with some of these ideas and help,3096.18,4.26
guide them as to like okay this is,3098.339,4.02
what's happening this is how it works,3100.44,4.08
and here's a here's an approach that we,3102.359,4.681
can take to make sure that it doesn't uh,3104.52,5.22
go uh belly side up,3107.04,4.62
now,3109.74,4.02
um we all have a good story,3111.66,4.98
for understanding this so in Avengers,3113.76,5.099
which I talk about this probably more,3116.64,4.5
than I should near the very end when,3118.859,4.861
Thanos said I am inevitable,3121.14,6.54
that is a fictional representation of,3123.72,7.92
Malik so the the idea is that Thanos was,3127.68,6.24
an Unstoppable destructive force that,3131.64,4.14
nobody wanted he wanted an outcome that,3133.92,4.56
nobody wanted but it seemed inevitable,3135.78,5.22
and he even said I am inevitable,3138.48,3.839
the snap,3141.0,3.78
the idea that there could be a moment in,3142.319,5.641
time that everything goes sideways,3144.78,5.94
everything goes wrong that is what,3147.96,5.399
Singularity or hard takeoff or whatever,3150.72,5.58
could represent the Infinity Stones,3153.359,5.641
think of those as the power of AI as as,3156.3,4.68
we get more and more AI capabilities,3159.0,4.8
it's like we're loading up our Gauntlet,3160.98,4.68
um the sacrifice that various people,3163.8,3.84
make like Tony Stark we have a lot of,3165.66,4.02
hard choices to make including just the,3167.64,3.78
investment that people like me and,3169.68,3.84
everyone in the community are making in,3171.42,5.1
terms of time and energy and the risks,3173.52,5.4
that we're taking in order to say hey we,3176.52,4.26
see this problem coming and we're going,3178.92,3.72
to try and do something about it,3180.78,4.74
in the story of undoing the snap the,3182.64,5.1
idea is that there is always hope that,3185.52,4.68
with the right people the right team and,3187.74,5.22
the right effort you can either avert,3190.2,6.119
disaster or undo disaster now obviously,3192.96,4.74
a lot of doomers say we don't get a,3196.319,3.721
do-over we don't get we we get one shot,3197.7,3.84
at this I don't know whether or not,3200.04,4.68
that's true but the idea is that we are,3201.54,6.6
barreling towards our end game right we,3204.72,5.82
have we must have the right people the,3208.14,3.9
right team,3210.54,4.98
um in a concerted Global effort in order,3212.04,6.18
to solve this problem safely and not,3215.52,4.2
just not just solve it like,3218.22,3.06
satisfactorily,3219.72,3.48
because again there's many possible,3221.28,3.6
outcomes I don't want a dystopian,3223.2,4.02
outcome any more than I want Extinction,3224.88,6.179
or collapse there's one possible outcome,3227.22,6.839
that is win-win that is Utopia and we,3231.059,4.621
got to thread that needle and we'll be,3234.059,3.361
working as hard as we can to make sure,3235.68,4.679
that that happens so this is The,3237.42,6.24
Avengers Assemble moment if you want to,3240.359,6.361
join this effort the link to apply is in,3243.66,5.34
the description of this video if you,3246.72,4.08
don't want to participate directly you,3249.0,3.96
can also support me on patreon I'm also,3250.8,3.66
happy to support you if you support me,3252.96,4.32
on patreon I have a private patreon,3254.46,5.159
Discord where I answer questions we,3257.28,4.799
actually just started having office,3259.619,5.881
hours Town Hall Days where all my,3262.079,4.74
patreon supporters can interact with,3265.5,4.14
each other and with me in real time if,3266.819,4.141
you've been laid off and you've got,3269.64,3.24
technical skills or political skills or,3270.96,3.84
communication skills or whatever,3272.88,3.9
maybe now's the time for you to join the,3274.8,4.74
effort if you're scared one of the one,3276.78,4.38
of the most powerful things that people,3279.54,4.14
have told me in in in the heuristics,3281.16,4.56
imperatives Discord is that for the,3283.68,4.56
first time in since forever they feel,3285.72,5.099
empowered to make a difference in the,3288.24,4.26
outcome that we're heading towards and,3290.819,3.421
if you're optimistic like me we also,3292.5,6.0
need that so Avengers assembled thank,3294.24,6.44
you,3298.5,2.18