davidshapiro_youtube_transcripts / Sparse Priming Representations the secret ingredient to scalable AGI memories_transcript.csv
text,start,duration
hey everybody David Shapiro here with a,0.84,4.02
video so,3.36,3.539
um one I've been scarce and I apologize,4.86,4.319
I am feeling better,6.899,4.32
um recovering from burnout although I,9.179,4.021
still need like some days just doing,11.219,3.361
nothing,13.2,3.419
um but anyways,14.58,5.22
um so y'all are really clamoring for me,16.619,7.021
to continue the um the Q&A chat but not,19.8,5.1
that one,23.64,3.12
um and then the salience and,24.9,3.6
anticipating,26.76,4.439
um you know and AutoMuse and all that,28.5,5.88
fun stuff so all these chat Bots,31.199,6.121
um I will continue working on them,34.38,6.499
but I kind of got to a stopping point,37.32,7.739
where uh basically the problem is memory,40.879,6.7
right so whether you're looking at,45.059,5.581
hundreds of scientific articles or an,47.579,6.48
arbitrarily long uh chat conversation or,50.64,5.46
an entire novel,54.059,3.901
um semantic search is just not good,56.1,4.26
enough breaking it up and chunking and,57.96,4.14
and stuff so we need a more,60.36,5.34
sophisticated a more organized uh memory,62.1,6.72
system for AI for autonomous AI,65.7,5.76
and so this is what I proposed,68.82,5.04
um and so basically,71.46,5.159
there are two,73.86,4.259
primary kinds of memory in the human,76.619,3.981
brain there's episodic memory which is,78.119,5.04
chronologically linear so that is the,80.6,4.54
lived experience the lived narrative that,83.159,5.1
is a linear account of the,85.14,5.339
sensations you know your external senses,88.259,4.801
and your internal thoughts,90.479,3.901
um those are the two primary things that,93.06,3.84
you've got sensations and thoughts and then in,94.38,4.199
thoughts are,96.9,3.84
um decisions uh memories that have been,98.579,4.321
recalled so on and so forth but you,100.74,3.72
forget most of this most of this is,102.9,3.359
noise right you don't need to remember,104.46,3.54
that you remembered something at all,106.259,3.661
times you just have like oh I'm thinking,108.0,3.36
about you know that time I went to the,109.92,4.5
beach right and then you know anyways,111.36,4.92
so you don't necessarily need to record,114.42,4.379
all your thoughts but you definitely,116.28,4.68
need to record uh to a certain extent,118.799,5.221
what's coming in and then you you slot,120.96,5.82
that into some kind of framework,124.02,4.5
um so,126.78,4.86
this is going to be the underpinning uh,128.52,5.7
work and I have written in all three of,131.64,4.98
my books so far that like I was putting,134.22,5.4
off memory systems because it is a super,136.62,5.16
non-trivial problem and it turns out,139.62,4.56
it's now the problem that like we all,141.78,4.92
have to solve so I'm working with,144.18,5.22
um a few people uh on various cognitive,146.7,3.96
architectures and we're actually going,149.4,2.76
to have some demos coming up in the,150.66,2.76
coming weeks,152.16,3.24
um because fortunately I'm no longer the,153.42,3.3
only person working on cognitive,155.4,3.059
architectures yay,156.72,4.14
um the idea is catching on,158.459,5.161
um so with that being said though,160.86,4.14
um,163.62,3.839
this is a very difficult,165.0,5.76
problem and so the idea is Okay so we've,167.459,5.941
got raw data coming in right it's it's,170.76,4.68
unstructured the only well it's it's,173.4,4.02
semi-structured the only structure is,175.44,4.799
you know what time series it has but,177.42,4.319
other than that you don't know,180.239,3.0
what,181.739,3.181
um what the topic is going to be and the,183.239,3.061
topics are going to change right and,184.92,3.06
there might be gaps in the time,186.3,5.04
so what we do is we take a chunk of logs,187.98,6.0
an arbitrary chunk of logs that,191.34,4.2
are temporally bounded,193.98,4.74
and you get an executive summary of that,195.54,5.94
information and in this chunk so this is,198.72,4.32
like going to be another JSON file or,201.48,3.479
whatever you have pointers back to the,203.04,3.6
original log so that you can reconstruct,204.959,3.961
the memory because using sparse pointers,206.64,4.5
is actually a big thing that human,208.92,4.02
brains do,211.14,4.26
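A minimal sketch of what one of these roll-up records might look like as a JSON file. The field names (log_ids, summary, time_start, time_end) and the summarize helper are illustrative assumptions, not anything shown in the video; the point is just the executive summary plus sparse pointers back to the original logs:

```python
import json
import time

def make_rollup(log_chunk, summarize):
    """Build a roll-up record from a temporally bounded chunk of raw logs.

    log_chunk: list of dicts with 'id', 'time', and 'text' keys (assumed schema).
    summarize: any callable that turns the combined text into an executive
    summary, e.g. an LLM call. Both interfaces are assumptions for this sketch.
    """
    return {
        "type": "rollup",
        "time_start": min(log["time"] for log in log_chunk),
        "time_end": max(log["time"] for log in log_chunk),
        # sparse pointers back to the original logs so the full memory
        # can always be reconstructed later
        "log_ids": [log["id"] for log in log_chunk],
        "summary": summarize("\n".join(log["text"] for log in log_chunk)),
        "created": time.time(),
    }

# Hypothetical usage:
# rollup = make_rollup(chunk, summarize=my_llm_summarizer)
# with open(f"rollup_{int(time.time())}.json", "w") as f:
#     json.dump(rollup, f, indent=2)
```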
um and so then this is basically a,212.94,4.439
very sparse summary and I'll show you,215.4,4.32
what I mean by sparse summary and then,217.379,4.801
finally as you accumulate more of these,219.72,4.799
summaries you eventually merge these,222.18,4.979
into a knowledge graph or cluster them,224.519,6.8
and then use that clustering to make uh,227.159,6.901
to make like Wiki articles or KB,231.319,5.621
articles and give me just a second,234.06,6.48
sorry I needed my coffee okay so anyways,236.94,7.859
um yeah so this is the scheme and I,240.54,6.479
spent a long time talking through this,244.799,5.701
with ChatGPT (GPT-4) so you can see this is a,247.019,6.481
whoops this is a come on,250.5,5.159
Why is the,253.5,4.5
okay it doesn't want to,255.659,4.381
scroll anyways you can see it is a very,258.0,4.019
very long conversation I talked through,260.04,5.099
code I talked through the math I talked,262.019,4.861
through the concept,265.139,4.861
and so anyways at the very end of it I,266.88,4.74
said can you write an executive summary,270.0,3.06
of the problem we're trying to solve,271.62,4.859
here and so this is just taking a step,273.06,4.859
back for a second,276.479,5.761
I am using GPT-4 to help solve the,277.919,7.141
problems of AGI artificial general,282.24,4.8
intelligence or what I call autonomous,285.06,4.38
cognitive entities,287.04,4.62
so the problem at hand involves,289.44,4.259
designing an efficient memory system for,291.66,3.96
an autonomous cognitive entity or an ACE,293.699,4.081
that can manage a large and constantly,295.62,4.019
growing Corpus of text Data generated,297.78,4.199
through thoughts inputs and outputs this,299.639,3.78
data can accumulate to hundreds of,301.979,3.241
gigabytes per year potentially reaching,303.419,4.5
millions or billions of individual logs,305.22,5.34
the primary challenge is to organize and,307.919,4.321
compress these logs into a manageable,310.56,3.6
set of knowledge base (KB) articles while,312.24,3.36
retaining as much meaningful information,314.16,3.12
as possible,315.6,3.96
this is such a concise summary I could,317.28,3.9
not have done better,319.56,4.26
our proposed... see it's already,321.18,4.739
the collective our because it,323.82,4.439
understands that we're collaborating our,325.919,4.261
proposed solution involves a multi-level,328.259,3.78
approach with the first level being the,330.18,3.78
consolidation of raw logs into roll-up,332.039,5.361
summaries so that's this,333.96,3.44
um these Roll-Ups serve as compressed,337.5,3.84
representations of the original logs,339.6,3.439
reducing the total number of Records,341.34,4.5
then we employ a gating or threshold,343.039,4.66
function to determine whether a roll-up,345.84,3.72
is semantically similar enough to an,347.699,4.741
existing KB article or if it,349.56,5.579
should be added as a new article this,352.44,4.259
approach allows the KB to adapt,355.139,3.961
organically to the evolving data while,356.699,4.141
maintaining scalability,359.1,3.539
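A minimal sketch of the gating or threshold function described in that summary, assuming roll-ups and KB articles have already been embedded into vectors. The embedding model, the KB article schema (an 'embedding' key), and the 0.80 threshold are all placeholder assumptions to be tuned:

```python
import numpy as np

def cosine_similarity(a, b):
    # standard cosine similarity between two embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def gate_rollup(rollup_vec, kb_articles, threshold=0.80):
    """Decide whether a new roll-up belongs to an existing KB article.

    kb_articles: list of dicts with an 'embedding' key (assumed schema).
    Returns the index of the best-matching article if it clears the
    similarity threshold, otherwise None (meaning: create a new article).
    """
    if not kb_articles:
        return None
    scores = [cosine_similarity(rollup_vec, art["embedding"]) for art in kb_articles]
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None
```

Processing each roll-up individually like this is what gives the linear, incremental growth he reads about later in the pros and cons.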
the key aspects to consider in this,360.84,3.96
solution are the choice of similarity,362.639,3.721
threshold and semantic similarity,364.8,3.179
measure as well as the balance between,366.36,3.559
number of KB articles and their quality,367.979,4.381
periodic evaluation and fine-tuning of,369.919,3.881
the system will help ensure its,372.36,3.72
continued Effectiveness as data grows,373.8,6.3
okay so this is a very very condensed,376.08,7.339
text summary of this system,380.1,5.46
and then,383.419,5.321
so I mentioned sparsity right so I've,385.56,5.579
been reading this book,388.74,6.299
Behave so as always neuroscience and,391.139,7.801
life inspires what I'm working on and,395.039,6.121
one of the experiments or,398.94,3.66
actually several of the experiments that he,401.16,3.479
talks about in this book has to do with,402.6,3.719
linguistic priming,404.639,3.961
and so an example of linguistic priming,406.319,5.88
in humans in Psychology is that if you,408.6,6.719
use just a few words,412.199,5.761
um kind of placed arbitrarily it will,415.319,5.341
really change someone's cognition so one,417.96,6.239
example was they did a test with Asian,420.66,6.36
women and if you remind the Asian women,424.199,4.801
of the stereotype that Asians are better,427.02,3.959
at math before giving them a math test,429.0,4.74
they do better if you remind them of the,430.979,5.34
stereotype that women are bad at,433.74,4.92
math then they do worse and then of,436.319,3.541
course if you just give them neutral,438.66,3.12
priming they kind of you know perform in,439.86,3.959
the middle and there's plenty of,441.78,4.859
examples of priming um Derren,443.819,5.1
Brown the British dude the mentalist,446.639,6.0
he used a lot of priming to get people,448.919,5.881
to like do all kinds of cool stuff this,452.639,3.9
was back in the 90s,454.8,3.72
um but like one one experiment that he,456.539,4.44
did was he had a bunch of like marketing,458.52,4.619
guys and he put them in a car and drove,460.979,4.381
them around town and he drove them by,463.139,4.62
like a specific set of billboards,465.36,4.98
and so they were primed with images and,467.759,4.981
words and then he asked them to solve a,470.34,5.04
particular marketing problem and he had,472.74,4.56
almost exactly predicted what they were,475.38,4.2
going to produce based on how they had,477.3,6.179
been primed now I noticed that large,479.58,6.78
language models can also be primed and,483.479,5.041
so what I mean by primed is that by just,486.36,4.02
sprinkling in a few of the correct words,488.52,4.28
and terms it will then be able to,490.38,5.52
reproduce or reconstruct whatever it is,492.8,4.6
that you're talking about so what I want,495.9,3.419
to do is I want to show you that because,497.4,5.78
this this really high density,499.319,6.241
way of compressing things is what I call,503.18,4.54
sparse priming representations,505.56,5.18
is going to be super important,507.72,6.36
for managing uh artificial cognitive,510.74,5.56
entities or AGI memories because here's,514.08,4.259
the thing large language models already,516.3,4.08
have a tremendous amount of foundational,518.339,5.341
knowledge so all you need to do is prime,520.38,5.579
it with just a few rules and statements,523.68,3.44
and assertions,525.959,5.461
that will allow it to um just basically,527.12,6.279
kind of remember or reconstruct the,531.42,3.66
concept so what I'm going to do is I'm,533.399,3.0
going to take this,535.08,4.62
and put it into a new chat and we're,536.399,5.641
going to go to GPT-4,539.7,6.02
and I'll say the following is a sparse,542.04,6.299
priming representation,545.72,6.28
of a concept or topic,548.339,5.821
um oh wow they reduced it from 100,552.0,6.42
messages to 50. I guess they're busy uh,554.16,6.0
unsurprising,558.42,4.26
um please reconstruct,560.16,7.82
the topic or concept in detail,562.68,5.3
and so here's what we'll do,568.62,3.74
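A minimal sketch of the demo he is about to run, assuming the OpenAI Python client (any chat-capable LLM would do). The SPR text below is paraphrased from the video's own executive summary and the exact prompt wording is illustrative, not a copy of his chat:

```python
from openai import OpenAI  # assumes the OpenAI Python client is installed and configured

client = OpenAI()

# A handful of statements and assertions standing in for the sparse priming
# representation he pastes into the new chat (content paraphrased from the video).
SPR = """Sparse Priming Representation (SPR): concise statements that let a model
with broad foundation knowledge reconstruct a full concept.
- An ACE accumulates a huge corpus of raw logs: thoughts, inputs, and outputs.
- Logs are consolidated into temporally bounded roll-up summaries with pointers back.
- A gating/threshold function merges roll-ups into existing KB articles or creates new ones.
- Periodic re-indexing rebuilds the KB from clustered chunks."""

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "The following is a sparse priming representation of a concept "
                   "or topic. Please reconstruct the topic or concept in detail.\n\n" + SPR,
    }],
)
print(resp.choices[0].message.content)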
so with just a handful of statements and,573.0,4.32
assertions,576.3,4.14
I will show you that GPT-4,577.32,6.78
in the form of ChatGPT is highly,580.44,7.14
capable of reconstituting this very,584.1,6.84
complex topic just by virtue of the fact,587.58,5.52
that it um it already has a tremendous,590.94,4.019
amount of background knowledge and,593.1,4.82
processing capability,594.959,2.961
um okay,598.019,4.561
so there we go so the autonomous uh,599.88,4.079
cognitive entity is an advanced,602.58,2.819
artificial intelligence system designed,603.959,4.081
to yep okay there you go,605.399,4.56
um,608.04,4.799
so it's kind of reconstructing,609.959,4.861
this multi-level approach so what,612.839,3.481
it's doing here is it's kind of,614.82,4.56
restating uh everything,616.32,5.4
um but what you'll see is that it will,619.38,4.019
be able to confabulate and kind of fill,621.72,4.98
in the blanks and so by having a sparse,623.399,4.921
representation,626.7,3.36
it kind of guides how it's going to,628.32,4.32
confabulate and this can be used for all,630.06,5.04
kinds of tasks right so some of my,632.64,4.08
patreon supporters I'm not going to give,635.1,3.0
anything away because I respect my,636.72,3.66
patreon supporters privacy but they ask,638.1,5.34
me like how do I represent X Y or Z and,640.38,4.62
what I'm going to say is this is a way,643.44,3.839
to represent a lot of stuff,645.0,4.38
um whatever your domain of,647.279,5.701
expertise is you can ask it to do what I,649.38,5.699
did in there which is say just give me a,652.98,4.02
short list of you know statements,655.079,4.26
assertions explanations such that a,657.0,5.399
subject matter expert could,659.339,5.641
reconstitute it,662.399,3.601
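One way to phrase that compression-side request, as a sketch; the exact wording is illustrative rather than the prompt he actually used:

```python
# Hypothetical prompt for generating an SPR from a long conversation or document.
SPR_COMPRESSION_PROMPT = (
    "Distill the material above into a Sparse Priming Representation: "
    "a short list of statements, assertions, and explanations such that a "
    "subject matter expert (or a large language model) could fully "
    "reconstitute the concept from them alone."
)
```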
um,664.98,3.78
there we go and so here it's,666.0,6.0
figuring this out as it goes periodic,668.76,4.98
evaluation is necessary for continued,672.0,4.32
efficiency this may involve adjusting,673.74,4.5
the similarity threshold refining,676.32,4.259
semantic similarity measure modifying,678.24,3.719
other aspects,680.579,3.481
sparse priming representation is a,681.959,3.541
technique used in conjunction to,684.06,3.24
facilitate knowledge transfer and,685.5,4.14
reconstruction in SPR concise statements,687.3,4.979
are generated to summarize yeah so it,689.64,5.52
even understands just by virtue of,692.279,5.041
saying this is an SPR and a brief,695.16,3.54
definition it understands the,697.32,3.18
implications,698.7,5.4
um there you go so now that it has,700.5,7.44
reconstituted it we can say okay,704.1,6.12
um great thanks,707.94,4.56
um can you discuss,710.22,6.78
how we could uh go about implementing,712.5,7.68
this for a chat bot,717.0,6.959
and so again because,720.18,3.779
um because GPT-4 already,724.2,7.02
knows a whole bunch of coding and data,728.399,5.221
and stuff it's going to be able to talk,731.22,4.64
through the process,733.62,6.06
so this is going to,735.86,7.06
okay I don't think it fully,739.68,5.099
I gave it very simple instructions let's,742.92,3.84
see where it goes because often what,744.779,4.141
happens is and someone pointed,746.76,4.319
this out to me is that it'll kind of,748.92,4.02
talk through the problem and then give,751.079,3.961
you the answer so I learned the hard way,752.94,3.959
just be patient what it's basically,755.04,4.859
doing is it's talking itself through,756.899,5.161
um the problem and the solution,759.899,4.68
so anyways excuse me I don't know why,762.06,4.2
I'm so hoarse,764.579,4.44
um but yeah so this is this is what I'm,766.26,4.5
working on right now and this is going,769.019,4.38
to have implications for all,770.76,5.1
chatbots but also all autonomous AI,773.399,4.44
because again,775.86,3.539
um you know this is this is like the,777.839,3.661
first two minutes of conversation but,779.399,3.361
what happens when you have a million,781.5,2.459
logs what happens when you have a,782.76,3.66
billion logs so one thing that I suspect,783.959,5.461
will happen is,786.42,4.979
um the number of whoops,789.42,5.039
nah come back no,791.399,6.661
um I suspect that the number of logs,794.459,6.241
will go up geometrically,798.06,7.5
but what I also suspect is that the,800.7,8.04
number of KB articles will,805.56,7.2
actually go up and approach an asymptote,808.74,7.219
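One way to picture the two curves he is sketching on screen; the numbers below are purely illustrative assumptions, not a fit to any data:

```python
import numpy as np

t = np.arange(0, 365)              # days of the ACE's lifetime (illustrative)
logs = 100 * 1.02 ** t             # raw log count grows roughly geometrically
kb = 800 * (1 - np.exp(-t / 60))   # KB article count approaches an asymptote (~800 here)
# plotting logs and kb against t gives the divergence he describes:
# logs explode while the internal wiki levels off once the ACE "knows you"
```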
how do you get it to stop,812.76,3.199
there you go so I think I think that,816.959,3.421
this is kind of how it'll look where,819.06,3.66
like when your ACE is new,820.38,4.86
when it's young it'll be creating a,822.72,5.22
bunch of new KB articles uh very quickly,825.24,4.92
but then over time the number of KB,827.94,3.899
articles will taper off because say for,830.16,3.72
instance there's only a finite amount of,831.839,4.321
information to learn about you and then,833.88,4.92
there will be a very slow trickle as,836.16,4.76
your life progresses right,838.8,5.46
and we can also exclude KB articles,840.92,5.919
about basic world knowledge right all,844.26,5.28
your ACE needs is KB articles,846.839,5.221
about truly new novel and unique,849.54,4.979
information it doesn't need to record a,852.06,4.38
world model the world model is baked,854.519,5.461
into GPT-4 and future models now one,856.44,6.6
other thing was because this is kind of,859.98,6.24
incrementally adding the KB articles,863.04,4.5
um let's see what it came up with okay,866.22,3.48
so talk through the problem,867.54,5.039
um one thing is that I asked it for the,869.7,5.46
pros and cons so right here,872.579,5.221
uh using a gating or and this is this is,875.16,4.679
how sophisticated it is,877.8,3.599
um using a gating or threshold function,879.839,3.721
to compare Roll-Ups against existing KBs,881.399,4.021
can be a viable alternative to,883.56,3.839
clustering so basically what we were,885.42,3.96
exploring was what if we use a,887.399,4.921
clustering algorithm to,889.38,4.86
um figure out the chunks but then I was,892.32,3.48
like okay but we're not gonna,894.24,4.62
regenerate the uh the KB articles,895.8,5.099
every single time because that's going,898.86,4.08
to be prohibitively expensive so what if,900.899,5.341
we treat it more incrementally,902.94,5.04
um let's see this approach involves,906.24,3.3
comparing semantic similarity between a,907.98,3.06
new roll-up and existing KB articles if,909.54,2.52
it doesn't meet a predetermined,911.04,2.94
threshold it becomes a new article okay so the pros it's simple,912.06,4.019
this approach is conceptually simple and,913.98,3.539
can be easier to implement compared to,916.079,4.021
clustering algorithms yes scalability as,917.519,4.26
new Roll-Ups are processed individually,920.1,3.78
the computational complexity of updating,921.779,4.381
KB articles grows linearly with the,923.88,3.42
number of Roll-Ups making it more,926.16,2.22
scalable,927.3,3.539
Dynamic growth the number of KB articles,928.38,3.84
can grow organically with the addition,930.839,4.68
of new rollups and then the cons it very,932.22,5.82
rightly identifies sub-optimal,935.519,4.921
organization because we're not using,938.04,5.22
Global clustering that's fine redundancy,940.44,5.16
there's a risk of creating similar KB,943.26,3.9
articles,945.6,3.419
um depending on the semantic similarity,947.16,4.32
and then parameter sensitivity so on and,949.019,5.521
so forth now that being said there is a,951.48,5.34
final step that I was going,954.54,4.68
to talk about which is every now and,956.82,5.4
then we should do a re-indexing event,959.22,5.64
and so basically what that says is when,962.22,4.5
your ACE is offline during,964.86,4.2
the dream sequence right so real-time,966.72,4.5
learning it can update the KB articles,969.06,4.8
in real time but then during the dream sequence,971.22,5.479
it will delete all the KB articles,973.86,5.339
cluster the chunks based on semantic,976.699,5.08
similarity and then based on those,979.199,5.281
chunks write a whole new set of KB,981.779,3.841
articles,984.48,3.0
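A minimal sketch of that offline re-indexing event, assuming chunk summaries can be embedded and an LLM call can rewrite each cluster into a fresh article. The clustering choice (KMeans), the number of articles, and the embed/write_article interfaces are all assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans  # clustering method is an assumption; any algorithm works

def reindex_kb(chunks, embed, write_article, n_articles=50):
    """Offline 'dream sequence' re-indexing sketch.

    chunks: list of roll-up summary strings.
    embed: callable mapping text -> vector (assumed interface).
    write_article: callable (e.g. an LLM call) that turns a cluster of chunks
    into a brand-new KB article (assumed interface).
    The old KB is discarded and rebuilt from globally clustered chunks.
    """
    X = np.array([embed(c) for c in chunks])
    labels = KMeans(n_clusters=n_articles, random_state=0).fit_predict(X)
    new_kb = []
    for label in sorted(set(labels)):
        members = [i for i, lab in enumerate(labels) if lab == label]
        article = write_article([chunks[i] for i in members])
        # keep pointers back to the chunks (and through them the original logs)
        new_kb.append({"text": article, "chunk_ids": members})
    return new_kb
```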
and so every now and then your,985.62,3.959
autonomous cognitive entity is going to,987.48,5.7
update its entire internal Wiki and then,989.579,6.361
these internal wikis are going to be the,993.18,6.3
primary source of information for your,995.94,5.579
cognitive entity,999.48,5.159
and so instead of searching millions of,1001.519,4.801
logs you're going to be searching,1004.639,4.44
hundreds or maybe a couple thousand KB,1006.32,5.579
articles which is a much more tractable,1009.079,4.38
problem,1011.899,3.901
um to find the correct thing and also,1013.459,3.781
they can be cross-linked to each other,1015.8,3.06
right because these KB articles these,1017.24,2.94
wikis,1018.86,3.24
um can be nodes in a knowledge graph,1020.18,4.08
which means it's like so my fiance was,1022.1,4.5
like okay so I was explaining it to her,1024.26,4.62
and she's like so what if,1026.6,6.3
it has a um an article on me and an,1028.88,6.539
article on her would it link the two of,1032.9,4.439
us and say that like we're engaged and,1035.419,3.561
you know our relationship has been X,1037.339,3.901
long and I'm like yes we could probably,1038.98,5.74
do that it might also topically,1041.24,5.099
um so in terms of the kinds of topics,1044.72,4.079
here's another important thing in terms,1046.339,4.441
of kinds of topics we're probably going,1048.799,4.861
to have it focus on people,1050.78,5.1
events,1053.66,5.399
um things like objects,1055.88,6.0
um as well as Concepts so a concept,1059.059,4.921
could be like the concept of the,1061.88,4.44
autonomous cognitive entity so people,1063.98,7.199
events things and Concepts and included,1066.32,7.32
in things are like places right so like,1071.179,6.781
the year 1080 the place Paris France,1073.64,8.52
right so those are all viable nodes for,1077.96,6.12
a Knowledge Graph so that's that's kind,1082.16,3.78
of where we're at,1084.08,3.719
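A minimal sketch of those typed nodes (people, events, things including places, concepts) as a knowledge graph; the library choice (networkx), the node names, and the relationship labels are illustrative, loosely based on the fiancée example from the video:

```python
import networkx as nx  # graph library choice is an assumption

g = nx.Graph()

# KB articles become typed nodes: people / events / things (incl. places) / concepts
g.add_node("David", kind="person")
g.add_node("Fiancee", kind="person")
g.add_node("Paris France", kind="thing")                    # places fall under things
g.add_node("Autonomous Cognitive Entity", kind="concept")

# cross-links between articles carry the relationship (labels are hypothetical)
g.add_edge("David", "Fiancee", relation="engaged to")
g.add_edge("David", "Autonomous Cognitive Entity", relation="works on")

# e.g. how the ACE could connect an article about her to a concept article
print(nx.shortest_path(g, "Fiancee", "Autonomous Cognitive Entity"))
```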
um yeah I think that's all I'm going to,1085.94,4.02
do today because like this is a lot and,1087.799,3.841
you can see that this conversation was,1089.96,3.48
very long,1091.64,4.5
um and uh but yeah so let me know what,1093.44,5.28
you think in the comments we are,1096.14,4.8
continuing to work,1098.72,4.199
um I had a few other things that I was,1100.94,4.14
going to say but I forgot them this is,1102.919,3.481
the most important thing and this is,1105.08,2.76
this is the hardest problem I'm working,1106.4,4.44
on and once I unlock this it's going to,1107.84,5.1
unlock a lot more work because think,1110.84,4.32
about what if these,1112.94,4.38
logs instead of like our conversation,1115.16,4.379
what if these logs are scientific papers,1117.32,4.979
or what if these logs are scenes in a,1119.539,5.461
book right pretty much everything can be,1122.299,6.0
represented this way I think and then,1125.0,4.5
once you have these higher order,1128.299,3.601
abstractions and all of them point back,1129.5,4.08
so here's another really important thing,1131.9,3.899
that I forgot to mention is that there's,1133.58,4.4
metadata attached with each of these,1135.799,4.681
entities that points back to the,1137.98,4.059
original so you can still,1140.48,4.439
reconstruct the original information so,1142.039,4.441
if you have like you know a topical,1144.919,3.781
article here it'll point to all the,1146.48,5.28
chunks that were in that cluster that um,1148.7,5.04
that helped create it and then each of,1151.76,3.36
those chunks will point back to the,1153.74,3.12
original logs so you have kind of a,1155.12,4.08
pyramid shape,1156.86,5.76
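A tiny sketch of walking that pyramid back down through the metadata pointers, reusing the hypothetical schemas from the earlier sketches (article["chunk_ids"] pointing at roll-ups, and each roll-up's "log_ids" pointing at raw logs):

```python
def reconstruct_sources(article, rollups, logs):
    """Walk the pyramid back down: KB article -> roll-up chunks -> raw logs.

    Assumes the illustrative schemas used above: rollups and logs are dicts
    keyed by id, articles carry 'chunk_ids', and roll-ups carry 'log_ids'.
    """
    chunks = [rollups[i] for i in article["chunk_ids"]]
    log_ids = [lid for chunk in chunks for lid in chunk["log_ids"]]
    return [logs[lid] for lid in log_ids]
```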
um yeah so that's what I'm working on uh,1159.2,4.979
that's it I'll call it a day thanks for,1162.62,3.919
watching,1164.179,2.36