Datasets:
Languages:
English
Multilinguality:
monolingual
Size Categories:
n<1K
Language Creators:
found
Source Datasets:
original
Tags:
karpathy,whisper,openai
WEBVTT | |
00:00.000 --> 00:02.360 | |
The following is a conversation with Jeff Hawkins. | |
00:02.360 --> 00:04.140 | |
He's the founder of the Redwood Center | |
00:04.140 --> 00:06.560 | |
for Theoretical Neuroscience in 2002 | |
00:06.560 --> 00:08.980 | |
and Numenta in 2005. | |
00:08.980 --> 00:11.920 | |
In his 2004 book titled On Intelligence | |
00:11.920 --> 00:13.880 | |
and in the research before and after, | |
00:13.880 --> 00:16.200 | |
he and his team have worked to reverse engineer | |
00:16.200 --> 00:19.160 | |
the neocortex and propose artificial intelligence | |
00:19.160 --> 00:21.360 | |
architectures, approaches, and ideas | |
00:21.360 --> 00:23.640 | |
that are inspired by the human brain. | |
00:23.640 --> 00:25.960 | |
These ideas include hierarchical temporal memory, | |
00:25.960 --> 00:30.080 | |
HTM, from 2004, and new work, the Thousand Brains Theory | |
00:30.080 --> 00:33.800 | |
of Intelligence from 2017, 18, and 19. | |
00:33.800 --> 00:36.120 | |
Jeff's ideas have been an inspiration | |
00:36.120 --> 00:38.960 | |
to many who have looked for progress beyond the current | |
00:38.960 --> 00:41.760 | |
machine learning approaches, but they have also received | |
00:41.760 --> 00:44.680 | |
criticism for lacking a body of empirical evidence | |
00:44.680 --> 00:46.240 | |
supporting the models. | |
00:46.240 --> 00:49.040 | |
This is always a challenge when seeking more than small | |
00:49.040 --> 00:51.440 | |
incremental steps forward in AI. | |
00:51.440 --> 00:54.120 | |
Jeff is a brilliant mind and many of the ideas | |
00:54.120 --> 00:56.520 | |
he has developed and aggregated from neuroscience | |
00:56.520 --> 00:59.160 | |
are worth understanding and thinking about. | |
00:59.160 --> 01:00.960 | |
There are limits to deep learning | |
01:00.960 --> 01:02.960 | |
as it is currently defined. | |
01:02.960 --> 01:05.800 | |
Forward progress in AI is shrouded in mystery. | |
01:05.800 --> 01:07.800 | |
My hope is that conversations like this | |
01:07.800 --> 01:11.480 | |
can help provide an inspiring spark for new ideas. | |
01:11.480 --> 01:14.060 | |
This is the Artificial Intelligence Podcast. | |
01:14.060 --> 01:16.200 | |
If you enjoy it, subscribe on YouTube, | |
01:16.200 --> 01:18.680 | |
iTunes, or simply connect with me on Twitter | |
01:18.680 --> 01:21.560 | |
at Lex Fridman, spelled F R I D. | |
01:21.560 --> 01:25.360 | |
And now here's my conversation with Jeff Hawkins. | |
01:26.800 --> 01:29.880 | |
Are you more interested in understanding the human brain | |
01:29.880 --> 01:32.040 | |
or in creating artificial systems | |
01:32.040 --> 01:34.680 | |
that have many of the same qualities, | |
01:34.680 --> 01:38.600 | |
but don't necessarily require that you actually understand | |
01:38.600 --> 01:41.520 | |
the underpinning workings of our mind? | |
01:41.520 --> 01:44.020 | |
So there's a clear answer to that question. | |
01:44.020 --> 01:46.800 | |
My primary interest is understanding the human brain. | |
01:46.800 --> 01:51.520 | |
No question about it, but I also firmly believe | |
01:51.520 --> 01:53.880 | |
that we will not be able to create | |
01:53.880 --> 01:55.040 | |
fully intelligent machines | |
01:55.040 --> 01:57.280 | |
until we understand how the human brain works. | |
01:57.280 --> 02:00.120 | |
So I don't see those as separate problems. | |
02:00.120 --> 02:01.720 | |
I think there's limits to what can be done | |
02:01.720 --> 02:02.640 | |
with machine intelligence | |
02:02.640 --> 02:04.040 | |
if you don't understand the principles | |
02:04.040 --> 02:05.680 | |
by which the brain works. | |
02:05.680 --> 02:07.880 | |
And so I actually believe that studying the brain | |
02:07.880 --> 02:11.960 | |
is actually the fastest way to get to machine intelligence. | |
02:11.960 --> 02:14.640 | |
And within that, let me ask the impossible question. | |
02:14.640 --> 02:17.160 | |
How do you not define, but at least think | |
02:17.160 --> 02:19.400 | |
about what it means to be intelligent? | |
02:19.400 --> 02:22.240 | |
So I didn't try to answer that question first. | |
02:22.240 --> 02:24.480 | |
We said, let's just talk about how the brain works. | |
02:24.480 --> 02:26.680 | |
And let's figure out how certain parts of the brain, | |
02:26.680 --> 02:29.880 | |
mostly the neocortex, but some other parts too, | |
02:29.880 --> 02:32.320 | |
the parts of the brain most associated with intelligence. | |
02:32.320 --> 02:35.800 | |
And let's discover the principles by how they work. | |
02:35.800 --> 02:39.320 | |
Because intelligence isn't just like some mechanism | |
02:39.320 --> 02:40.640 | |
and it's not just some capabilities. | |
02:40.640 --> 02:43.000 | |
It's like, okay, we don't even know where to begin | |
02:43.000 --> 02:44.000 | |
on this stuff. | |
02:44.000 --> 02:49.000 | |
And so now that we've made a lot of progress on this, | |
02:49.000 --> 02:50.480 | |
after we've made a lot of progress | |
02:50.480 --> 02:53.200 | |
on how the neocortex works, and we can talk about that, | |
02:53.200 --> 02:54.560 | |
I now have a very good idea | |
02:54.560 --> 02:57.200 | |
what's gonna be required to make intelligent machines. | |
02:57.200 --> 02:59.600 | |
I can tell you today, some of the things | |
02:59.600 --> 03:02.120 | |
are gonna be necessary, I believe, | |
03:02.120 --> 03:03.480 | |
to create intelligent machines. | |
03:03.480 --> 03:04.600 | |
Well, so we'll get there. | |
03:04.600 --> 03:07.440 | |
We'll get to the neocortex and some of the theories | |
03:07.440 --> 03:09.200 | |
of how the whole thing works. | |
03:09.200 --> 03:11.720 | |
And you're saying, as we understand more and more | |
03:12.720 --> 03:14.800 | |
about the neocortex, about our own human mind, | |
03:14.800 --> 03:17.720 | |
we'll be able to start to more specifically define | |
03:17.720 --> 03:18.680 | |
what it means to be intelligent. | |
03:18.680 --> 03:21.880 | |
It's not useful to really talk about that until... | |
03:21.880 --> 03:23.560 | |
I don't know if it's not useful. | |
03:23.560 --> 03:26.200 | |
Look, there's a long history of AI, as you know. | |
03:26.200 --> 03:28.920 | |
And there's been different approaches taken to it. | |
03:28.920 --> 03:30.160 | |
And who knows? | |
03:30.160 --> 03:32.280 | |
Maybe they're all useful. | |
03:32.280 --> 03:37.280 | |
So the good old fashioned AI, the expert systems, | |
03:37.360 --> 03:38.960 | |
the current convolutional neural networks, | |
03:38.960 --> 03:40.440 | |
they all have their utility. | |
03:41.280 --> 03:43.800 | |
They all have a value in the world. | |
03:43.800 --> 03:45.280 | |
But I would think almost everyone agreed | |
03:45.280 --> 03:46.640 | |
that none of them are really intelligent | |
03:46.640 --> 03:49.880 | |
in a sort of a deep way that humans are. | |
03:49.880 --> 03:53.600 | |
And so it's just the question of how do you get | |
03:53.600 --> 03:56.440 | |
from where those systems were or are today | |
03:56.440 --> 03:59.240 | |
to where a lot of people think we're gonna go. | |
03:59.240 --> 04:02.320 | |
And there's a big, big gap there, a huge gap. | |
04:02.320 --> 04:06.240 | |
And I think the quickest way of bridging that gap | |
04:06.240 --> 04:08.840 | |
is to figure out how the brain does that. | |
04:08.840 --> 04:10.160 | |
And then we can sit back and look and say, | |
04:10.160 --> 04:13.000 | |
oh, what are these principles that the brain works on | |
04:13.000 --> 04:15.160 | |
are necessary and which ones are not? | |
04:15.160 --> 04:16.640 | |
Clearly, we don't have to build this in, | |
04:16.640 --> 04:18.520 | |
intelligent machines aren't gonna be built | |
04:18.520 --> 04:22.760 | |
out of organic living cells. | |
04:22.760 --> 04:24.720 | |
But there's a lot of stuff that goes on in the brain | |
04:24.720 --> 04:25.920 | |
that's gonna be necessary. | |
04:25.920 --> 04:30.320 | |
So let me ask maybe, before we get into the fun details, | |
04:30.320 --> 04:33.080 | |
let me ask maybe a depressing or a difficult question. | |
04:33.080 --> 04:36.240 | |
Do you think it's possible that we will never | |
04:36.240 --> 04:38.120 | |
be able to understand how our brain works, | |
04:38.120 --> 04:41.840 | |
that maybe there's aspects to the human mind | |
04:41.840 --> 04:46.160 | |
like we ourselves cannot introspectively get to the core, | |
04:46.160 --> 04:48.120 | |
that there's a wall you eventually hit? | |
04:48.120 --> 04:50.200 | |
Yeah, I don't believe that's the case. | |
04:52.040 --> 04:53.240 | |
I have never believed that's the case. | |
04:53.240 --> 04:55.760 | |
There's not been a single thing we've ever, | |
04:55.760 --> 04:57.760 | |
humans have ever put their minds to that we've said, | |
04:57.760 --> 05:00.320 | |
oh, we reached the wall, we can't go any further. | |
05:00.320 --> 05:01.680 | |
People keep saying that. | |
05:01.680 --> 05:03.400 | |
People used to believe that about life, you know, | |
05:03.400 --> 05:04.480 | |
élan vital, right? | |
05:04.480 --> 05:06.360 | |
There's like, what's the difference in living matter | |
05:06.360 --> 05:07.280 | |
and nonliving matter? | |
05:07.280 --> 05:09.120 | |
Something special you never understand. | |
05:09.120 --> 05:10.640 | |
We no longer think that. | |
05:10.640 --> 05:14.720 | |
So there's no historical evidence that suggests this is the case | |
05:14.720 --> 05:17.600 | |
and I just never even considered that it's a possibility. | |
05:17.600 --> 05:21.840 | |
I would also say today, we understand so much | |
05:21.840 --> 05:22.800 | |
about the neocortex. | |
05:22.800 --> 05:25.480 | |
We've made tremendous progress in the last few years | |
05:25.480 --> 05:29.160 | |
that I no longer think of it as an open question. | |
05:30.000 --> 05:32.560 | |
The answers are very clear to me and the pieces | |
05:32.560 --> 05:34.800 | |
that we don't know are clear to me, | |
05:34.800 --> 05:37.440 | |
but the framework is all there and it's like, oh, okay, | |
05:37.440 --> 05:38.600 | |
we're gonna be able to do this. | |
05:38.600 --> 05:39.960 | |
This is not a problem anymore. | |
05:39.960 --> 05:42.680 | |
It just takes time and effort, but there's no mystery, | |
05:42.680 --> 05:44.040 | |
a big mystery anymore. | |
05:44.040 --> 05:47.800 | |
So then let's get into it for people like myself | |
05:47.800 --> 05:52.800 | |
who are not very well versed in the human brain, | |
05:52.960 --> 05:53.840 | |
except my own. | |
05:54.800 --> 05:57.320 | |
Can you describe to me at the highest level, | |
05:57.320 --> 05:59.120 | |
what are the different parts of the human brain | |
05:59.120 --> 06:02.080 | |
and then zooming in on the neocortex, | |
06:02.080 --> 06:05.480 | |
the parts of the neocortex and so on, a quick overview. | |
06:05.480 --> 06:06.640 | |
Yeah, sure. | |
06:06.640 --> 06:10.800 | |
The human brain, we can divide it roughly into two parts. | |
06:10.800 --> 06:14.200 | |
There's the old parts, lots of pieces, | |
06:14.200 --> 06:15.680 | |
and then there's the new part. | |
06:15.680 --> 06:18.040 | |
The new part is the neocortex. | |
06:18.040 --> 06:20.440 | |
It's new because it didn't exist before mammals. | |
06:20.440 --> 06:23.000 | |
Only mammals have a neocortex, and in humans | |
06:23.000 --> 06:24.760 | |
and primates it's very large. | |
06:24.760 --> 06:29.400 | |
In the human brain, the neocortex occupies about 70 to 75% | |
06:29.400 --> 06:30.640 | |
of the volume of the brain. | |
06:30.640 --> 06:32.080 | |
It's huge. | |
06:32.080 --> 06:34.840 | |
And the old parts of the brain are, | |
06:34.840 --> 06:36.000 | |
there's lots of pieces there. | |
06:36.000 --> 06:38.760 | |
There's a spinal cord and there's the brainstem | |
06:38.760 --> 06:40.240 | |
and the cerebellum and the different parts | |
06:40.240 --> 06:42.040 | |
of the basal ganglion and so on. | |
06:42.040 --> 06:42.960 | |
In the old parts of the brain, | |
06:42.960 --> 06:44.800 | |
you have the autonomic regulation, | |
06:44.800 --> 06:46.280 | |
like breathing and heart rate. | |
06:46.280 --> 06:48.240 | |
You have basic behaviors. | |
06:48.240 --> 06:49.960 | |
So like walking and running are controlled | |
06:49.960 --> 06:51.400 | |
by the old parts of the brain. | |
06:51.400 --> 06:53.080 | |
All the emotional centers of the brain | |
06:53.080 --> 06:53.920 | |
are in the old part of the brain. | |
06:53.920 --> 06:55.080 | |
So when you feel anger or hungry, | |
06:55.080 --> 06:56.080 | |
lust or things like that, | |
06:56.080 --> 06:57.880 | |
those are all in the old parts of the brain. | |
06:59.080 --> 07:02.160 | |
And we associate with the neocortex | |
07:02.160 --> 07:03.320 | |
all the things we think about | |
07:03.320 --> 07:05.760 | |
as sort of high level perception. | |
07:05.760 --> 07:10.760 | |
And cognitive functions, anything from seeing and hearing | |
07:10.920 --> 07:14.560 | |
and touching things to language, to mathematics | |
07:14.560 --> 07:16.920 | |
and engineering and science and so on. | |
07:16.920 --> 07:19.760 | |
Those are all associated with the neocortex. | |
07:19.760 --> 07:21.760 | |
And they're certainly correlated. | |
07:21.760 --> 07:24.000 | |
Our abilities in those regards are correlated | |
07:24.000 --> 07:25.800 | |
with the relative size of our neocortex | |
07:25.800 --> 07:27.960 | |
compared to other mammals. | |
07:27.960 --> 07:30.520 | |
So that's like the rough division. | |
07:30.520 --> 07:32.760 | |
And you obviously can't understand | |
07:32.760 --> 07:35.160 | |
the neocortex completely isolated, | |
07:35.160 --> 07:37.040 | |
but you can understand a lot of it | |
07:37.040 --> 07:38.720 | |
with just a few interfaces | |
07:38.720 --> 07:40.320 | |
to the old parts of the brain. | |
07:40.320 --> 07:44.960 | |
And so it gives you a system to study. | |
07:44.960 --> 07:48.040 | |
The other remarkable thing about the neocortex | |
07:48.040 --> 07:49.880 | |
compared to the old parts of the brain | |
07:49.880 --> 07:52.880 | |
is the neocortex is extremely uniform. | |
07:52.880 --> 07:55.720 | |
It's uniform, visually and anatomically. | |
07:55.720 --> 07:58.800 | |
It's very, it's like a, | |
07:58.800 --> 08:00.080 | |
I always like to say it's like the size | |
08:00.080 --> 08:03.720 | |
of a dinner napkin, about two and a half millimeters thick. | |
08:03.720 --> 08:06.000 | |
And it looks remarkably the same everywhere. | |
08:06.000 --> 08:07.920 | |
Everywhere you look in that two and a half millimeters | |
08:07.920 --> 08:10.080 | |
is this detailed architecture. | |
08:10.080 --> 08:11.560 | |
And it looks remarkably the same everywhere. | |
08:11.560 --> 08:12.600 | |
And that's a cross species, | |
08:12.600 --> 08:15.360 | |
a mouse versus a cat and a dog and a human. | |
08:15.360 --> 08:17.080 | |
Where if you look at the old parts of the brain, | |
08:17.080 --> 08:19.640 | |
there's lots of little pieces that do specific things. | |
08:19.640 --> 08:22.040 | |
So it's like the old parts of the brain evolved, | |
08:22.040 --> 08:23.640 | |
like this is the part that controls heart rate | |
08:23.640 --> 08:24.840 | |
and this is the part that controls this | |
08:24.840 --> 08:25.800 | |
and this is this kind of thing. | |
08:25.800 --> 08:27.200 | |
And that's this kind of thing. | |
08:27.200 --> 08:30.080 | |
And these evolve for eons of a long, long time | |
08:30.080 --> 08:31.600 | |
and they have those specific functions. | |
08:31.600 --> 08:33.240 | |
And all of a sudden mammals come along | |
08:33.240 --> 08:35.200 | |
and they got this thing called the neocortex | |
08:35.200 --> 08:38.200 | |
and it got large by just replicating the same thing | |
08:38.200 --> 08:39.440 | |
over and over and over again. | |
08:39.440 --> 08:42.680 | |
This is like, wow, this is incredible. | |
08:42.680 --> 08:46.240 | |
So all the evidence we have, | |
08:46.240 --> 08:50.040 | |
and this is an idea that was first articulated | |
08:50.040 --> 08:52.080 | |
in a very cogent and beautiful argument | |
08:52.080 --> 08:55.720 | |
by a guy named Vernon Mountcastle in 1978, I think it was, | |
08:56.880 --> 09:01.640 | |
that the neocortex all works on the same principle. | |
09:01.640 --> 09:05.320 | |
So language, hearing, touch, vision, engineering, | |
09:05.320 --> 09:07.040 | |
all these things are basically underlying | |
09:07.040 --> 09:10.400 | |
or all built in the same computational substrate. | |
09:10.400 --> 09:11.880 | |
They're really all the same problem. | |
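To make Mountcastle's proposal concrete, here is a minimal Python sketch of a single generic region whose "function" is determined only by what it is wired to. The class, the winner-take-all rule, and the learning rate are illustrative assumptions for this conversation, not Numenta's actual algorithms or real cortical circuitry.

```python
import numpy as np

class CorticalRegion:
    """One generic circuit; its 'modality' comes only from its wiring."""

    def __init__(self, input_size, num_cells=128, k_winners=8, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.normal(0.0, 0.1, size=(num_cells, input_size))
        self.k = k_winners

    def process(self, pattern):
        """Sparse activation plus a simple Hebbian-style update."""
        scores = self.weights @ pattern
        winners = np.argsort(scores)[-self.k:]          # top-k active cells
        self.weights[winners] += 0.01 * (pattern - self.weights[winners])
        output = np.zeros(len(scores))
        output[winners] = 1.0
        return output

# The identical circuit becomes "visual" or "auditory" purely by wiring:
visual = CorticalRegion(input_size=1024)    # imagine the optic nerve here
auditory = CorticalRegion(input_size=256)   # imagine the cochlea here
retina_input = np.random.rand(1024)
print(visual.process(retina_input).sum())   # 8.0: a sparse code, same rule
```

The point of the sketch is only that nothing about the region's code mentions vision or hearing; the input connection alone decides what it learns.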
09:11.880 --> 09:14.880 | |
So the low level of the building blocks all look similar. | |
09:14.880 --> 09:16.320 | |
Yeah, and they're not even that low level. | |
09:16.320 --> 09:17.920 | |
We're not talking about like neurons. | |
09:17.920 --> 09:19.960 | |
We're talking about this very complex circuit | |
09:19.960 --> 09:23.560 | |
that exists throughout the neocortex is remarkably similar. | |
09:23.560 --> 09:26.400 | |
It is, it's like, yes, you see variations of it here | |
09:26.400 --> 09:29.680 | |
and there, more of this cell and less of that cell, and so on. | |
09:29.680 --> 09:31.840 | |
But what Mountcastle argued was, | |
09:31.840 --> 09:35.640 | |
he says, you know, if you take a section of neocortex, | |
09:35.640 --> 09:38.640 | |
why is one a visual area and one an auditory area? | |
09:38.640 --> 09:41.240 | |
Or why is, and his answer was, | |
09:41.240 --> 09:43.240 | |
it's because one is connected to eyes | |
09:43.240 --> 09:45.440 | |
and one is connected to ears. | |
09:45.440 --> 09:47.840 | |
Literally, you mean just as most closest | |
09:47.840 --> 09:50.440 | |
in terms of the number of connections to the sensor? | |
09:50.440 --> 09:52.920 | |
Literally, if you took the optic nerve | |
09:52.920 --> 09:55.320 | |
and attached it to a different part of the neocortex, | |
09:55.320 --> 09:58.000 | |
that part would become a visual region. | |
09:58.000 --> 10:00.400 | |
This actually, this experiment was actually done | |
10:00.400 --> 10:05.000 | |
by Mriganka Sur in developing, I think it was lemurs, | |
10:05.000 --> 10:06.720 | |
I can't remember what it was, it's some animal. | |
10:06.720 --> 10:08.560 | |
And there's a lot of evidence to this. | |
10:08.560 --> 10:09.960 | |
You know, if you take a blind person, | |
10:09.960 --> 10:12.240 | |
a person who is blind at birth, | |
10:12.240 --> 10:15.480 | |
they're born with a visual neocortex. | |
10:15.480 --> 10:18.320 | |
It may not get any input from the eyes | |
10:18.320 --> 10:21.280 | |
because of some congenital defect or something. | |
10:21.280 --> 10:24.720 | |
And that region becomes, does something else. | |
10:24.720 --> 10:27.000 | |
It picks up another task. | |
10:27.000 --> 10:32.000 | |
So, and it's, so it's this very complex thing. | |
10:32.280 --> 10:33.760 | |
It's not like, oh, they're all built on neurons. | |
10:33.760 --> 10:36.480 | |
No, they're all built in this very complex circuit. | |
10:36.480 --> 10:40.280 | |
And somehow that circuit underlies everything. | |
10:40.280 --> 10:44.760 | |
And so this is, it's called the common cortical algorithm, | |
10:44.760 --> 10:47.960 | |
if you will, some scientists just find it hard to believe. | |
10:47.960 --> 10:50.040 | |
And they just say, I can't believe that's true. | |
10:50.040 --> 10:52.080 | |
But the evidence is overwhelming in this case. | |
10:52.080 --> 10:54.320 | |
And so a large part of what it means | |
10:54.320 --> 10:56.440 | |
to figure out how the brain creates intelligence | |
10:56.440 --> 10:59.840 | |
and what is intelligence in the brain | |
10:59.840 --> 11:02.040 | |
is to understand what that circuit does. | |
11:02.040 --> 11:05.040 | |
If you can figure out what that circuit does, | |
11:05.040 --> 11:06.920 | |
as amazing as it is, then you can, | |
11:06.920 --> 11:10.480 | |
then you understand what all these other cognitive functions are. | |
11:10.480 --> 11:13.280 | |
So if you were to sort of put neocortex | |
11:13.280 --> 11:15.160 | |
outside of your book On Intelligence, | |
11:15.160 --> 11:17.480 | |
if you wrote a giant tome, | |
11:17.480 --> 11:19.800 | |
a textbook on the neocortex, | |
11:19.800 --> 11:23.760 | |
and you look maybe a couple of centuries from now, | |
11:23.760 --> 11:26.520 | |
how much of what we know now would still be accurate | |
11:26.520 --> 11:27.680 | |
two centuries from now. | |
11:27.680 --> 11:30.840 | |
So how close are we in terms of understanding? | |
11:30.840 --> 11:33.000 | |
I have to speak from my own particular experience here. | |
11:33.000 --> 11:36.440 | |
So I run a small research lab here. | |
11:36.440 --> 11:38.040 | |
It's like any other research lab. | |
11:38.040 --> 11:39.440 | |
I'm sort of the principal investigator. | |
11:39.440 --> 11:40.280 | |
There's actually two of us, | |
11:40.280 --> 11:42.560 | |
and there's a bunch of other people. | |
11:42.560 --> 11:43.840 | |
And this is what we do. | |
11:43.840 --> 11:44.960 | |
We study the neocortex, | |
11:44.960 --> 11:46.960 | |
and we publish our results and so on. | |
11:46.960 --> 11:48.520 | |
So about three years ago, | |
11:49.840 --> 11:52.480 | |
we had a real breakthrough in this field. | |
11:52.480 --> 11:53.320 | |
Just tremendous breakthrough. | |
11:53.320 --> 11:56.520 | |
We've now published, I think, three papers on it. | |
11:56.520 --> 12:00.200 | |
And so I have a pretty good understanding | |
12:00.200 --> 12:02.320 | |
of all the pieces and what we're missing. | |
12:02.320 --> 12:06.280 | |
I would say that almost all the empirical data | |
12:06.280 --> 12:08.520 | |
we've collected about the brain, which is enormous. | |
12:08.520 --> 12:10.320 | |
If you don't know the neuroscience literature, | |
12:10.320 --> 12:13.960 | |
it's just incredibly big. | |
12:13.960 --> 12:16.840 | |
And it's, for the most part, all correct. | |
12:16.840 --> 12:20.240 | |
It's facts and experimental results | |
12:20.240 --> 12:22.960 | |
and measurements and all kinds of stuff. | |
12:22.960 --> 12:25.800 | |
But none of that has been really assimilated | |
12:25.800 --> 12:27.840 | |
into a theoretical framework. | |
12:27.840 --> 12:32.240 | |
It's data without, in the language of Thomas Kuhn, | |
12:32.240 --> 12:35.280 | |
the historian, it would be sort of a preparadigm science. | |
12:35.280 --> 12:38.160 | |
Lots of data, but no way to fit it in together. | |
12:38.160 --> 12:39.520 | |
I think almost all of that's correct. | |
12:39.520 --> 12:42.120 | |
There's gonna be some mistakes in there. | |
12:42.120 --> 12:43.240 | |
And for the most part, | |
12:43.240 --> 12:45.480 | |
there aren't really good cogent theories | |
12:45.480 --> 12:47.240 | |
about how to put it together. | |
12:47.240 --> 12:50.040 | |
It's not like we have two or three competing good theories, | |
12:50.040 --> 12:51.520 | |
which ones are right and which ones are wrong. | |
12:51.520 --> 12:53.720 | |
It's like, yeah, people just scratching their heads | |
12:53.720 --> 12:55.560 | |
throwing things, you know, some people giving up | |
12:55.560 --> 12:57.560 | |
on trying to figure out what the whole thing does. | |
12:57.560 --> 13:00.960 | |
In fact, there's very, very few labs like ours | |
13:00.960 --> 13:03.280 | |
that focus really on theory | |
13:03.280 --> 13:06.760 | |
and all this unassimilated data and trying to explain it. | |
13:06.760 --> 13:08.880 | |
So it's not like we've got it wrong. | |
13:08.880 --> 13:11.120 | |
It's just that we haven't got it at all. | |
13:11.120 --> 13:15.040 | |
So it's really, I would say, pretty early days | |
13:15.040 --> 13:18.360 | |
in terms of understanding the fundamental theories, | |
13:18.360 --> 13:20.240 | |
the forces of the way our mind works. | |
13:20.240 --> 13:21.080 | |
I don't think so. | |
13:21.080 --> 13:23.760 | |
I would have said that's true five years ago. | |
13:25.360 --> 13:28.600 | |
So as I said, we had some really big breakthroughs | |
13:28.600 --> 13:30.800 | |
on this recently and we started publishing papers on this. | |
13:30.800 --> 13:34.240 | |
So you can get to that. | |
13:34.240 --> 13:36.760 | |
But so I don't think it's, you know, I'm an optimist | |
13:36.760 --> 13:38.280 | |
and from where I sit today, | |
13:38.280 --> 13:39.440 | |
most people would disagree with this, | |
13:39.440 --> 13:41.640 | |
but from where I sit today, from what I know, | |
13:43.240 --> 13:44.920 | |
it's not super early days anymore. | |
13:44.920 --> 13:46.840 | |
We are, you know, the way these things go | |
13:46.840 --> 13:48.200 | |
is it's not a linear path, right? | |
13:48.200 --> 13:49.840 | |
You don't just start accumulating | |
13:49.840 --> 13:50.800 | |
and get better and better and better. | |
13:50.800 --> 13:52.920 | |
No, you got all the stuff you've collected. | |
13:52.920 --> 13:53.760 | |
None of it makes sense. | |
13:53.760 --> 13:55.640 | |
All these different things are just sort of around. | |
13:55.640 --> 13:57.120 | |
And then you're going to have some breaking points | |
13:57.120 --> 13:59.400 | |
all of a sudden, oh my God, now we got it right. | |
13:59.400 --> 14:01.120 | |
That's how it goes in science. | |
14:01.120 --> 14:04.480 | |
And I personally feel like we passed that little thing | |
14:04.480 --> 14:06.320 | |
about a couple of years ago. | |
14:06.320 --> 14:07.560 | |
Or rather, that big thing, a couple of years ago. | |
14:07.560 --> 14:09.600 | |
So we can talk about that. | |
14:09.600 --> 14:11.000 | |
Time will tell if I'm right, | |
14:11.000 --> 14:12.640 | |
but I feel very confident about it. | |
14:12.640 --> 14:15.120 | |
That's why I'll just say it on tape like this. | |
14:15.120 --> 14:18.040 | |
At least very optimistic. | |
14:18.040 --> 14:20.160 | |
So let's, before those few years ago, | |
14:20.160 --> 14:23.200 | |
let's take a step back to HTM, | |
14:23.200 --> 14:25.960 | |
the hierarchical temporal memory theory, | |
14:25.960 --> 14:27.480 | |
which you first proposed in On Intelligence | |
14:27.480 --> 14:29.280 | |
and went through a few different generations. | |
14:29.280 --> 14:31.200 | |
Can you describe what it is, | |
14:31.200 --> 14:33.560 | |
how it would evolve through the three generations | |
14:33.560 --> 14:35.360 | |
since you first put it on paper? | |
14:35.360 --> 14:39.240 | |
Yeah, so one of the things that neuroscientists | |
14:39.240 --> 14:42.920 | |
just sort of missed for many, many years. | |
14:42.920 --> 14:45.720 | |
And especially people were thinking about theory | |
14:45.720 --> 14:47.720 | |
was the nature of time in the brain. | |
14:47.720 --> 14:50.440 | |
Brains process information through time, | |
14:50.440 --> 14:53.280 | |
the information coming into the brain is constantly changing. | |
14:53.280 --> 14:56.160 | |
The patterns from my speech right now, | |
14:56.160 --> 14:58.520 | |
if you're listening to it at normal speed, | |
14:58.520 --> 15:00.080 | |
would be changing on your ears | |
15:00.080 --> 15:02.680 | |
about every 10 milliseconds or so, you'd have a change. | |
15:02.680 --> 15:05.320 | |
This constant flow, when you look at the world, | |
15:05.320 --> 15:06.800 | |
your eyes are moving constantly, | |
15:06.800 --> 15:08.240 | |
three to five times a second, | |
15:08.240 --> 15:09.920 | |
and the input's completely changing. | |
15:09.920 --> 15:11.800 | |
If I were to touch something like a coffee cup | |
15:11.800 --> 15:13.880 | |
as I move my fingers, the input changes. | |
15:13.880 --> 15:16.840 | |
So this idea that the brain works on time | |
15:16.840 --> 15:19.640 | |
changing patterns is almost completely, | |
15:19.640 --> 15:21.080 | |
or was almost completely missing | |
15:21.080 --> 15:23.520 | |
from a lot of the basic theories, like theories of vision | |
15:23.520 --> 15:24.360 | |
and so on. | |
15:24.360 --> 15:26.280 | |
It's like, oh no, we're gonna put this image in front of you | |
15:26.280 --> 15:28.360 | |
and flash it and say, what is it? | |
15:28.360 --> 15:31.120 | |
Convolutional neural networks work that way today, right? | |
15:31.120 --> 15:33.280 | |
Classify this picture. | |
15:33.280 --> 15:35.120 | |
But that's not what vision is like. | |
15:35.120 --> 15:37.760 | |
Vision is this sort of crazy time based pattern | |
15:37.760 --> 15:39.080 | |
that's going all over the place, | |
15:39.080 --> 15:40.920 | |
and so is touch and so is hearing. | |
15:40.920 --> 15:42.880 | |
So the first part of a hierarchical temporal memory | |
15:42.880 --> 15:44.280 | |
was the temporal part. | |
15:44.280 --> 15:47.680 | |
It's to say, you won't understand the brain, | |
15:47.680 --> 15:49.360 | |
nor will you understand intelligent machines | |
15:49.360 --> 15:51.720 | |
unless you're dealing with time based patterns. | |
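As a toy illustration of the contrast being drawn here, a flashed-image classifier versus a system that lives in a stream, below is a hand-rolled first-order sequence memory in Python. It sketches only the "temporal" emphasis; it is not HTM's actual sequence memory, and all names are invented for the example.

```python
from collections import defaultdict
from typing import Optional

class SequenceMemory:
    """Learns transitions between successive inputs and predicts the next one."""

    def __init__(self):
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def step(self, pattern: str) -> Optional[str]:
        """Feed one element of the stream; return the predicted next element."""
        if self.prev is not None:
            self.transitions[self.prev][pattern] += 1   # learn as time flows
        self.prev = pattern
        nexts = self.transitions[pattern]
        return max(nexts, key=nexts.get) if nexts else None

mem = SequenceMemory()
prediction = None
for note in ["C", "D", "E", "C", "D", "E", "C", "D"]:   # a tiny "melody"
    prediction = mem.step(note)
print(prediction)   # 'E': after the repeats, hearing D predicts E
```

Unlike a classifier given one frozen snapshot, this system only makes sense when inputs arrive one after another; remove time and there is nothing for it to learn.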
15:51.720 --> 15:54.760 | |
The second thing was, the memory component of it was, | |
15:54.760 --> 15:59.760 | |
to say that we aren't just processing input, | |
15:59.760 --> 16:02.000 | |
we learn a model of the world. | |
16:02.000 --> 16:04.000 | |
And the memory stands for that model. | |
16:04.000 --> 16:06.640 | |
The point of this part of the brain, the neocortex, | |
16:06.640 --> 16:07.840 | |
is that it learns a model of the world. | |
16:07.840 --> 16:10.840 | |
We have to store things, our experiences, | |
16:10.840 --> 16:13.520 | |
in a form that leads to a model of the world. | |
16:13.520 --> 16:15.080 | |
So we can move around the world, | |
16:15.080 --> 16:16.240 | |
we can pick things up and do things | |
16:16.240 --> 16:17.520 | |
and navigate and know what's going on. | |
16:17.520 --> 16:19.320 | |
So that's what the memory referred to. | |
16:19.320 --> 16:22.320 | |
And many people just, they were thinking about like, | |
16:22.320 --> 16:24.480 | |
certain processes without memory at all. | |
16:24.480 --> 16:26.120 | |
They're just like processing things. | |
16:26.120 --> 16:28.320 | |
And then finally, the hierarchical component | |
16:28.320 --> 16:31.640 | |
was a reflection of the fact that the neocortex, | |
16:31.640 --> 16:33.920 | |
although it's just a uniform sheet of cells, | |
16:33.920 --> 16:36.920 | |
different parts of it project to other parts, | |
16:36.920 --> 16:38.680 | |
which project to other parts. | |
16:38.680 --> 16:42.400 | |
And there is a sort of rough hierarchy in terms of that. | |
16:42.400 --> 16:46.000 | |
So the hierarchical temporal memory is just saying, | |
16:46.000 --> 16:47.720 | |
look, we should be thinking about the brain | |
16:47.720 --> 16:52.720 | |
as time-based, memory-based, and hierarchical processing. | |
16:54.760 --> 16:58.160 | |
And that was a placeholder for a bunch of components | |
16:58.160 --> 17:00.720 | |
that we would then plug into that. | |
17:00.720 --> 17:02.600 | |
We still believe all those things I just said, | |
17:02.600 --> 17:06.960 | |
but we now know so much more that I've stopped using | |
17:06.960 --> 17:08.200 | |
the term hierarchical temporal memory | |
17:08.200 --> 17:11.320 | |
because it's insufficient to capture the stuff we know. | |
17:11.320 --> 17:12.960 | |
So again, it's not incorrect, | |
17:12.960 --> 17:15.800 | |
but I now know more and I would rather describe it | |
17:15.800 --> 17:16.800 | |
more accurately. | |
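Putting the three emphases just named together, here is one possible minimal sketch: a stack of regions (hierarchy), each learning transitions as inputs arrive (time), stored persistently (memory). The pooling step and class names are invented for illustration and stand in for far richer mechanisms.

```python
class Region:
    """One level: remembers transitions and passes a summary upward."""

    def __init__(self):
        self.model = {}     # memory: a persistent map of learned transitions
        self.prev = None

    def step(self, pattern):
        if self.prev is not None:
            self.model[self.prev] = pattern   # temporal: learn from the stream
        self.prev = pattern
        return hash(pattern) % 16             # crude "pooling" for the level above

class Hierarchy:
    """Regions stacked so each level's output feeds the next (hierarchical)."""

    def __init__(self, depth=3):
        self.levels = [Region() for _ in range(depth)]

    def step(self, pattern):
        for level in self.levels:
            pattern = level.step(pattern)

h = Hierarchy()
for token in ["coffee", "cup", "coffee", "cup"]:
    h.step(token)
print(h.levels[0].model)   # {'coffee': 'cup', 'cup': 'coffee'}
```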
17:16.800 --> 17:20.360 | |
Yeah, so you're basically, we could think of HTM | |
17:20.360 --> 17:24.800 | |
as emphasizing that there's three aspects of intelligence | |
17:24.800 --> 17:25.920 | |
that are important to think about | |
17:25.920 --> 17:28.880 | |
whatever the eventual theory converges to. | |
17:28.880 --> 17:32.480 | |
So in terms of time, how do you think of nature of time | |
17:32.480 --> 17:33.880 | |
across different time scales? | |
17:33.880 --> 17:36.800 | |
So you mentioned things changing, | |
17:36.800 --> 17:39.160 | |
sensory inputs changing every 10, 20 milliseconds. | |
17:39.160 --> 17:40.520 | |
What about every few minutes? | |
17:40.520 --> 17:42.120 | |
Every few months and years? | |
17:42.120 --> 17:44.840 | |
Well, if you think about a neuroscience problem, | |
17:44.840 --> 17:49.640 | |
the brain problem, neurons themselves can stay active | |
17:49.640 --> 17:51.560 | |
for certain periods of time. | |
17:51.560 --> 17:53.280 | |
There are parts of the brain where they stay active | |
17:53.280 --> 17:56.680 | |
for minutes, so you could hold a certain perception | |
17:56.680 --> 18:01.320 | |
or an activity for a certain period of time, | |
18:01.320 --> 18:04.480 | |
but not most of them don't last that long. | |
18:04.480 --> 18:07.160 | |
And so if you think about your thoughts | |
18:07.160 --> 18:09.080 | |
or the activity neurons, | |
18:09.080 --> 18:10.680 | |
if you're gonna wanna invoke something | |
18:10.680 --> 18:11.920 | |
that happened a long time ago, | |
18:11.920 --> 18:14.400 | |
even just this morning, for example, | |
18:14.400 --> 18:16.360 | |
the neurons haven't been active throughout that time. | |
18:16.360 --> 18:17.800 | |
So you have to store that. | |
18:17.800 --> 18:20.720 | |
So if I ask you, what did you have for breakfast today? | |
18:20.720 --> 18:22.000 | |
That is memory. | |
18:22.000 --> 18:24.160 | |
That is, you've built it into your model of the world now. | |
18:24.160 --> 18:27.880 | |
You remember that and that memory is in the synapses, | |
18:27.880 --> 18:30.080 | |
it's basically in the formation of synapses. | |
18:30.080 --> 18:35.080 | |
And so you're sliding into these different time scales. | |
18:36.760 --> 18:38.280 | |
There's time scales at which we're | |
18:38.280 --> 18:40.440 | |
like understanding my language and moving about | |
18:40.440 --> 18:41.840 | |
and seeing things rapidly and over time. | |
18:41.840 --> 18:44.280 | |
That's the time scales of activities of neurons. | |
18:44.280 --> 18:46.200 | |
But if you wanna get in longer time scales, | |
18:46.200 --> 18:48.840 | |
then it's more memory and we have to invoke those memories | |
18:48.840 --> 18:50.960 | |
to say, oh, yes, well, now I can remember | |
18:50.960 --> 18:54.160 | |
what I had for breakfast because I stored that someplace. | |
18:54.160 --> 18:58.200 | |
I may forget it tomorrow, but I'd store it for now. | |
18:58.200 --> 19:01.600 | |
So does memory also need to have, | |
19:02.880 --> 19:06.240 | |
so the hierarchical aspect of reality | |
19:06.240 --> 19:07.720 | |
is not just about concepts, | |
19:07.720 --> 19:08.800 | |
it's also about time. | |
19:08.800 --> 19:10.280 | |
Do you think of it that way? | |
19:10.280 --> 19:12.840 | |
Yeah, time is infused in everything. | |
19:12.840 --> 19:15.560 | |
It's like, you really can't separate it out. | |
19:15.560 --> 19:19.560 | |
If I ask you, how does the brain | |
19:19.560 --> 19:21.360 | |
learn a model of this coffee cup here? | |
19:21.360 --> 19:23.200 | |
I have a coffee cup, and here's the coffee cup. | |
19:23.200 --> 19:26.000 | |
I said, well, time is not an inherent property | |
19:26.000 --> 19:28.520 | |
of the model I have of this cup, | |
19:28.520 --> 19:31.440 | |
whether it's a visual model or tactile model. | |
19:31.440 --> 19:32.600 | |
I can sense it through time, | |
19:32.600 --> 19:34.880 | |
but the model itself doesn't really have much time. | |
19:34.880 --> 19:36.560 | |
If I asked you, if I say, well, | |
19:36.560 --> 19:39.000 | |
what is the model of my cell phone? | |
19:39.000 --> 19:41.480 | |
My brain has learned a model of the cell phones. | |
19:41.480 --> 19:43.360 | |
If you have a smartphone like this, | |
19:43.360 --> 19:45.680 | |
and I said, well, this has time aspects to it. | |
19:45.680 --> 19:48.040 | |
I have expectations when I turn it on, | |
19:48.040 --> 19:49.480 | |
what's gonna happen, in what order, | |
19:49.480 --> 19:51.960 | |
how long it's gonna take to do certain things, | |
19:51.960 --> 19:54.040 | |
if I bring up an app, what sequences, | |
19:54.040 --> 19:56.520 | |
and so on, it's like melodies in the world, | |
19:56.520 --> 19:58.560 | |
you know, melody has a sense of time. | |
19:58.560 --> 20:01.200 | |
So many things in the world move and act, | |
20:01.200 --> 20:03.720 | |
and there's a sense of time related to them. | |
20:03.720 --> 20:08.280 | |
Some don't, but most things do actually. | |
20:08.280 --> 20:12.120 | |
So it's sort of infused throughout the models of the world. | |
20:12.120 --> 20:13.720 | |
You build a model of the world, | |
20:13.720 --> 20:16.400 | |
you're learning the structure of the objects in the world, | |
20:16.400 --> 20:17.840 | |
and you're also learning | |
20:17.840 --> 20:19.760 | |
how those things change through time. | |
20:20.760 --> 20:23.920 | |
Okay, so it really is just a fourth dimension | |
20:23.920 --> 20:25.280 | |
that's infused deeply, | |
20:25.280 --> 20:26.760 | |
and you have to make sure | |
20:26.760 --> 20:30.960 | |
that your models of intelligence incorporate it. | |
20:30.960 --> 20:34.840 | |
So, like you mentioned, the state of neuroscience | |
20:34.840 --> 20:36.000 | |
is deeply empirical. | |
20:36.000 --> 20:40.120 | |
A lot of data collection, it's, you know, | |
20:40.120 --> 20:43.120 | |
that's where it is, you mentioned Thomas Kuhn, right? | |
20:43.120 --> 20:44.560 | |
Yeah. | |
20:44.560 --> 20:48.040 | |
And then you're proposing a theory of intelligence, | |
20:48.040 --> 20:50.520 | |
and which is really the next step, | |
20:50.520 --> 20:52.920 | |
the really important step to take, | |
20:52.920 --> 20:57.920 | |
but why is HTM, or what we'll talk about soon, | |
20:57.920 --> 21:01.160 | |
the right theory? | |
21:01.160 --> 21:05.160 | |
So is it more in this, is it backed by intuition, | |
21:05.160 --> 21:09.160 | |
is it backed by evidence, is it backed by a mixture of both? | |
21:09.160 --> 21:12.800 | |
Is it kind of closer to where string theory is in physics, | |
21:12.800 --> 21:15.800 | |
where there's mathematical components | |
21:15.800 --> 21:18.160 | |
which show that, you know what, | |
21:18.160 --> 21:20.160 | |
it seems that this, | |
21:20.160 --> 21:23.560 | |
it fits together too well for it not to be true, | |
21:23.560 --> 21:25.360 | |
which is where string theory is. | |
21:25.360 --> 21:28.080 | |
Is that where you're kind of thinking? | |
21:28.080 --> 21:30.080 | |
It's a mixture of all those things, | |
21:30.080 --> 21:32.080 | |
although definitely where we are right now, | |
21:32.080 --> 21:34.080 | |
it's definitely much more on the empirical side | |
21:34.080 --> 21:36.080 | |
than, let's say, string theory. | |
21:36.080 --> 21:39.080 | |
The way this goes about, we're theorists, right? | |
21:39.080 --> 21:41.080 | |
So we look at all this data, | |
21:41.080 --> 21:43.080 | |
and we're trying to come up with some sort of model | |
21:43.080 --> 21:45.080 | |
that explains it, basically, | |
21:45.080 --> 21:47.080 | |
and there's, unlike string theory, | |
21:47.080 --> 21:50.080 | |
there's vast more amounts of empirical data here | |
21:50.080 --> 21:54.080 | |
than I think that most physicists deal with. | |
21:54.080 --> 21:57.080 | |
And so our challenge is to sort through that | |
21:57.080 --> 22:01.080 | |
and figure out what kind of constructs would explain this. | |
22:01.080 --> 22:04.080 | |
And when we have an idea, | |
22:04.080 --> 22:06.080 | |
you come up with a theory of some sort, | |
22:06.080 --> 22:08.080 | |
you have lots of ways of testing it. | |
22:08.080 --> 22:10.080 | |
First of all, I am, you know, | |
22:10.080 --> 22:14.080 | |
there are 100 years of assimilated, | |
22:14.080 --> 22:16.080 | |
or unassimilated, empirical data from neuroscience. | |
22:16.080 --> 22:18.080 | |
So we go back and read papers, and we say, | |
22:18.080 --> 22:20.080 | |
oh, did someone find this already? | |
22:20.080 --> 22:23.080 | |
We can predict X, Y, and Z, | |
22:23.080 --> 22:25.080 | |
and maybe no one's even talked about it | |
22:25.080 --> 22:27.080 | |
since 1972 or something, | |
22:27.080 --> 22:29.080 | |
but we go back and find that, and we say, | |
22:29.080 --> 22:31.080 | |
oh, either it can support the theory | |
22:31.080 --> 22:33.080 | |
or it can invalidate the theory. | |
22:33.080 --> 22:35.080 | |
And then we say, okay, we have to start over again. | |
22:35.080 --> 22:37.080 | |
Oh, no, it's support. Let's keep going with that one. | |
22:37.080 --> 22:40.080 | |
So the way I kind of view it, | |
22:40.080 --> 22:43.080 | |
when we do our work, we come up, | |
22:43.080 --> 22:45.080 | |
we look at all this empirical data, | |
22:45.080 --> 22:47.080 | |
and it's what I call it is a set of constraints. | |
22:47.080 --> 22:49.080 | |
We're not interested in something that's biologically inspired. | |
22:49.080 --> 22:52.080 | |
We're trying to figure out how the actual brain works. | |
22:52.080 --> 22:55.080 | |
So every piece of empirical data is a constraint on a theory. | |
22:55.080 --> 22:57.080 | |
If you have the correct theory, | |
22:57.080 --> 22:59.080 | |
it needs to explain everything, right? | |
22:59.080 --> 23:02.080 | |
So we have this huge number of constraints on the problem, | |
23:02.080 --> 23:05.080 | |
which initially makes it very, very difficult. | |
23:05.080 --> 23:07.080 | |
If you don't have many constraints, | |
23:07.080 --> 23:09.080 | |
you can make up stuff all day. | |
23:09.080 --> 23:11.080 | |
You can say, oh, here's an answer to how you can do this, | |
23:11.080 --> 23:13.080 | |
you can do that, you can do this. | |
23:13.080 --> 23:15.080 | |
But if you consider all biology as a set of constraints, | |
23:15.080 --> 23:17.080 | |
all neuroscience as a set of constraints, | |
23:17.080 --> 23:19.080 | |
and even if you're working in one little part of the neocortex, | |
23:19.080 --> 23:21.080 | |
for example, there are hundreds and hundreds of constraints. | |
23:21.080 --> 23:23.080 | |
There are a lot of empirical constraints | |
23:23.080 --> 23:25.080 | |
that it's very, very difficult initially | |
23:25.080 --> 23:27.080 | |
to come up with a theoretical framework for that. | |
23:27.080 --> 23:31.080 | |
But when you do, and it solves all those constraints at once, | |
23:31.080 --> 23:33.080 | |
you have a high confidence | |
23:33.080 --> 23:36.080 | |
that you got something close to correct. | |
23:36.080 --> 23:39.080 | |
It's just mathematically almost impossible not to be. | |
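The reasoning here can be phrased almost mechanically: each empirical finding is a predicate that any candidate theory must satisfy, and adding constraints shrinks the set of survivors. The constraints and theories in this Python sketch are invented placeholders, not real neuroscience findings; they only show the shape of the argument.

```python
# Each constraint is a test a candidate theory must pass.
constraints = [
    lambda t: t["handles_time_based_patterns"],
    lambda t: t["learns_a_model_of_the_world"],
    lambda t: t["uses_one_cortical_circuit"],
]

candidates = [
    {"name": "static pattern classifier",
     "handles_time_based_patterns": False,
     "learns_a_model_of_the_world": False,
     "uses_one_cortical_circuit": False},
    {"name": "common cortical algorithm",
     "handles_time_based_patterns": True,
     "learns_a_model_of_the_world": True,
     "uses_one_cortical_circuit": True},
]

# A theory survives only if it satisfies every constraint at once.
surviving = [t for t in candidates if all(c(t) for c in constraints)]
print([t["name"] for t in surviving])   # ['common cortical algorithm']
```

With hundreds of constraints instead of three, almost nothing survives the filter, which is why a theory that does is, as the transcript puts it, very likely close to correct.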
23:39.080 --> 23:43.080 | |
So that's the curse and the advantage of what we have. | |
23:43.080 --> 23:47.080 | |
The curse is we have to meet all these constraints, | |
23:47.080 --> 23:49.080 | |
which is really hard. | |
23:49.080 --> 23:51.080 | |
But when you do meet them, | |
23:51.080 --> 23:53.080 | |
then you have a great confidence | |
23:53.080 --> 23:55.080 | |
that you've discovered something. | |
23:55.080 --> 23:58.080 | |
In addition, then we work with scientific labs. | |
23:58.080 --> 24:00.080 | |
So we'll say, oh, there's something we can't find, | |
24:00.080 --> 24:02.080 | |
we can predict something, | |
24:02.080 --> 24:04.080 | |
but we can't find it anywhere in the literature. | |
24:04.080 --> 24:07.080 | |
So we will then, we have people we collaborated with, | |
24:07.080 --> 24:09.080 | |
we'll say, sometimes they'll say, you know what, | |
24:09.080 --> 24:11.080 | |
I have some collected data, which I didn't publish, | |
24:11.080 --> 24:13.080 | |
but we can go back and look at it | |
24:13.080 --> 24:15.080 | |
and see if we can find that, | |
24:15.080 --> 24:17.080 | |
which is much easier than designing a new experiment. | |
24:17.080 --> 24:20.080 | |
You know, neuroscience experiments take a long time, years. | |
24:20.080 --> 24:23.080 | |
So although some people are doing that now too. | |
24:23.080 --> 24:27.080 | |
So, but between all of these things, | |
24:27.080 --> 24:29.080 | |
I think it's a reasonable, | |
24:29.080 --> 24:32.080 | |
it's actually a very, very good approach. | |
24:32.080 --> 24:35.080 | |
We are blessed with the fact that we can test our theories | |
24:35.080 --> 24:37.080 | |
out the yin-yang here, | |
24:37.080 --> 24:39.080 | |
because there's so much unassimilated data, | |
24:39.080 --> 24:41.080 | |
and we can also falsify our theories very easily, | |
24:41.080 --> 24:43.080 | |
which we do often. | |
24:43.080 --> 24:46.080 | |
So it's kind of reminiscent of Copernicus, whenever that was, | |
24:46.080 --> 24:49.080 | |
you know, when you figure out that the sun is at the center, | |
24:49.080 --> 24:53.080 | |
of the solar system as opposed to Earth, | |
24:53.080 --> 24:55.080 | |
the pieces just fall into place. | |
24:55.080 --> 24:59.080 | |
Yeah, I think that's the general nature of aha moments, | |
24:59.080 --> 25:02.080 | |
as with Copernicus; it could be, | |
25:02.080 --> 25:05.080 | |
you could say the same thing about Darwin, | |
25:05.080 --> 25:07.080 | |
you could say the same thing about, you know, | |
25:07.080 --> 25:09.080 | |
about the double helix, | |
25:09.080 --> 25:13.080 | |
that people have been working on a problem for so long, | |
25:13.080 --> 25:14.080 | |
and have all this data, | |
25:14.080 --> 25:15.080 | |
and they can't make sense of it, they can't make sense of it. | |
25:15.080 --> 25:17.080 | |
But when the answer comes to you, | |
25:17.080 --> 25:19.080 | |
and everything falls into place, | |
25:19.080 --> 25:21.080 | |
it's like, oh my gosh, that's it. | |
25:21.080 --> 25:23.080 | |
That's got to be right. | |
25:23.080 --> 25:28.080 | |
I asked both Jim Watson and Francis Crick about this. | |
25:28.080 --> 25:30.080 | |
I asked them, you know, | |
25:30.080 --> 25:33.080 | |
when you were working on trying to discover the structure | |
25:33.080 --> 25:35.080 | |
of the double helix, | |
25:35.080 --> 25:38.080 | |
and when you came up with the sort of, | |
25:38.080 --> 25:42.080 | |
the structure that ended up being correct, | |
25:42.080 --> 25:44.080 | |
but it was sort of a guess, you know, | |
25:44.080 --> 25:46.080 | |
it wasn't really verified yet. | |
25:46.080 --> 25:48.080 | |
I said, did you know that it was right? | |
25:48.080 --> 25:50.080 | |
And they both said, absolutely. | |
25:50.080 --> 25:52.080 | |
We absolutely knew it was right. | |
25:52.080 --> 25:55.080 | |
And it doesn't matter if other people didn't believe it or not, | |
25:55.080 --> 25:57.080 | |
we knew it was right, they'd come around to it | |
25:57.080 --> 25:59.080 | |
and agree with it eventually anyway. | |
25:59.080 --> 26:01.080 | |
And that's the kind of thing you hear a lot with scientists | |
26:01.080 --> 26:04.080 | |
who really are studying a difficult problem, | |
26:04.080 --> 26:07.080 | |
and I feel that way too, about our work. | |
26:07.080 --> 26:10.080 | |
Have you talked to Crick or Watson about the problem | |
26:10.080 --> 26:15.080 | |
you're trying to solve, of finding the DNA of the brain? | |
26:15.080 --> 26:16.080 | |
Yeah. | |
26:16.080 --> 26:19.080 | |
In fact, Francis Crick was very interested in this, | |
26:19.080 --> 26:21.080 | |
in the latter part of his life. | |
26:21.080 --> 26:23.080 | |
And in fact, I got interested in brains | |
26:23.080 --> 26:26.080 | |
by reading an essay he wrote in 1979 | |
26:26.080 --> 26:28.080 | |
called Thinking About the Brain. | |
26:28.080 --> 26:30.080 | |
And that was when I decided | |
26:30.080 --> 26:33.080 | |
I'm going to leave my profession of computers and engineering | |
26:33.080 --> 26:35.080 | |
and become a neuroscientist. | |
26:35.080 --> 26:37.080 | |
Just reading that one essay from Francis Crick. | |
26:37.080 --> 26:39.080 | |
I got to meet him later in life. | |
26:39.080 --> 26:43.080 | |
I got to, I spoke at the Salk Institute | |
26:43.080 --> 26:44.080 | |
and he was in the audience | |
26:44.080 --> 26:47.080 | |
and then I had a tea with him afterwards. | |
26:47.080 --> 26:50.080 | |
You know, he was interested in a different problem. | |
26:50.080 --> 26:52.080 | |
He was focused on consciousness. | |
26:52.080 --> 26:54.080 | |
The easy problem, right? | |
26:54.080 --> 26:58.080 | |
Well, I think it's a red herring, | |
26:58.080 --> 27:01.080 | |
and so we weren't really overlapping a lot there. | |
27:01.080 --> 27:05.080 | |
Jim Watson, who's still alive, | |
27:05.080 --> 27:07.080 | |
is also interested in this problem | |
27:07.080 --> 27:11.080 | |
and when he was director of the Cold Spring Harbor Laboratories, | |
27:11.080 --> 27:13.080 | |
he was really sort of behind | |
27:13.080 --> 27:16.080 | |
moving in the direction of neuroscience there. | |
27:16.080 --> 27:19.080 | |
And so he had a personal interest in this field | |
27:19.080 --> 27:23.080 | |
and I have met with him numerous times. | |
27:23.080 --> 27:25.080 | |
And in fact, the last time, | |
27:25.080 --> 27:27.080 | |
a little bit over a year ago, | |
27:27.080 --> 27:30.080 | |
I gave a talk at Cold Spring Harbor Labs | |
27:30.080 --> 27:34.080 | |
about the progress we were making in our work. | |
27:34.080 --> 27:39.080 | |
And it was a lot of fun because he said, | |
27:39.080 --> 27:41.080 | |
well, you wouldn't be coming here | |
27:41.080 --> 27:42.080 | |
unless you had something important to say, | |
27:42.080 --> 27:44.080 | |
so I'm going to go attend your talk. | |
27:44.080 --> 27:46.080 | |
So he sat in the very front row. | |
27:46.080 --> 27:50.080 | |
Next to him was the director of the lab, Bruce Stillman. | |
27:50.080 --> 27:52.080 | |
So these guys were in the front row of this auditorium, right? | |
27:52.080 --> 27:54.080 | |
So nobody else in the auditorium wants to sit in the front row | |
27:54.080 --> 27:57.080 | |
because there's Jim Watson there as the director. | |
27:57.080 --> 28:03.080 | |
And I gave a talk and then I had dinner with Jim afterwards. | |
28:03.080 --> 28:06.080 | |
But there's a great picture taken by my colleague, | |
28:06.080 --> 28:08.080 | |
Subutai Ahmad, where I'm up there | |
28:08.080 --> 28:11.080 | |
sort of explaining the basics of this new framework we have. | |
28:11.080 --> 28:13.080 | |
And Jim Watson's on the edge of his chair. | |
28:13.080 --> 28:15.080 | |
He's literally on the edge of his chair, | |
28:15.080 --> 28:17.080 | |
like, intently staring up at the screen. | |
28:17.080 --> 28:21.080 | |
And when he discovered the structure of DNA, | |
28:21.080 --> 28:25.080 | |
the first public talk he gave was at Cold Spring Harbor Labs. | |
28:25.080 --> 28:27.080 | |
And there's a picture, there's a famous picture | |
28:27.080 --> 28:29.080 | |
of Jim Watson standing at the whiteboard | |
28:29.080 --> 28:31.080 | |
with an overhead thing pointing at something, | |
28:31.080 --> 28:33.080 | |
pointing at the double helix with his pointer. | |
28:33.080 --> 28:35.080 | |
And it actually looks a lot like the picture of me. | |
28:35.080 --> 28:37.080 | |
So it was sort of funny, there I am talking about the brain | |
28:37.080 --> 28:39.080 | |
and there's Jim Watson staring up at the screen. | |
28:39.080 --> 28:41.080 | |
And of course, there was, you know, whatever, | |
28:41.080 --> 28:44.080 | |
60 years earlier he was standing pointing at the double helix. | |
28:44.080 --> 28:47.080 | |
It's one of the great discoveries in all of, you know, | |
28:47.080 --> 28:50.080 | |
whatever, in all of science, all of science and DNA. | |
28:50.080 --> 28:54.080 | |
So it's funny that there's echoes of that in your presentation. | |
28:54.080 --> 28:58.080 | |
Do you think in terms of evolutionary timeline and history, | |
28:58.080 --> 29:01.080 | |
the development of the neocortex was a big leap? | |
29:01.080 --> 29:06.080 | |
Or is it just a small step? | |
29:06.080 --> 29:09.080 | |
So, like, if we ran the whole thing over again, | |
29:09.080 --> 29:12.080 | |
from the birth of life on Earth, | |
29:12.080 --> 29:15.080 | |
how likely would we develop the mechanism of the neocortex? | |
29:15.080 --> 29:17.080 | |
Okay, well, those are two separate questions. | |
29:17.080 --> 29:19.080 | |
One, was it a big leap? | |
29:19.080 --> 29:21.080 | |
And one was how likely it is, okay? | |
29:21.080 --> 29:23.080 | |
They're not necessarily related. | |
29:23.080 --> 29:25.080 | |
Maybe correlated. | |
29:25.080 --> 29:28.080 | |
And we don't really have enough data to make a judgment about that. | |
29:28.080 --> 29:30.080 | |
I would say definitely it was a big leap. | |
29:30.080 --> 29:31.080 | |
And I can tell you why. | |
29:31.080 --> 29:34.080 | |
I don't think it was just another incremental step. | |
29:34.080 --> 29:36.080 | |
I'll get that in a moment. | |
29:36.080 --> 29:38.080 | |
I don't really have any idea how likely it is. | |
29:38.080 --> 29:41.080 | |
If we look at evolution, we have one data point, | |
29:41.080 --> 29:43.080 | |
which is Earth, right? | |
29:43.080 --> 29:45.080 | |
Life formed on Earth billions of years ago, | |
29:45.080 --> 29:48.080 | |
whether it was introduced here or it was created here, | |
29:48.080 --> 29:50.080 | |
or someone introduced it, we don't really know, | |
29:50.080 --> 29:51.080 | |
but it was here early. | |
29:51.080 --> 29:55.080 | |
It took a long, long time to get to multicellular life. | |
29:55.080 --> 29:58.080 | |
And then from multicellular life, | |
29:58.080 --> 30:02.080 | |
it took a long, long time to get the neocortex. | |
30:02.080 --> 30:05.080 | |
And we've only had the neocortex for a few hundred thousand years. | |
30:05.080 --> 30:07.080 | |
So that's like nothing. | |
30:07.080 --> 30:09.080 | |
Okay, so is it likely? | |
30:09.080 --> 30:13.080 | |
Well, certainly it isn't something that happened right away on Earth. | |
30:13.080 --> 30:15.080 | |
And there were multiple steps to get there. | |
30:15.080 --> 30:17.080 | |
So I would say it's probably not gonna be something that would happen | |
30:17.080 --> 30:20.080 | |
instantaneously on other planets that might have life. | |
30:20.080 --> 30:23.080 | |
It might take several billion years on average. | |
30:23.080 --> 30:24.080 | |
Is it likely? | |
30:24.080 --> 30:25.080 | |
I don't know. | |
30:25.080 --> 30:28.080 | |
But you'd have to survive for several billion years to find out. | |
30:28.080 --> 30:29.080 | |
Probably. | |
30:29.080 --> 30:30.080 | |
Is it a big leap? | |
30:30.080 --> 30:35.080 | |
Yeah, I think it is a qualitative difference | |
30:35.080 --> 30:38.080 | |
from all other evolutionary steps. | |
30:38.080 --> 30:40.080 | |
I can try to describe that if you'd like. | |
30:40.080 --> 30:42.080 | |
Sure, in which way? | |
30:42.080 --> 30:44.080 | |
Yeah, I can tell you how. | |
30:44.080 --> 30:48.080 | |
Pretty much, let's start with a little preface. | |
30:48.080 --> 30:54.080 | |
Many of the things that humans are able to do do not have obvious | |
30:54.080 --> 30:59.080 | |
survival advantages. | |
30:59.080 --> 31:00.080 | |
We create music. | |
31:00.080 --> 31:03.080 | |
Is there a really survival advantage to that? | |
31:03.080 --> 31:04.080 | |
Maybe, maybe not. | |
31:04.080 --> 31:05.080 | |
What about mathematics? | |
31:05.080 --> 31:09.080 | |
Is there a real survival advantage to mathematics? | |
31:09.080 --> 31:10.080 | |
You can stretch it. | |
31:10.080 --> 31:13.080 | |
You can try to figure these things out, right? | |
31:13.080 --> 31:18.080 | |
But most of evolutionary history, everything had immediate survival | |
31:18.080 --> 31:19.080 | |
advantages to it. | |
31:19.080 --> 31:22.080 | |
I'll tell you a story, which I like. | |
31:22.080 --> 31:25.080 | |
It may not be true. | |
31:25.080 --> 31:29.080 | |
But the story goes as follows. | |
31:29.080 --> 31:34.080 | |
Organisms have been evolving since the beginning of life here on Earth. | |
31:34.080 --> 31:37.080 | |
Adding this sort of complexity onto that and this sort of complexity onto that. | |
31:37.080 --> 31:40.080 | |
And the brain itself is evolved this way. | |
31:40.080 --> 31:44.080 | |
There's an old part, an older part, an older, older part to the brain that kind of just | |
31:44.080 --> 31:47.080 | |
keeps glomming on new things, and we keep adding capabilities. | |
31:47.080 --> 31:52.080 | |
When we got to the neocortex, initially it had a very clear survival advantage | |
31:52.080 --> 31:56.080 | |
in that it produced better vision and better hearing and better touch and maybe | |
31:56.080 --> 31:58.080 | |
a new place and so on. | |
31:58.080 --> 32:04.080 | |
But what I think happens is that evolution took a mechanism, and this is in our | |
32:04.080 --> 32:08.080 | |
recent theory, but it took a mechanism that evolved a long time ago for | |
32:08.080 --> 32:10.080 | |
navigating in the world, for knowing where you are. | |
32:10.080 --> 32:14.080 | |
These are the so called grid cells and place cells of that old part of the brain. | |
32:14.080 --> 32:21.080 | |
And it took that mechanism for building maps of the world and knowing where you are | |
32:21.080 --> 32:26.080 | |
on those maps and how to navigate those maps, and turned it into a sort of slimmed | |
32:26.080 --> 32:29.080 | |
down, idealized version of it. | |
32:29.080 --> 32:32.080 | |
And that idealized version could now apply to building maps of other things, | |
32:32.080 --> 32:36.080 | |
maps of coffee cups and maps of phones, maps of mathematics. | |
32:36.080 --> 32:40.080 | |
Concepts, yes, and not just almost, exactly. | |
32:40.080 --> 32:44.080 | |
And it just started replicating this stuff. | |
32:44.080 --> 32:46.080 | |
You just make more and more and more of it. | |
32:46.080 --> 32:51.080 | |
So we went from being sort of dedicated purpose neural hardware to solve certain | |
32:51.080 --> 32:56.080 | |
problems that are important to survival to a general purpose neural hardware | |
32:56.080 --> 33:02.080 | |
that could be applied to all problems and now it's escaped the orbit of survival. | |
33:02.080 --> 33:08.080 | |
We are now able to apply it to things in which we find enjoyment, you know, | |
33:08.080 --> 33:13.080 | |
but aren't really clearly survival characteristics. | |
33:13.080 --> 33:19.080 | |
And that seems to have happened only in humans, to a large extent. | |
33:19.080 --> 33:24.080 | |
And so that's what's going on: we've sort of escaped the | |
33:24.080 --> 33:28.080 | |
gravity of evolutionary pressure, in some sense, in the neocortex. | |
33:28.080 --> 33:32.080 | |
And it now does things that are really interesting, | |
33:32.080 --> 33:36.080 | |
discovering models of the universe, which may not really help us. | |
33:36.080 --> 33:37.080 | |
It doesn't matter. | |
33:37.080 --> 33:41.080 | |
How does it help us survive, knowing that there might be multiverses, or | |
33:41.080 --> 33:44.080 | |
knowing the age of the universe, or how, you know, | |
33:44.080 --> 33:46.080 | |
various stellar things occur? | |
33:46.080 --> 33:47.080 | |
It doesn't really help us survive at all. | |
33:47.080 --> 33:50.080 | |
But we enjoy it and that's what happened. | |
33:50.080 --> 33:53.080 | |
Or at least not in the obvious way, perhaps. | |
33:53.080 --> 33:58.080 | |
It is required, if you look at the entire universe in an evolutionary way, | |
33:58.080 --> 34:03.080 | |
it's required for us to do interplanetary travel and therefore survive past our own sun. | |
34:03.080 --> 34:05.080 | |
But you know, let's not get too quick. | |
34:05.080 --> 34:07.080 | |
Yeah, but, you know, evolution works at one time frame. | |
34:07.080 --> 34:11.080 | |
It's survival, if you think of survival of the phenotype, | |
34:11.080 --> 34:13.080 | |
survival of the individual. | |
34:13.080 --> 34:16.080 | |
What you're talking about there spans well beyond that. | |
34:16.080 --> 34:22.080 | |
So there's no genetic, I'm not transferring any genetic traits to my children | |
34:22.080 --> 34:25.080 | |
that are going to help them survive better on Mars. | |
34:25.080 --> 34:27.080 | |
Totally different mechanism. | |
34:27.080 --> 34:32.080 | |
So let's get into the new, as you've mentioned, this idea, | |
34:32.080 --> 34:35.080 | |
I don't know if you have a nice name for it, the thousand... | |
34:35.080 --> 34:37.080 | |
We call it the thousand brains theory of intelligence. | |
34:37.080 --> 34:38.080 | |
I like it. | |
34:38.080 --> 34:44.080 | |
So can you talk about this idea of spatial view of concepts and so on? | |
34:44.080 --> 34:45.080 | |
Yeah. | |
34:45.080 --> 34:49.080 | |
So can I just describe sort of the, there's an underlying core discovery, | |
34:49.080 --> 34:51.080 | |
which then everything comes from that. | |
34:51.080 --> 34:55.080 | |
It's very simple; this is really what happened. | |
34:55.080 --> 35:00.080 | |
We were deep into problems about understanding how we build models of stuff in the world | |
35:00.080 --> 35:03.080 | |
and how we make predictions about things. | |
35:03.080 --> 35:07.080 | |
And I was holding a coffee cup just like this in my hand. | |
35:07.080 --> 35:10.080 | |
And my finger was touching the side, my index finger. | |
35:10.080 --> 35:15.080 | |
And then I moved it to the top and I was going to feel the rim at the top of the cup. | |
35:15.080 --> 35:18.080 | |
And I asked myself a very simple question. | |
35:18.080 --> 35:22.080 | |
I said, well, first of all, let's say I know that my brain predicts what it's going to feel | |
35:22.080 --> 35:23.080 | |
before it touches it. | |
35:23.080 --> 35:25.080 | |
You can just think about it and imagine it. | |
35:25.080 --> 35:28.080 | |
And so we know that the brain's making predictions all the time. | |
35:28.080 --> 35:31.080 | |
So the question is, what does it take to predict that? | |
35:31.080 --> 35:33.080 | |
And there's a very interesting answer. | |
35:33.080 --> 35:36.080 | |
First of all, it says the brain has to know it's touching a coffee cup. | |
35:36.080 --> 35:38.080 | |
It has to have a model of a coffee cup. | |
35:38.080 --> 35:43.080 | |
It needs to know where the finger currently is on the cup, relative to the cup. | |
35:43.080 --> 35:46.080 | |
Because when I make a movement, it needs to know where it's going to be on the cup | |
35:46.080 --> 35:50.080 | |
after the movement is completed, relative to the cup. | |
35:50.080 --> 35:53.080 | |
And then it can make a prediction about what it's going to sense. | |
35:53.080 --> 35:56.080 | |
So this told me that the neocortex, which is making this prediction, | |
35:56.080 --> 35:59.080 | |
needs to know that what it's sensing, what it's touching, is a cup. | |
35:59.080 --> 36:02.080 | |
And it needs to know the location of my finger relative to that cup | |
36:02.080 --> 36:04.080 | |
in a reference frame of the cup. | |
36:04.080 --> 36:06.080 | |
It doesn't matter where the cup is relative to my body. | |
36:06.080 --> 36:08.080 | |
It doesn't matter its orientation. | |
36:08.080 --> 36:09.080 | |
None of that matters. | |
36:09.080 --> 36:13.080 | |
It's where my finger is relative to the cup, which tells me then that the neocortex | |
36:13.080 --> 36:17.080 | |
has a reference frame that's anchored to the cup. | |
36:17.080 --> 36:19.080 | |
Because otherwise, I wouldn't be able to say the location | |
36:19.080 --> 36:21.080 | |
and I wouldn't be able to predict my new location. | |
36:21.080 --> 36:24.080 | |
And then we quickly, very instantly, you can say, | |
36:24.080 --> 36:26.080 | |
well, every part of my skin could touch this cup | |
36:26.080 --> 36:28.080 | |
and therefore every part of my skin is making predictions | |
36:28.080 --> 36:30.080 | |
and every part of my skin must have a reference frame | |
36:30.080 --> 36:33.080 | |
that it's using to make predictions. | |
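To make that chain of requirements concrete, here is a minimal sketch in Python. It illustrates the logic just described, not Numenta's actual algorithm: the object model, the grid of locations, and the function name are all assumptions invented for the example.

```python
# A minimal sketch of object-centric prediction: an object model maps
# locations in the object's OWN reference frame to features, so a
# prediction is just a lookup at the post-movement location. Where the
# object sits relative to the body never enters the computation.

cup_model = {
    (0, 0): "smooth side",   # feature at a location on the cup
    (0, 1): "rim edge",
    (1, 0): "handle curve",
}

def predict_sensation(model, location, movement):
    """Predict the feature sensed after a movement, expressed entirely
    in the object's reference frame."""
    new_location = (location[0] + movement[0], location[1] + movement[1])
    return new_location, model.get(new_location, "unknown")

# Finger on the side, about to move up to the rim:
print(predict_sensation(cup_model, (0, 0), (0, 1)))  # ((0, 1), 'rim edge')
```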
36:33.080 --> 36:39.080 | |
So the big idea is that throughout the neocortex, | |
36:39.080 --> 36:47.080 | |
everything is being stored and referenced in reference frames. | |
36:47.080 --> 36:49.080 | |
You can think of them like XYZ reference frames, | |
36:49.080 --> 36:50.080 | |
but they're not like that. | |
36:50.080 --> 36:52.080 | |
We know a lot about the neural mechanisms for this. | |
36:52.080 --> 36:55.080 | |
But the brain thinks in reference frames. | |
36:55.080 --> 36:58.080 | |
And as an engineer, if you're an engineer, this is not surprising. | |
36:58.080 --> 37:01.080 | |
You'd say, if I were to build a CAD model of the coffee cup, | |
37:01.080 --> 37:03.080 | |
well, I would bring it up in some CAD software | |
37:03.080 --> 37:05.080 | |
and I would assign some reference frame and say, | |
37:05.080 --> 37:07.080 | |
this feature is at this location, and so on. | |
37:07.080 --> 37:10.080 | |
But the idea that this is occurring | |
37:10.080 --> 37:14.080 | |
throughout the neocortex, everywhere, was a novel idea. | |
37:14.080 --> 37:20.080 | |
And then a zillion things fell into place after that, a zillion. | |
37:20.080 --> 37:23.080 | |
So now we think about the neocortex as processing information | |
37:23.080 --> 37:25.080 | |
quite differently than we used to do it. | |
37:25.080 --> 37:28.080 | |
We used to think about the neocortex as processing sensory data | |
37:28.080 --> 37:30.080 | |
and extracting features from that sensory data | |
37:30.080 --> 37:32.080 | |
and then extracting features from the features | |
37:32.080 --> 37:35.080 | |
very much like a deep learning network does today. | |
37:35.080 --> 37:36.080 | |
But that's not how the brain works at all. | |
37:36.080 --> 37:39.080 | |
The brain works by assigning everything, | |
37:39.080 --> 37:41.080 | |
every input, everything to reference frames, | |
37:41.080 --> 37:44.080 | |
and there are thousands, hundreds of thousands of them | |
37:44.080 --> 37:47.080 | |
active at once in your neocortex. | |
37:47.080 --> 37:49.080 | |
It's a surprising thing to think about, | |
37:49.080 --> 37:51.080 | |
but once you sort of internalize this, | |
37:51.080 --> 37:54.080 | |
you understand that it explains almost every, | |
37:54.080 --> 37:57.080 | |
almost all the mysteries we've had about this structure. | |
37:57.080 --> 38:00.080 | |
So one of the consequences of that is that | |
38:00.080 --> 38:04.080 | |
every small part of the neocortex, say a millimeter square, | |
38:04.080 --> 38:06.080 | |
and there's 150,000 of those. | |
38:06.080 --> 38:08.080 | |
So it's about 150,000 square millimeters. | |
38:08.080 --> 38:11.080 | |
If you take every little square millimeter of the cortex, | |
38:11.080 --> 38:13.080 | |
it's got some input coming into it, | |
38:13.080 --> 38:15.080 | |
and it's going to have reference frames | |
38:15.080 --> 38:17.080 | |
where it's assigning that input to. | |
38:17.080 --> 38:21.080 | |
And each square millimeter can learn complete models of objects. | |
38:21.080 --> 38:22.080 | |
So what do I mean by that? | |
38:22.080 --> 38:23.080 | |
If I'm touching the coffee cup, | |
38:23.080 --> 38:25.080 | |
well, if I just touch it in one place, | |
38:25.080 --> 38:27.080 | |
I can't learn what this coffee cup is | |
38:27.080 --> 38:29.080 | |
because I'm just feeling one part. | |
38:29.080 --> 38:32.080 | |
But if I move my finger around the cup and touch it in different areas, | |
38:32.080 --> 38:34.080 | |
I can build up a complete model of the cup | |
38:34.080 --> 38:36.080 | |
because I'm now filling in that three dimensional map, | |
38:36.080 --> 38:37.080 | |
which is the coffee cup. | |
38:37.080 --> 38:39.080 | |
I can say, oh, what am I feeling in all these different locations? | |
38:39.080 --> 38:40.080 | |
That's the basic idea. | |
38:40.080 --> 38:42.080 | |
It's more complicated than that. | |
38:42.080 --> 38:46.080 | |
But so through time, and we talked about time earlier, | |
38:46.080 --> 38:48.080 | |
through time, even a single column, | |
38:48.080 --> 38:50.080 | |
which is only looking at, or a single part of the cortex, | |
38:50.080 --> 38:52.080 | |
which is only looking at a small part of the world, | |
38:52.080 --> 38:54.080 | |
can build up a complete model of an object. | |
38:54.080 --> 38:57.080 | |
And so if you think about the part of the brain, | |
38:57.080 --> 38:59.080 | |
which is getting input from all my fingers, | |
38:59.080 --> 39:01.080 | |
so they're spread across the top of your head here. | |
39:01.080 --> 39:03.080 | |
This is the somatosensory cortex. | |
39:03.080 --> 39:07.080 | |
There's columns associated with all the different areas of my skin. | |
39:07.080 --> 39:10.080 | |
And what we believe is happening is that | |
39:10.080 --> 39:12.080 | |
all of them are building models of this cup, | |
39:12.080 --> 39:15.080 | |
every one of them, or of other things. | |
39:15.080 --> 39:18.080 | |
Not every column or every part of the cortex | |
39:18.080 --> 39:19.080 | |
builds models of everything, | |
39:19.080 --> 39:21.080 | |
but they're all building models of something. | |
39:21.080 --> 39:26.080 | |
And so when I touch this cup with my hand, | |
39:26.080 --> 39:29.080 | |
there are multiple models of the cup being invoked. | |
39:29.080 --> 39:30.080 | |
If I look at it with my eyes, | |
39:30.080 --> 39:32.080 | |
there are again many models of the cup being invoked, | |
39:32.080 --> 39:34.080 | |
because each part of the visual system, | |
39:34.080 --> 39:36.080 | |
the brain doesn't process an image. | |
39:36.080 --> 39:38.080 | |
That's a misleading idea. | |
39:38.080 --> 39:40.080 | |
It's just like your fingers touching the cup, | |
39:40.080 --> 39:43.080 | |
so different parts of my retina are looking at different parts of the cup. | |
39:43.080 --> 39:45.080 | |
And thousands and thousands of models of the cup | |
39:45.080 --> 39:47.080 | |
are being invoked at once. | |
39:47.080 --> 39:49.080 | |
And they're all voting with each other, | |
39:49.080 --> 39:50.080 | |
trying to figure out what's going on. | |
39:50.080 --> 39:52.080 | |
So that's why we call it the thousand brains theory of intelligence, | |
39:52.080 --> 39:54.080 | |
because there isn't one model of a cup. | |
39:54.080 --> 39:56.080 | |
There are thousands of models of this cup. | |
39:56.080 --> 39:58.080 | |
There are thousands of models of your cell phone, | |
39:58.080 --> 40:01.080 | |
and about cameras and microphones and so on. | |
40:01.080 --> 40:03.080 | |
It's a distributed modeling system, | |
40:03.080 --> 40:05.080 | |
which is very different from how people have thought about it. | |
40:05.080 --> 40:07.080 | |
So that's a really compelling and interesting idea. | |
40:07.080 --> 40:09.080 | |
I have two first questions. | |
40:09.080 --> 40:12.080 | |
So one, on the ensemble part of everything coming together, | |
40:12.080 --> 40:14.080 | |
you have these thousand brains. | |
40:14.080 --> 40:19.080 | |
How do you know which one has done the best job of modeling the cup? | |
40:19.080 --> 40:20.080 | |
Great question. Let me try to explain. | |
40:20.080 --> 40:23.080 | |
There's a problem that's known in neuroscience | |
40:23.080 --> 40:25.080 | |
called the sensor fusion problem. | |
40:25.080 --> 40:26.080 | |
Yes. | |
40:26.080 --> 40:28.080 | |
And so the idea is something like, | |
40:28.080 --> 40:29.080 | |
oh, the image comes from the eye. | |
40:29.080 --> 40:30.080 | |
There's a picture on the retina. | |
40:30.080 --> 40:32.080 | |
And it gets projected to the neocortex. | |
40:32.080 --> 40:35.080 | |
Oh, by now it's all spread out all over the place, | |
40:35.080 --> 40:37.080 | |
and it's kind of squirrely and distorted, | |
40:37.080 --> 40:39.080 | |
and pieces are all over the, you know, | |
40:39.080 --> 40:41.080 | |
it doesn't look like a picture anymore. | |
40:41.080 --> 40:43.080 | |
When does it all come back together again? | |
40:43.080 --> 40:44.080 | |
Right? | |
40:44.080 --> 40:46.080 | |
Or you might say, well, yes, but I also, | |
40:46.080 --> 40:48.080 | |
I also have sounds or touches associated with the cup. | |
40:48.080 --> 40:50.080 | |
So I'm seeing the cup and touching the cup. | |
40:50.080 --> 40:52.080 | |
How do they get combined together again? | |
40:52.080 --> 40:54.080 | |
So this is called the sensor fusion problem. | |
40:54.080 --> 40:57.080 | |
As if all these disparate parts have to be brought together | |
40:57.080 --> 40:59.080 | |
into one model someplace. | |
40:59.080 --> 41:01.080 | |
That's the wrong idea. | |
41:01.080 --> 41:03.080 | |
The right idea is that you get all these guys voting. | |
41:03.080 --> 41:05.080 | |
There's auditory models of the cup, | |
41:05.080 --> 41:07.080 | |
there's visual models of the cup, | |
41:07.080 --> 41:09.080 | |
there's tactile models of the cup. | |
41:09.080 --> 41:11.080 | |
In the vision system, there might be ones | |
41:11.080 --> 41:13.080 | |
that are more focused on black and white, | |
41:13.080 --> 41:14.080 | |
ones focused on color. | |
41:14.080 --> 41:15.080 | |
It doesn't really matter. | |
41:15.080 --> 41:17.080 | |
There's just thousands and thousands of models of this cup. | |
41:17.080 --> 41:18.080 | |
And they vote. | |
41:18.080 --> 41:20.080 | |
They don't actually come together in one spot. | |
41:20.080 --> 41:22.080 | |
Just literally think of it this way. | |
41:22.080 --> 41:25.080 | |
Imagine that each column is about the size | |
41:25.080 --> 41:26.080 | |
of a little piece of spaghetti. | |
41:26.080 --> 41:27.080 | |
Okay? | |
41:27.080 --> 41:28.080 | |
Like two and a half millimeters tall | |
41:28.080 --> 41:30.080 | |
and about a millimeter wide. | |
41:30.080 --> 41:33.080 | |
They're not physically separated like that, but you can think of them that way. | |
41:33.080 --> 41:36.080 | |
And each one's trying to guess what this thing is that it's touching. | |
41:36.080 --> 41:38.080 | |
Now they can, they can do a pretty good job | |
41:38.080 --> 41:40.080 | |
if they're allowed to move over time. | |
41:40.080 --> 41:42.080 | |
So I can reach my hand into a black box and move my finger | |
41:42.080 --> 41:44.080 | |
around an object, and if I touch enough spots, | |
41:44.080 --> 41:46.080 | |
it's like, okay, I know what it is. | |
41:46.080 --> 41:48.080 | |
But often we don't do that. | |
41:48.080 --> 41:50.080 | |
Often I can just reach and grab something with my hand | |
41:50.080 --> 41:51.080 | |
all at once and I get it. | |
41:51.080 --> 41:53.080 | |
Or if I had to look through the world through a straw, | |
41:53.080 --> 41:55.080 | |
so I'm only invoking one little column, | |
41:55.080 --> 41:57.080 | |
I can only see part of something because I have to move | |
41:57.080 --> 41:58.080 | |
the straw around. | |
41:58.080 --> 42:00.080 | |
But if I open my eyes, I see the whole thing at once. | |
42:00.080 --> 42:02.080 | |
So what we think is going on is all these little pieces | |
42:02.080 --> 42:05.080 | |
of spaghetti, all these little columns in the cortex | |
42:05.080 --> 42:08.080 | |
are all trying to guess what it is that they're sensing. | |
42:08.080 --> 42:10.080 | |
They'll make a better guess if they have time | |
42:10.080 --> 42:11.080 | |
and can move over time. | |
42:11.080 --> 42:13.080 | |
So if I move my eyes and move my fingers. | |
42:13.080 --> 42:16.080 | |
But if they don't, they have a poor guess. | |
42:16.080 --> 42:19.080 | |
It's a probabilistic guess of what they might be touching. | |
42:19.080 --> 42:22.080 | |
Now imagine they can post their probability | |
42:22.080 --> 42:24.080 | |
at the top of the little piece of spaghetti. | |
42:24.080 --> 42:25.080 | |
Each one of them says, I think, | |
42:25.080 --> 42:27.080 | |
and it's not really a probability distribution. | |
42:27.080 --> 42:29.080 | |
It's more like a set of possibilities in the brain. | |
42:29.080 --> 42:31.080 | |
It doesn't work as a probability distribution. | |
42:31.080 --> 42:33.080 | |
It works as more like what we call a union. | |
42:33.080 --> 42:35.080 | |
You could say, and one column says, | |
42:35.080 --> 42:39.080 | |
I think it could be a coffee cup, a soda can or a water bottle. | |
42:39.080 --> 42:42.080 | |
And another column says, I think it could be a coffee cup | |
42:42.080 --> 42:45.080 | |
or a, you know, telephone or camera or whatever. | |
42:45.080 --> 42:46.080 | |
Right. | |
42:46.080 --> 42:49.080 | |
And all these guys are saying what they think it might be. | |
42:49.080 --> 42:51.080 | |
And there's these long range connections | |
42:51.080 --> 42:53.080 | |
in certain layers in the cortex. | |
42:53.080 --> 42:57.080 | |
There are some layers, some cell types, in each column | |
42:57.080 --> 42:59.080 | |
that send projections across the brain. | |
42:59.080 --> 43:01.080 | |
And that's where the voting occurs. | |
43:01.080 --> 43:04.080 | |
And so there's a simple associative memory mechanism. | |
43:04.080 --> 43:07.080 | |
We've described this in a recent paper and we've modeled this | |
43:07.080 --> 43:11.080 | |
showing that they can all quickly settle on the one | |
43:11.080 --> 43:14.080 | |
best answer for all of them. | |
43:14.080 --> 43:17.080 | |
If there is a single best answer, they all vote and say, | |
43:17.080 --> 43:19.080 | |
yep, it's got to be the coffee cup. | |
43:19.080 --> 43:21.080 | |
And at that point, they all know it's a coffee cup. | |
43:21.080 --> 43:23.080 | |
And at that point, everyone acts as if it's a coffee cup. | |
43:23.080 --> 43:24.080 | |
Yeah, we know it's a coffee cup. | |
43:24.080 --> 43:26.080 | |
Even though I've only seen one little piece of this world, | |
43:26.080 --> 43:28.080 | |
I know it's a coffee cup I'm touching or I'm seeing or whatever. | |
43:28.080 --> 43:31.080 | |
And so you can think of all these columns as looking | |
43:31.080 --> 43:33.080 | |
at different parts and different places, | |
43:33.080 --> 43:35.080 | |
different sensory input, different locations. | |
43:35.080 --> 43:36.080 | |
They're all different. | |
43:36.080 --> 43:40.080 | |
But this layer that's doing the voting, it solidifies. | |
43:40.080 --> 43:43.080 | |
It crystallizes and says, oh, we all know what we're doing. | |
43:43.080 --> 43:46.080 | |
And so you don't bring these models together in one model, | |
43:46.080 --> 43:49.080 | |
you just vote and there's a crystallization of the vote. | |
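A rough way to picture that union-based voting in code (a toy sketch, not the associative-memory mechanism described in the paper): each column posts a set of candidate objects, and the vote is just the intersection of those sets.

```python
# Toy sketch of column voting: each column holds a union (a set) of
# possible objects rather than a probability distribution, and the
# long-range "voting" layer settles on whatever is consistent with
# every column's union. Object names here are illustrative.

column_guesses = [
    {"coffee cup", "soda can", "water bottle"},  # one column's possibilities
    {"coffee cup", "telephone", "camera"},       # another column's
    {"coffee cup", "soda can"},                  # and another's
]

def vote(guesses):
    """Return the candidates consistent with every column's union."""
    return set.intersection(*guesses)

print(vote(column_guesses))  # {'coffee cup'} -- the vote crystallizes
```

If the intersection still held several candidates, the columns would need more evidence, which is what moving the fingers or the eyes provides.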
43:49.080 --> 43:50.080 | |
Great. | |
43:50.080 --> 43:56.080 | |
That's at least a compelling way to think about the way you | |
43:56.080 --> 43:58.080 | |
form a model of the world. | |
43:58.080 --> 44:00.080 | |
Now, you talk about a coffee cup. | |
44:00.080 --> 44:04.080 | |
As far as I understand, you were proposing | |
44:04.080 --> 44:07.080 | |
this as well: that this extends to much more than coffee cups? | |
44:07.080 --> 44:09.080 | |
Yeah, it does. | |
44:09.080 --> 44:11.080 | |
Or at least the physical world. | |
44:11.080 --> 44:14.080 | |
It expands to the world of concepts. | |
44:14.080 --> 44:15.080 | |
Yeah, it does. | |
44:15.080 --> 44:18.080 | |
Well, the primary piece of evidence for that | |
44:18.080 --> 44:21.080 | |
is that the regions of the neocortex that are associated | |
44:21.080 --> 44:24.080 | |
with language or high level thought or mathematics or things | |
44:24.080 --> 44:26.080 | |
like that, they look like the regions of the neocortex | |
44:26.080 --> 44:28.080 | |
that process vision and hearing and touch. | |
44:28.080 --> 44:31.080 | |
They don't look any different or they look only marginally | |
44:31.080 --> 44:32.080 | |
different. | |
44:32.080 --> 44:36.080 | |
And so one would say, well, if Vernon Mountcastle, | |
44:36.080 --> 44:39.080 | |
who proposed that all the parts of the neocortex | |
44:39.080 --> 44:42.080 | |
are doing the same thing, if he's right, then the parts | |
44:42.080 --> 44:44.080 | |
that are doing language or mathematics or physics | |
44:44.080 --> 44:46.080 | |
are working on the same principle. | |
44:46.080 --> 44:48.080 | |
They must be working on the principle of reference frames. | |
44:48.080 --> 44:51.080 | |
So that's a little odd thought. | |
44:51.080 --> 44:55.080 | |
But of course, we had no prior idea how these things happen. | |
44:55.080 --> 44:57.080 | |
So let's go with that. | |
44:57.080 --> 45:01.080 | |
And in our recent paper, we talked a little bit about that. | |
45:01.080 --> 45:03.080 | |
I've been working on it more since. | |
45:03.080 --> 45:05.080 | |
I have better ideas about it now. | |
45:05.080 --> 45:08.080 | |
I'm sitting here very confident that that's what's happening. | |
45:08.080 --> 45:11.080 | |
And I can give you some examples to help you think about that. | |
45:11.080 --> 45:13.080 | |
It's not that we understand it completely, | |
45:13.080 --> 45:15.080 | |
but I understand it better than I've described it in any paper | |
45:15.080 --> 45:16.080 | |
so far. | |
45:16.080 --> 45:18.080 | |
But we did put that idea out there. | |
45:18.080 --> 45:22.080 | |
It's a good place to start. | |
45:22.080 --> 45:25.080 | |
And the evidence would suggest it's how it's happening. | |
45:25.080 --> 45:27.080 | |
And then we can start tackling that problem one piece at a time. | |
45:27.080 --> 45:29.080 | |
What does it mean to do high level thought? | |
45:29.080 --> 45:30.080 | |
What does it mean to do language? | |
45:30.080 --> 45:34.080 | |
How would that fit into a framework of reference frames? | |
45:34.080 --> 45:38.080 | |
I don't know if you could tell me if there's a connection, | |
45:38.080 --> 45:42.080 | |
but there's an app called Anki that helps you remember different concepts. | |
45:42.080 --> 45:46.080 | |
And they talk about like a memory palace that helps you remember | |
45:46.080 --> 45:50.080 | |
completely random concepts by trying to put them in a physical space | |
45:50.080 --> 45:52.080 | |
in your mind and putting them next to each other. | |
45:52.080 --> 45:54.080 | |
It's called the method of loci. | |
45:54.080 --> 45:57.080 | |
For some reason, that seems to work really well. | |
45:57.080 --> 46:00.080 | |
Now that's a very narrow kind of application of just remembering some facts. | |
46:00.080 --> 46:03.080 | |
But that's a very, very telling one. | |
46:03.080 --> 46:04.080 | |
Yes, exactly. | |
46:04.080 --> 46:09.080 | |
So this seems like you're describing a mechanism why this seems to work. | |
46:09.080 --> 46:13.080 | |
So basically the way what we think is going on is all things you know, | |
46:13.080 --> 46:17.080 | |
all concepts, all ideas, words, everything, you know, | |
46:17.080 --> 46:20.080 | |
are stored in reference frames. | |
46:20.080 --> 46:24.080 | |
And so if you want to remember something, | |
46:24.080 --> 46:27.080 | |
you have to basically navigate through a reference frame the same way | |
46:27.080 --> 46:28.080 | |
a rat navigates a maze. | |
46:28.080 --> 46:31.080 | |
Even the same way my finger navigates this coffee cup. | |
46:31.080 --> 46:33.080 | |
You are moving through some space. | |
46:33.080 --> 46:37.080 | |
And so if you have a random list of things you were asked to remember, | |
46:37.080 --> 46:39.080 | |
by assigning them to a reference frame, | |
46:39.080 --> 46:42.080 | |
one you already know very well, say your house, right? | |
46:42.080 --> 46:44.080 | |
And the idea of the method of loci is you can say, | |
46:44.080 --> 46:46.080 | |
okay, in my lobby, I'm going to put this thing. | |
46:46.080 --> 46:48.080 | |
And then in the bedroom, I put this one. | |
46:48.080 --> 46:49.080 | |
I go down the hall, I put this thing. | |
46:49.080 --> 46:51.080 | |
And then you want to recall those facts. | |
46:51.080 --> 46:52.080 | |
So to recall those things, | |
46:52.080 --> 46:53.080 | |
you just walk mentally. | |
46:53.080 --> 46:54.080 | |
You walk through your house. | |
46:54.080 --> 46:57.080 | |
You're mentally moving through a reference frame that you already had. | |
46:57.080 --> 47:00.080 | |
And that tells you there's two things that are really important about that. | |
47:00.080 --> 47:03.080 | |
It tells us the brain prefers to store things in reference frames. | |
47:03.080 --> 47:08.080 | |
And the method of recalling things or thinking, if you will, | |
47:08.080 --> 47:11.080 | |
is to move mentally through those reference frames. | |
47:11.080 --> 47:13.080 | |
You could move physically through some reference frames, | |
47:13.080 --> 47:16.080 | |
like I could physically move through the reference frame of this coffee cup. | |
47:16.080 --> 47:18.080 | |
I can also mentally move through the reference frame of the coffee cup, | |
47:18.080 --> 47:19.080 | |
imagining me touching it. | |
47:19.080 --> 47:22.080 | |
But I can also mentally move through my house. | |
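As a sketch of the method of loci in exactly these terms (hypothetical rooms and facts, just to show the store-by-location, recall-by-walking pattern):

```python
# Minimal sketch: store arbitrary facts at locations in a reference
# frame you already know (your house), then recall them by mentally
# walking through those locations in order.

house = ["lobby", "bedroom", "hallway"]         # a familiar reference frame
facts = ["buy milk", "call Alice", "pay rent"]  # random things to remember

palace = dict(zip(house, facts))  # storing = assigning facts to locations

for room in house:                # recalling = walking the frame
    print(room, "->", palace[room])
```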
47:22.080 --> 47:26.080 | |
And so now we can ask ourselves, are all concepts stored this way? | |
47:26.080 --> 47:32.080 | |
There was some recent research using human subjects in fMRI. | |
47:32.080 --> 47:36.080 | |
And I'm going to apologize for not knowing the name of the scientists who did this. | |
47:36.080 --> 47:41.080 | |
But what they did is they put humans in this fMRI machine, | |
47:41.080 --> 47:42.080 | |
which was one of these imaging machines. | |
47:42.080 --> 47:46.080 | |
And they gave the humans tasks to think about birds. | |
47:46.080 --> 47:49.080 | |
So they had different types of birds, and birds that looked big and small | |
47:49.080 --> 47:51.080 | |
and long necks and long legs, things like that. | |
47:51.080 --> 47:56.080 | |
And what they could tell from the fMRI was a very clever experiment. | |
47:56.080 --> 48:00.080 | |
They could tell, when humans were thinking about the birds, | |
48:00.080 --> 48:05.080 | |
that the knowledge of birds was arranged in a reference frame | |
48:05.080 --> 48:08.080 | |
similar to the ones that are used when you navigate in a room. | |
48:08.080 --> 48:10.080 | |
These are called grid cells. | |
48:10.080 --> 48:14.080 | |
And there are grid cell like patterns of activity in the neocortex when they do this. | |
48:14.080 --> 48:18.080 | |
So that, it's a very clever experiment. | |
48:18.080 --> 48:22.080 | |
And what it basically says is that even when you're thinking about something abstract | |
48:22.080 --> 48:24.080 | |
and you're not really thinking about it as a reference frame, | |
48:24.080 --> 48:27.080 | |
it tells us the brain is actually using a reference frame. | |
48:27.080 --> 48:29.080 | |
And it's using the same neural mechanisms. | |
48:29.080 --> 48:32.080 | |
These grid cells, which exist in the old part of the brain, | |
48:32.080 --> 48:36.080 | |
the entorhinal cortex, are the same basic neural mechanism that we propose, | |
48:36.080 --> 48:40.080 | |
or a similar mechanism, is used throughout the neocortex. | |
48:40.080 --> 48:44.080 | |
It's nature preserving this interesting way of creating reference frames. | |
48:44.080 --> 48:49.080 | |
And so now they have empirical evidence that when you think about concepts like birds | |
48:49.080 --> 48:53.080 | |
that you're using reference frames that are built on grid cells. | |
48:53.080 --> 48:55.080 | |
So that's similar to the method of loci. | |
48:55.080 --> 48:57.080 | |
But in this case, the birds are related, so | |
48:57.080 --> 49:01.080 | |
they create their own reference frame, which is consistent with bird space. | |
49:01.080 --> 49:03.080 | |
And when you think about something, you go through that. | |
49:03.080 --> 49:04.080 | |
You can make the same example. | |
49:04.080 --> 49:06.080 | |
Let's take mathematics. | |
49:06.080 --> 49:08.080 | |
Let's say you want to prove a conjecture. | |
49:08.080 --> 49:09.080 | |
Okay. | |
49:09.080 --> 49:10.080 | |
What is a conjecture? | |
49:10.080 --> 49:13.080 | |
A conjecture is a statement you believe to be true, | |
49:13.080 --> 49:15.080 | |
but you haven't proven it. | |
49:15.080 --> 49:17.080 | |
And so it might be an equation. | |
49:17.080 --> 49:19.080 | |
I want to show that this is equal to that. | |
49:19.080 --> 49:21.080 | |
And you have some places you start with. | |
49:21.080 --> 49:23.080 | |
You say, well, I know this is true and I know this is true. | |
49:23.080 --> 49:26.080 | |
And I think that maybe to get to the final proof, | |
49:26.080 --> 49:28.080 | |
I need to go through some intermediate results. | |
49:28.080 --> 49:33.080 | |
What I believe is happening is literally these equations | |
49:33.080 --> 49:36.080 | |
or these points are assigned to a reference frame, | |
49:36.080 --> 49:38.080 | |
a mathematical reference frame. | |
49:38.080 --> 49:40.080 | |
And when you do mathematical operations, | |
49:40.080 --> 49:42.080 | |
a simple one might be multiply or divide, | |
49:42.080 --> 49:44.080 | |
maybe a Laplace transform or something else. | |
49:44.080 --> 49:47.080 | |
That is like a movement in the reference frame of the math. | |
49:47.080 --> 49:50.080 | |
And so you're literally trying to discover a path | |
49:50.080 --> 49:56.080 | |
from one location to another location in a space of mathematics. | |
49:56.080 --> 49:58.080 | |
And if you can get to these intermediate results, | |
49:58.080 --> 50:00.080 | |
then you know your map is pretty good | |
50:00.080 --> 50:03.080 | |
and you know you're using the right operations. | |
50:03.080 --> 50:06.080 | |
Much of what we think of as solving hard problems | |
50:06.080 --> 50:09.080 | |
is designing the correct reference frame for that problem, | |
50:09.080 --> 50:12.080 | |
how to organize the information, and what behaviors | |
50:12.080 --> 50:15.080 | |
I want to use in that space to get me there. | |
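Here is one hedged way to render that picture in code: treat expressions as locations, operations as movements, and a proof as a discovered path from what you know to the conjecture. This is a toy search over integers, purely to illustrate the navigation framing, not a claim about how neurons implement it.

```python
# Toy sketch of "proving as navigation": states are expressions,
# operations are movements through the space, and a proof is a path
# from a known starting point to the goal.
from collections import deque

def find_path(start, goal, operations, limit=1000):
    """Breadth-first search for a sequence of operations from start to goal."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier and len(seen) < limit:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, op in operations.items():
            nxt = op(state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None  # no path found within the search limit

ops = {"double": lambda x: 2 * x, "add_one": lambda x: x + 1}
print(find_path(3, 14, ops))  # ['double', 'add_one', 'double']
```

Intermediate results the search passes through play the role of the lemmas Jeff describes: landmarks confirming the map and the operations are the right ones.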
50:15.080 --> 50:19.080 | |
Yeah, so if you dig in on an idea of this reference frame, | |
50:19.080 --> 50:21.080 | |
whether it's the math, you start with a set of axioms | |
50:21.080 --> 50:24.080 | |
to try to get to proving the conjecture. | |
50:24.080 --> 50:27.080 | |
Can you try to describe, maybe take a step back, | |
50:27.080 --> 50:30.080 | |
how you think of the reference frame in that context? | |
50:30.080 --> 50:35.080 | |
Is it the reference frame that the axioms are anchored in? | |
50:35.080 --> 50:38.080 | |
Is it the reference frame that might contain everything? | |
50:38.080 --> 50:41.080 | |
Is it a changing thing as you... | |
50:41.080 --> 50:43.080 | |
You have many, many reference frames. | |
50:43.080 --> 50:45.080 | |
In fact, the thousand brains theory of intelligence | |
50:45.080 --> 50:48.080 | |
says that every single thing in the world has its own reference frame. | |
50:48.080 --> 50:50.080 | |
So every word has its own reference frames. | |
50:50.080 --> 50:52.080 | |
And we can talk about this. | |
50:52.080 --> 50:55.080 | |
The mathematics work out; it's no problem for neurons to do this. | |
50:55.080 --> 50:58.080 | |
But how many reference frames does the coffee cup have? | |
50:58.080 --> 51:03.080 | |
Well, let's say you ask how many reference frames | |
51:03.080 --> 51:07.080 | |
could the column in my finger that's touching the coffee cup have | |
51:07.080 --> 51:10.080 | |
because there are many, many models of the coffee cup. | |
51:10.080 --> 51:12.080 | |
So there is no model of the coffee cup. | |
51:12.080 --> 51:14.080 | |
There are many models of the coffee cup. | |
51:14.080 --> 51:17.080 | |
And you can say, well, how many different things can my finger learn? | |
51:17.080 --> 51:19.080 | |
Is this the question you want to ask? | |
51:19.080 --> 51:21.080 | |
Imagine I say every concept, every idea, | |
51:21.080 --> 51:23.080 | |
everything you've ever known about, you can say, | |
51:23.080 --> 51:28.080 | |
I know that thing has a reference frame associated with it. | |
51:28.080 --> 51:30.080 | |
And what we do when we build composite objects, | |
51:30.080 --> 51:34.080 | |
we assign reference frames to point at other reference frames. | |
51:34.080 --> 51:37.080 | |
So my coffee cup has multiple components to it. | |
51:37.080 --> 51:38.080 | |
It's got a rim. | |
51:38.080 --> 51:39.080 | |
It's got a cylinder. | |
51:39.080 --> 51:40.080 | |
It's got a handle. | |
51:40.080 --> 51:43.080 | |
And those things have their own reference frames. | |
51:43.080 --> 51:45.080 | |
And they're assigned to a master reference frame, | |
51:45.080 --> 51:46.080 | |
which is called this cup. | |
51:46.080 --> 51:48.080 | |
And now it has this Numenta logo on it. | |
51:48.080 --> 51:50.080 | |
Well, that's something that exists elsewhere in the world. | |
51:50.080 --> 51:51.080 | |
It's its own thing. | |
51:51.080 --> 51:52.080 | |
So it has its own reference frame. | |
51:52.080 --> 51:56.080 | |
So we now have to say, how can I assign the Numenta logo reference frame | |
51:56.080 --> 51:59.080 | |
onto the cylinder or onto the coffee cup? | |
51:59.080 --> 52:04.080 | |
So we talked about this in the paper that came out in December | |
52:04.080 --> 52:06.080 | |
of last year. | |
52:06.080 --> 52:09.080 | |
The idea of how you can assign reference frames to reference frames, | |
52:09.080 --> 52:10.080 | |
how neurons could do this. | |
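A minimal sketch of that composition idea, with invented class and attribute names: each component carries its own reference frame, and a parent frame stores children at offsets within itself.

```python
# Toy sketch of composite objects: a parent reference frame holds child
# reference frames at locations (offsets) expressed in the parent's frame.
# The cup/handle/logo decomposition and 2-D offsets are illustrative.

class RefFrame:
    def __init__(self, name):
        self.name = name
        self.children = []  # list of (offset_in_parent_frame, child)

    def attach(self, offset, child):
        """Assign a child's reference frame to a location in this frame."""
        self.children.append((offset, child))

    def flatten(self, origin=(0, 0)):
        """Locations of all parts, expressed in the top-level frame."""
        parts = [(origin, self.name)]
        for (dx, dy), child in self.children:
            parts += child.flatten((origin[0] + dx, origin[1] + dy))
        return parts

cup = RefFrame("cup")
handle, logo = RefFrame("handle"), RefFrame("logo")
cup.attach((2, 0), handle)  # handle's frame anchored on the cup's side
cup.attach((0, 1), logo)    # logo's frame anchored on the cylinder
print(cup.flatten())
```

The point of the structure is that the logo remains its own thing: the same child frame could be attached to a different parent without relearning it.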
52:10.080 --> 52:14.080 | |
So my question is, even though you mentioned reference frames a lot, | |
52:14.080 --> 52:18.080 | |
I almost feel it's really useful to dig into how you think | |
52:18.080 --> 52:20.080 | |
of what a reference frame is. | |
52:20.080 --> 52:22.080 | |
It was already helpful for me to understand that you think | |
52:22.080 --> 52:26.080 | |
of reference frames as something there are a lot of. | |
52:26.080 --> 52:29.080 | |
OK, so let's just say that we're going to have some neurons | |
52:29.080 --> 52:32.080 | |
in the brain, not many actually, 10,000, 20,000, | |
52:32.080 --> 52:34.080 | |
that are going to create a whole bunch of reference frames. | |
52:34.080 --> 52:35.080 | |
What does it mean? | |
52:35.080 --> 52:37.080 | |
What is a reference frame? | |
52:37.080 --> 52:40.080 | |
First of all, these reference frames are different than the ones | |
52:40.080 --> 52:42.080 | |
you might be used to. | |
52:42.080 --> 52:43.080 | |
We know lots of reference frames. | |
52:43.080 --> 52:45.080 | |
For example, we know the Cartesian coordinates, | |
52:45.080 --> 52:47.080 | |
XYZ, that's a type of reference frame. | |
52:47.080 --> 52:50.080 | |
We know longitude and latitude. | |
52:50.080 --> 52:52.080 | |
That's a different type of reference frame. | |
52:52.080 --> 52:55.080 | |
If I look at a printed map, it might have columns, | |
52:55.080 --> 52:59.080 | |
A through M and rows, 1 through 20, | |
52:59.080 --> 53:01.080 | |
that's a different type of reference frame. | |
53:01.080 --> 53:04.080 | |
It's kind of a Cartesian reference frame. | |
53:04.080 --> 53:07.080 | |
The interesting thing about the reference frames in the brain, | |
53:07.080 --> 53:09.080 | |
and we know this because these have been established | |
53:09.080 --> 53:12.080 | |
through neuroscience studying the entorhinal cortex. | |
53:12.080 --> 53:13.080 | |
So I'm not speculating here. | |
53:13.080 --> 53:16.080 | |
This is known neuroscience in an old part of the brain. | |
53:16.080 --> 53:18.080 | |
The way these cells create reference frames, | |
53:18.080 --> 53:20.080 | |
they have no origin. | |
53:20.080 --> 53:24.080 | |
It's more like, you have a point, | |
53:24.080 --> 53:26.080 | |
a point in some space, | |
53:26.080 --> 53:29.080 | |
and, given a particular movement, | |
53:29.080 --> 53:32.080 | |
you can then tell what the next point should be. | |
53:32.080 --> 53:34.080 | |
And you can then tell what the next point would be. | |
53:34.080 --> 53:35.080 | |
And so on. | |
53:35.080 --> 53:40.080 | |
You can use this to calculate how to get from one point to another. | |
53:40.080 --> 53:43.080 | |
So how do I get from my house to my home, | |
53:43.080 --> 53:45.080 | |
or how do I get my finger from the side of my cup | |
53:45.080 --> 53:46.080 | |
to the top of the cup? | |
53:46.080 --> 53:52.080 | |
How do I get from the axioms to the conjecture? | |
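One way to sketch an origin-less reference frame of this kind (illustrative names and displacements): no point has absolute coordinates; you only store the movements that link points, and you integrate them to get from anywhere to anywhere.

```python
# Toy sketch of an origin-less reference frame: there is no (0, 0);
# only relative movements between named points are stored, and a route
# is computed by integrating the movements along a path.

movements = {
    ("side of cup", "top of cup"): (0, 3),
    ("top of cup", "handle"): (2, -1),
}

def displacement(path):
    """Integrate the stored movements along a path of named points."""
    dx, dy = 0, 0
    for a, b in zip(path, path[1:]):
        mx, my = movements[(a, b)]
        dx, dy = dx + mx, dy + my
    return dx, dy

# How do I get from the side of the cup to the handle?
print(displacement(["side of cup", "top of cup", "handle"]))  # (2, 2)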
53:52.080 --> 53:54.080 | |
So it's a different type of reference frame. | |
53:54.080 --> 53:57.080 | |
And if you want, I can describe it in more detail. | |
53:57.080 --> 53:59.080 | |
I can paint a picture of how you might want to think about that. | |
53:59.080 --> 54:00.080 | |
It's really helpful to think of it as | |
54:00.080 --> 54:02.080 | |
something you can move through. | |
54:02.080 --> 54:03.080 | |
Yeah. | |
54:03.080 --> 54:08.080 | |
But is it helpful to think of it as spatial in some sense, | |
54:08.080 --> 54:09.080 | |
or is there something? | |
54:09.080 --> 54:11.080 | |
No, it's definitely spatial. | |
54:11.080 --> 54:13.080 | |
It's spatial in a mathematical sense. | |
54:13.080 --> 54:14.080 | |
How many dimensions? | |
54:14.080 --> 54:16.080 | |
Can it be a crazy number of dimensions? | |
54:16.080 --> 54:17.080 | |
Well, that's an interesting question. | |
54:17.080 --> 54:20.080 | |
In the old part of the brain, the entorhinal cortex, | |
54:20.080 --> 54:22.080 | |
they studied rats. | |
54:22.080 --> 54:24.080 | |
And initially, it looks like, oh, this is just two dimensional. | |
54:24.080 --> 54:27.080 | |
It's like the rat is in some box in a maze or whatever, | |
54:27.080 --> 54:29.080 | |
and the rat is using these two dimensional | |
54:29.080 --> 54:32.080 | |
reference frames to know where it is in the maze. | |
54:32.080 --> 54:35.080 | |
We say, OK, well, what about bats? | |
54:35.080 --> 54:38.080 | |
That's a mammal, and they fly in three dimensional space. | |
54:38.080 --> 54:39.080 | |
How do they do that? | |
54:39.080 --> 54:41.080 | |
They seem to know where they are, right? | |
54:41.080 --> 54:44.080 | |
So this is a current area of active research, | |
54:44.080 --> 54:47.080 | |
and it seems like somehow the neurons in the entorhinal cortex | |
54:47.080 --> 54:50.080 | |
can learn three dimensional space. | |
54:50.080 --> 54:55.080 | |
Two members of our team, along with Ila Fiete from MIT, | |
54:55.080 --> 54:59.080 | |
just released a paper, literally last week; | |
54:59.080 --> 55:03.080 | |
it's on bioRxiv, where they show, | |
55:03.080 --> 55:06.080 | |
the way these things work, and unless you want to, | |
55:06.080 --> 55:10.080 | |
I won't get into the detail, but grid cells | |
55:10.080 --> 55:12.080 | |
can represent any n dimensional space. | |
55:12.080 --> 55:15.080 | |
It's not inherently limited. | |
55:15.080 --> 55:18.080 | |
You can think of it this way: | |
55:18.080 --> 55:21.080 | |
the way it works is you have a bunch of two dimensional slices. | |
55:21.080 --> 55:22.080 | |
That's the way these things work. | |
55:22.080 --> 55:24.080 | |
There's a whole bunch of two dimensional models, | |
55:24.080 --> 55:27.080 | |
and you can slice up any n dimensional space | |
55:27.080 --> 55:29.080 | |
with two dimensional projections. | |
55:29.080 --> 55:31.080 | |
And you could have one dimensional models. | |
55:31.080 --> 55:34.080 | |
So there's nothing inherent about the mathematics | |
55:34.080 --> 55:36.080 | |
about the way the neurons do this, | |
55:36.080 --> 55:39.080 | |
which constrains the dimensionality of the space, | |
55:39.080 --> 55:41.080 | |
which I think is important. | |
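A small sketch of that slicing argument (toy coordinates, an assumed representation rather than the paper's mechanism): a point in n-dimensional space can be held as a bank of two-dimensional projections, and nothing in the scheme fixes n.

```python
# Toy sketch: represent an n-dimensional "location" as all of its
# two-dimensional projections (pairs of coordinates). The number of
# dimensions is arbitrary, which is the point being made above.
from itertools import combinations

def two_d_slices(point):
    """All 2-D projections of an n-dimensional point."""
    dims = range(len(point))
    return {(i, j): (point[i], point[j]) for i, j in combinations(dims, 2)}

# A 4-D point (say, a location in some abstract concept space)
# becomes six 2-D slices:
print(two_d_slices((1.0, 2.5, -3.0, 0.5)))
```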
55:41.080 --> 55:44.080 | |
So obviously, I have a three dimensional map of this cup. | |
55:44.080 --> 55:46.080 | |
Maybe it's even more than that, I don't know. | |
55:46.080 --> 55:48.080 | |
But it's clearly a three dimensional map of the cup. | |
55:48.080 --> 55:50.080 | |
I don't just have a projection of the cup. | |
55:50.080 --> 55:52.080 | |
But when I think about birds, | |
55:52.080 --> 55:53.080 | |
or when I think about mathematics, | |
55:53.080 --> 55:55.080 | |
perhaps it's more than three dimensions. | |
55:55.080 --> 55:56.080 | |
Who knows? | |
55:56.080 --> 56:00.080 | |
So in terms of each individual column | |
56:00.080 --> 56:04.080 | |
building up more and more information over time, | |
56:04.080 --> 56:06.080 | |
do you think that mechanism is well understood? | |
56:06.080 --> 56:10.080 | |
In your mind, you've proposed a lot of architectures there. | |
56:10.080 --> 56:14.080 | |
Is that a key piece, or is the big piece | |
56:14.080 --> 56:16.080 | |
the thousand brains theory of intelligence, | |
56:16.080 --> 56:18.080 | |
the ensemble of it all? | |
56:18.080 --> 56:19.080 | |
Well, I think they're both big. | |
56:19.080 --> 56:21.080 | |
I mean, clearly the concept, as a theorist, | |
56:21.080 --> 56:23.080 | |
the concept is most exciting, right? | |
56:23.080 --> 56:24.080 | |
A high level concept. | |
56:24.080 --> 56:25.080 | |
A high level concept. | |
56:25.080 --> 56:26.080 | |
This is a totally new way of thinking about | |
56:26.080 --> 56:27.080 | |
how the neocortex works. | |
56:27.080 --> 56:29.080 | |
So that is appealing. | |
56:29.080 --> 56:31.080 | |
It has all these ramifications. | |
56:31.080 --> 56:34.080 | |
And with that, as a framework for how the brain works, | |
56:34.080 --> 56:35.080 | |
you can make all kinds of predictions | |
56:35.080 --> 56:36.080 | |
and solve all kinds of problems. | |
56:36.080 --> 56:38.080 | |
Now we're trying to work through many of these details right now. | |
56:38.080 --> 56:40.080 | |
Okay, how do the neurons actually do this? | |
56:40.080 --> 56:42.080 | |
Well, it turns out, if you think about grid cells | |
56:42.080 --> 56:44.080 | |
and place cells in the old parts of the brain, | |
56:44.080 --> 56:46.080 | |
there's a lot that's known about them, | |
56:46.080 --> 56:47.080 | |
but there's still some mysteries. | |
56:47.080 --> 56:49.080 | |
There's a lot of debate about exactly the details, | |
56:49.080 --> 56:50.080 | |
of how these work, and what they're doing. | |
56:50.080 --> 56:52.080 | |
And we have that same level of detail, | |
56:52.080 --> 56:54.080 | |
that same level of concern. | |
56:54.080 --> 56:56.080 | |
What we spend most of our time here doing | |
56:56.080 --> 56:59.080 | |
is trying to make a very good list | |
56:59.080 --> 57:02.080 | |
of the things we don't understand yet. | |
57:02.080 --> 57:04.080 | |
That's the key part here. | |
57:04.080 --> 57:05.080 | |
What are the constraints? | |
57:05.080 --> 57:07.080 | |
It's not like, oh, this seems to work, we're done. | |
57:07.080 --> 57:09.080 | |
It's like, okay, it kind of works, | |
57:09.080 --> 57:11.080 | |
but these are other things we know it has to do, | |
57:11.080 --> 57:13.080 | |
and it's not doing those yet. | |
57:13.080 --> 57:15.080 | |
I would say we're well on the way here. | |
57:15.080 --> 57:17.080 | |
We're not done yet. | |
57:17.080 --> 57:20.080 | |
There's a lot of trickiness to this system, | |
57:20.080 --> 57:23.080 | |
but the basic principles about how different layers | |
57:23.080 --> 57:27.080 | |
in the neocortex are doing much of this, we understand. | |
57:27.080 --> 57:29.080 | |
But there's some fundamental parts | |
57:29.080 --> 57:30.080 | |
that we don't understand as well. | |
57:30.080 --> 57:34.080 | |
So what would you say is one of the harder open problems, | |
57:34.080 --> 57:37.080 | |
or one of the ones that have been bothering you, | |
57:37.080 --> 57:39.080 | |
keeping you up at night the most? | |
57:39.080 --> 57:41.080 | |
Well, right now, this is a detailed thing | |
57:41.080 --> 57:43.080 | |
that wouldn't apply to most people, okay? | |
57:43.080 --> 57:44.080 | |
Sure. | |
57:44.080 --> 57:45.080 | |
But you want me to answer that question? | |
57:45.080 --> 57:46.080 | |
Yeah, please. | |
57:46.080 --> 57:49.080 | |
We've talked about it as if, oh, to predict | |
57:49.080 --> 57:51.080 | |
what you're going to sense on this coffee cup, | |
57:51.080 --> 57:54.080 | |
I need to know where my finger's going to be on the coffee cup. | |
57:54.080 --> 57:56.080 | |
That is true, but it's insufficient. | |
57:56.080 --> 57:59.080 | |
Think about my finger touching the edge of the coffee cup. | |
57:59.080 --> 58:02.080 | |
My finger can touch it at different orientations. | |
58:02.080 --> 58:05.080 | |
I can rotate my finger around here. | |
58:05.080 --> 58:06.080 | |
And the prediction doesn't change. | |
58:06.080 --> 58:09.080 | |
I can still make that prediction, so somehow | |
58:09.080 --> 58:10.080 | |
it's not just the location. | |
58:10.080 --> 58:13.080 | |
There's an orientation component of this as well. | |
58:13.080 --> 58:15.080 | |
This is known in the old part of the brain, too. | |
58:15.080 --> 58:17.080 | |
There's things called head direction cells, | |
58:17.080 --> 58:18.080 | |
which signal which way the rat is facing. | |
58:18.080 --> 58:20.080 | |
It's the same kind of basic idea. | |
58:20.080 --> 58:23.080 | |
So if my finger were a rat, you know, in three dimensions, | |
58:23.080 --> 58:25.080 | |
I have a three dimensional orientation, | |
58:25.080 --> 58:27.080 | |
and I have a three dimensional location. | |
58:27.080 --> 58:29.080 | |
If I were a rat, I would have, | |
58:29.080 --> 58:31.080 | |
I think, a two dimensional location | |
58:31.080 --> 58:33.080 | |
and a one dimensional orientation, like, | |
58:33.080 --> 58:35.080 | |
which way is it facing? | |
58:35.080 --> 58:38.080 | |
So how the two components work together, | |
58:38.080 --> 58:41.080 | |
how I combine the orientation, | |
58:41.080 --> 58:43.080 | |
the orientation of my sensor, | |
58:43.080 --> 58:47.080 | |
as well as the location, | |
58:47.080 --> 58:49.080 | |
is a tricky problem. | |
58:49.080 --> 58:52.080 | |
And I think I've made progress on it. | |
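To illustrate why location alone is insufficient (a toy sketch with made-up angles, not the proposed neural solution): the feature a sensor reads at a fixed location still depends on the sensor's own orientation, so the full pose matters.

```python
# Toy sketch: the sensed feature depends on the sensor's pose
# (location + orientation), not location alone. Here the "feature"
# is an edge direction expressed in the cup's reference frame.

def sensed_edge_angle(edge_angle_in_object, sensor_orientation):
    """Rotate an object-frame feature into the sensor's coordinates."""
    return (edge_angle_in_object - sensor_orientation) % 360

rim_edge = 90.0  # edge direction at one rim location, in the cup's frame
for finger_angle in (0.0, 45.0, 90.0):  # same location, rotated finger
    print(finger_angle, "->", sensed_edge_angle(rim_edge, finger_angle))
```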
58:52.080 --> 58:55.080 | |
So, to ask a bigger version of that, | |
58:55.080 --> 58:57.080 | |
the perspective is super interesting, | |
58:57.080 --> 58:58.080 | |
but super specific. | |
58:58.080 --> 58:59.080 | |
Yeah, I warned you. | |
58:59.080 --> 59:01.080 | |
No, no, no, it's really good, | |
59:01.080 --> 59:04.080 | |
but there's a more general version of that. | |
59:04.080 --> 59:06.080 | |
Do you think context matters? | |
59:06.080 --> 59:10.080 | |
The fact that we are in a building in North America, | |
59:10.080 --> 59:15.080 | |
that we live in a day and age where we have mugs, | |
59:15.080 --> 59:18.080 | |
I mean, there's all this extra information | |
59:18.080 --> 59:22.080 | |
that you bring to the table about everything else in the room | |
59:22.080 --> 59:24.080 | |
that's outside of just the coffee cup. | |
59:24.080 --> 59:25.080 | |
Of course it is. | |
59:25.080 --> 59:27.080 | |
How does it get connected, do you think? | |
59:27.080 --> 59:30.080 | |
Yeah, and that is another really interesting question. | |
59:30.080 --> 59:32.080 | |
I'm going to throw that under the rubric | |
59:32.080 --> 59:34.080 | |
or the name of attentional problems. | |
59:34.080 --> 59:36.080 | |
First of all, we have this model. | |
59:36.080 --> 59:37.080 | |
I have many, many models. | |
59:37.080 --> 59:39.080 | |
And also the question, does it matter? | |
59:39.080 --> 59:41.080 | |
Well, it matters for certain things. | |
59:41.080 --> 59:42.080 | |
Of course it does. | |
59:42.080 --> 59:44.080 | |
Maybe what we think about as a coffee cup | |
59:44.080 --> 59:47.080 | |
in another part of the world is viewed as something completely different. | |
59:47.080 --> 59:51.080 | |
Or maybe our logo, which is very benign in this part of the world, | |
59:51.080 --> 59:53.080 | |
it means something very different in another part of the world. | |
59:53.080 --> 59:56.080 | |
So those things do matter. | |
59:56.080 --> 1:00:00.080 | |
I think the way to think about it is the following, | |
1:00:00.080 --> 1:00:01.080 | |
one way to think about it, | |
1:00:01.080 --> 1:00:03.080 | |
is we have all these models of the world. | |
1:00:03.080 --> 1:00:06.080 | |
And we model everything. | |
1:00:06.080 --> 1:00:08.080 | |
And as I said earlier, I kind of snuck it in there. | |
1:00:08.080 --> 1:00:12.080 | |
Our models are actually, we build composite structures. | |
1:00:12.080 --> 1:00:15.080 | |
So every object is composed of other objects, | |
1:00:15.080 --> 1:00:16.080 | |
which are composed of other objects, | |
1:00:16.080 --> 1:00:18.080 | |
and they become members of other objects. | |
1:00:18.080 --> 1:00:21.080 | |
So this room is chairs and a table and a room and walls and so on. | |
1:00:21.080 --> 1:00:24.080 | |
Now we can just arrange these things in a certain way. | |
1:00:24.080 --> 1:00:27.080 | |
And you go, oh, that's in the Numenta conference room. | |
1:00:27.080 --> 1:00:32.080 | |
So, and what we do is when we go around the world, | |
1:00:32.080 --> 1:00:34.080 | |
when we experience the world, | |
1:00:34.080 --> 1:00:36.080 | |
by walking into a room, for example, | |
1:00:36.080 --> 1:00:38.080 | |
the first thing I do is like, oh, I'm in this room. | |
1:00:38.080 --> 1:00:39.080 | |
Do I recognize the room? | |
1:00:39.080 --> 1:00:42.080 | |
Then I can say, oh, look, there's a table here. | |
1:00:42.080 --> 1:00:44.080 | |
And by attending to the table, | |
1:00:44.080 --> 1:00:46.080 | |
I'm then assigning this table to the context of the room. | |
1:00:46.080 --> 1:00:48.080 | |
Then I say, oh, on the table, there's a coffee cup. | |
1:00:48.080 --> 1:00:50.080 | |
Oh, and on the cup, there's a logo. | |
1:00:50.080 --> 1:00:52.080 | |
And in the logo, there's the word Numenta. | |
1:00:52.080 --> 1:00:54.080 | |
So if you look in the logo, there's the letter E. | |
1:00:54.080 --> 1:00:56.080 | |
And look, it has an unusual serif. | |
1:00:56.080 --> 1:00:59.080 | |
It doesn't actually, but I pretend it does. | |
1:00:59.080 --> 1:01:05.080 | |
So the point is your attention is kind of drilling deep in and out | |
1:01:05.080 --> 1:01:07.080 | |
of these nested structures. | |
1:01:07.080 --> 1:01:09.080 | |
And I can pop back up and I can pop back down. | |
1:01:11.080 --> 1:01:13.080 | |
So when I attend to the coffee cup, | |
1:01:13.080 --> 1:01:15.080 | |
I haven't lost the context of everything else, | |
1:01:15.080 --> 1:01:19.080 | |
but it's sort of, there's this sort of nested structure. | |
1:01:19.080 --> 1:01:22.080 | |
The attention filters the reference frame formation | |
1:01:22.080 --> 1:01:24.080 | |
for that particular period of time. | |
1:01:24.080 --> 1:01:25.080 | |
Yes. | |
1:01:25.080 --> 1:01:28.080 | |
Basically, moment to moment, you attend to subcomponents, | |
1:01:28.080 --> 1:01:30.080 | |
and then you can attend to subcomponents of subcomponents. | |
1:01:30.080 --> 1:01:31.080 | |
You can move up and down. | |
1:01:32.080 --> 1:01:33.080 | |
We do that all the time. | |
1:01:33.080 --> 1:01:35.080 | |
You're not even aware of it. Now that I'm aware of it, | |
1:01:35.080 --> 1:01:37.080 | |
I'm very conscious of it. | |
1:01:37.080 --> 1:01:40.080 | |
But most people don't even think about this. | |
1:01:40.080 --> 1:01:42.080 | |
You know, you just walk in a room and you don't say, | |
1:01:42.080 --> 1:01:43.080 | |
oh, I looked at the chair and I looked at the board | |
1:01:43.080 --> 1:01:44.080 | |
and looked at that word on the board | |
1:01:44.080 --> 1:01:45.080 | |
and I looked over here. | |
1:01:45.080 --> 1:01:46.080 | |
What's going on? | |
1:01:46.080 --> 1:01:47.080 | |
Right. | |
1:01:47.080 --> 1:01:50.080 | |
So what percentage of your day are you deeply aware of this? | |
1:01:50.080 --> 1:01:53.080 | |
In what part can you actually relax and just be Jeff? | |
1:01:53.080 --> 1:01:55.080 | |
Me personally, like my personal day. | |
1:01:55.080 --> 1:01:56.080 | |
Yeah. | |
1:01:56.080 --> 1:02:01.080 | |
Unfortunately, I'm afflicted with too much of the former. | |
1:02:01.080 --> 1:02:03.080 | |
Well, fortunately or unfortunately. | |
1:02:03.080 --> 1:02:04.080 | |
Yeah. | |
1:02:04.080 --> 1:02:05.080 | |
You don't think it's useful? | |
1:02:05.080 --> 1:02:06.080 | |
Oh, it is useful. | |
1:02:06.080 --> 1:02:07.080 | |
Totally useful. | |
1:02:07.080 --> 1:02:09.080 | |
I think about this stuff almost all the time. | |
1:02:09.080 --> 1:02:13.080 | |
And one of my primary ways of thinking is | |
1:02:13.080 --> 1:02:14.080 | |
when I'm asleep at night, | |
1:02:14.080 --> 1:02:16.080 | |
I always wake up in the middle of the night | |
1:02:16.080 --> 1:02:19.080 | |
and I stay awake for at least an hour with my eyes shut | |
1:02:19.080 --> 1:02:21.080 | |
in sort of a half sleep state thinking about these things. | |
1:02:21.080 --> 1:02:23.080 | |
I come up with answers to problems very often | |
1:02:23.080 --> 1:02:25.080 | |
in that sort of half sleeping state. | |
1:02:25.080 --> 1:02:27.080 | |
I think about it on my bike ride, I think about it on walks. | |
1:02:27.080 --> 1:02:29.080 | |
I'm just constantly thinking about this. | |
1:02:29.080 --> 1:02:34.080 | |
I have to almost schedule time to not think about this stuff | |
1:02:34.080 --> 1:02:37.080 | |
because it's very, it's mentally taxing. | |
1:02:37.080 --> 1:02:39.080 | |
Are you, when you're thinking about this stuff, | |
1:02:39.080 --> 1:02:41.080 | |
are you thinking introspectively, | |
1:02:41.080 --> 1:02:43.080 | |
like almost taking a step outside of yourself | |
1:02:43.080 --> 1:02:45.080 | |
and trying to figure out what is your mind doing right now? | |
1:02:45.080 --> 1:02:48.080 | |
I do that all the time, but that's not all I do. | |
1:02:48.080 --> 1:02:50.080 | |
I'm constantly observing myself. | |
1:02:50.080 --> 1:02:52.080 | |
So as soon as I started thinking about grid cells, | |
1:02:52.080 --> 1:02:54.080 | |
for example, and getting into that, | |
1:02:54.080 --> 1:02:57.080 | |
I started saying, oh, well, grid cells give me my sense of place | |
1:02:57.080 --> 1:02:58.080 | |
in the world. | |
1:02:58.080 --> 1:02:59.080 | |
That's where you know where you are. | |
1:02:59.080 --> 1:03:01.080 | |
And it's interesting, we always have a sense of where we are | |
1:03:01.080 --> 1:03:02.080 | |
unless we're lost. | |
1:03:02.080 --> 1:03:05.080 | |
And so I started at night when I got up to go to the bathroom, | |
1:03:05.080 --> 1:03:07.080 | |
I would start trying to do it completely with my eyes closed | |
1:03:07.080 --> 1:03:09.080 | |
all the time and I would test my sense of grid cells. | |
1:03:09.080 --> 1:03:13.080 | |
I would walk five feet and say, okay, I think I'm here. | |
1:03:13.080 --> 1:03:14.080 | |
Am I really there? | |
1:03:14.080 --> 1:03:15.080 | |
What's my error? | |
1:03:15.080 --> 1:03:17.080 | |
And then I would calculate my error again and see how the errors | |
1:03:17.080 --> 1:03:18.080 | |
accumulate. | |
1:03:18.080 --> 1:03:20.080 | |
So even something as simple as getting up in the middle of the | |
1:03:20.080 --> 1:03:22.080 | |
night to go to the bathroom, I'm testing these theories out. | |
1:03:22.080 --> 1:03:23.080 | |
It's kind of fun. | |
1:03:23.080 --> 1:03:25.080 | |
I mean, the coffee cup is an example of that too. | |
1:03:25.080 --> 1:03:30.080 | |
So I find that these sorts of everyday introspections | |
1:03:30.080 --> 1:03:32.080 | |
are actually quite helpful. | |
1:03:32.080 --> 1:03:34.080 | |
It doesn't mean you can ignore the science. | |
1:03:34.080 --> 1:03:38.080 | |
I mean, I spend hours every day reading ridiculously complex | |
1:03:38.080 --> 1:03:39.080 | |
papers. | |
1:03:39.080 --> 1:03:41.080 | |
That's not nearly as much fun, | |
1:03:41.080 --> 1:03:44.080 | |
but you have to sort of build up those constraints and the knowledge | |
1:03:44.080 --> 1:03:47.080 | |
about the field and who's doing what and what exactly they think | |
1:03:47.080 --> 1:03:48.080 | |
is happening here. | |
1:03:48.080 --> 1:03:51.080 | |
And then you can sit back and say, okay, let's try to piece it | |
1:03:51.080 --> 1:03:52.080 | |
all together. | |
1:03:52.080 --> 1:03:56.080 | |
Let's come up with something, you know. In this group here, | |
1:03:56.080 --> 1:03:58.080 | |
people, they know I do this. | |
1:03:58.080 --> 1:03:59.080 | |
I do this all the time. | |
1:03:59.080 --> 1:04:01.080 | |
I come in with these introspective ideas and say, well, | |
1:04:01.080 --> 1:04:02.080 | |
have we ever thought about this? | |
1:04:02.080 --> 1:04:04.080 | |
Now watch, well, let's all do this together. | |
1:04:04.080 --> 1:04:06.080 | |
And it's helpful. | |
1:04:06.080 --> 1:04:10.080 | |
But if all you did was that, | |
1:04:10.080 --> 1:04:12.080 | |
then you're just making up stuff, right? | |
1:04:12.080 --> 1:04:15.080 | |
But if you're constraining it by the reality of the neuroscience, | |
1:04:15.080 --> 1:04:17.080 | |
then it's really helpful. | |
1:04:17.080 --> 1:04:22.080 | |
So let's talk a little bit about deep learning and the successes | |
1:04:22.080 --> 1:04:28.080 | |
in the applied space of neural networks, the idea of training models | |
1:04:28.080 --> 1:04:31.080 | |
on data and these simple computational units, | |
1:04:31.080 --> 1:04:37.080 | |
artificial neurons that with back propagation have statistical | |
1:04:37.080 --> 1:04:42.080 | |
ways of being able to generalize from the training set on to | |
1:04:42.080 --> 1:04:44.080 | |
data that is similar to that training set. | |
1:04:44.080 --> 1:04:48.080 | |
So where do you think are the limitations of those approaches? | |
1:04:48.080 --> 1:04:52.080 | |
What do you think are strengths relative to your major efforts | |
1:04:52.080 --> 1:04:55.080 | |
of constructing a theory of human intelligence? | |
1:04:55.080 --> 1:04:56.080 | |
Yeah. | |
1:04:56.080 --> 1:04:58.080 | |
Well, I'm not an expert in this field. | |
1:04:58.080 --> 1:04:59.080 | |
I'm somewhat knowledgeable. | |
1:04:59.080 --> 1:05:00.080 | |
So, but I'm not. | |
1:05:00.080 --> 1:05:02.080 | |
A little bit, just your intuition. | |
1:05:02.080 --> 1:05:04.080 | |
Well, I have a little bit more than intuition, | |
1:05:04.080 --> 1:05:07.080 | |
but I just want to say like, you know, one of the things that you asked me, | |
1:05:07.080 --> 1:05:09.080 | |
do I spend all my time thinking about neuroscience? | |
1:05:09.080 --> 1:05:10.080 | |
I do. | |
1:05:10.080 --> 1:05:12.080 | |
That's to the exclusion of thinking about things like convolutional neural | |
1:05:12.080 --> 1:05:13.080 | |
networks. | |
1:05:13.080 --> 1:05:15.080 | |
But I try to stay current. | |
1:05:15.080 --> 1:05:18.080 | |
So look, I think it's great the progress they've made. | |
1:05:18.080 --> 1:05:19.080 | |
It's fantastic. | |
1:05:19.080 --> 1:05:23.080 | |
And as I mentioned earlier, it's very highly useful for many things. | |
1:05:23.080 --> 1:05:27.080 | |
The models that we have today are actually derived from a lot of | |
1:05:27.080 --> 1:05:28.080 | |
neuroscience principles. | |
1:05:28.080 --> 1:05:31.080 | |
They are distributed processing systems and distributed memory systems, | |
1:05:31.080 --> 1:05:33.080 | |
and that's how the brain works. | |
1:05:33.080 --> 1:05:36.080 | |
And they use things that we might call neurons, | |
1:05:36.080 --> 1:05:37.080 | |
but they're really not neurons at all. | |
1:05:37.080 --> 1:05:39.080 | |
So we can just say they're not really neurons. | |
1:05:39.080 --> 1:05:42.080 | |
So they're distributed processing systems. | |
1:05:42.080 --> 1:05:47.080 | |
And the notion of hierarchy, that came also from neuroscience. | |
1:05:47.080 --> 1:05:50.080 | |
And so there's a lot of things, the learning rules, basically, | |
1:05:50.080 --> 1:05:52.080 | |
not backprop, but other, you know, sorts of Hebbian learning. | |
1:05:52.080 --> 1:05:55.080 | |
I'd be curious, you say they're not neurons at all. | |
1:05:55.080 --> 1:05:56.080 | |
Can you describe in which way? | |
1:05:56.080 --> 1:06:00.080 | |
I mean, some of it is obvious, but I'd be curious if you have specific | |
1:06:00.080 --> 1:06:02.080 | |
ways in which you think are the biggest differences. | |
1:06:02.080 --> 1:06:06.080 | |
Yeah, we had a paper in 2016 called Why Neurons Have Thousands of Synapses. | |
1:06:06.080 --> 1:06:11.080 | |
And if you read that paper, you'll know what I'm talking about here. | |
1:06:11.080 --> 1:06:14.080 | |
A real neuron in the brain is a complex thing. | |
1:06:14.080 --> 1:06:18.080 | |
Let's just start with the synapses on it, which is a connection between neurons. | |
1:06:18.080 --> 1:06:24.080 | |
Real neurons can have anywhere from 5,000 to 30,000 synapses on them. | |
1:06:24.080 --> 1:06:30.080 | |
The ones near the cell body, the ones that are close to the soma, the cell body, | |
1:06:30.080 --> 1:06:33.080 | |
those are like the ones that people model in artificial neurons. | |
1:06:33.080 --> 1:06:35.080 | |
There's a few hundred of those. | |
1:06:35.080 --> 1:06:37.080 | |
Maybe they can affect the cell. | |
1:06:37.080 --> 1:06:39.080 | |
They can make the cell become active. | |
1:06:39.080 --> 1:06:43.080 | |
95% of the synapses can't do that. | |
1:06:43.080 --> 1:06:44.080 | |
They're too far away. | |
1:06:44.080 --> 1:06:47.080 | |
So if you activate one of those synapses, it just doesn't affect the cell body | |
1:06:47.080 --> 1:06:49.080 | |
enough to make any difference. | |
1:06:49.080 --> 1:06:50.080 | |
Any one of them individually. | |
1:06:50.080 --> 1:06:53.080 | |
Any one of them individually, or even if you do a mass of them. | |
1:06:53.080 --> 1:06:57.080 | |
What real neurons do is the following. | |
1:06:57.080 --> 1:07:04.080 | |
If you activate, or you get 10 to 20 of them active at the same time, | |
1:07:04.080 --> 1:07:06.080 | |
meaning they're all receiving an input at the same time, | |
1:07:06.080 --> 1:07:10.080 | |
and those 10 to 20 synapses or 40 synapses are within a very short distance | |
1:07:10.080 --> 1:07:13.080 | |
on the dendrite, like 40 microns, a very small area. | |
1:07:13.080 --> 1:07:17.080 | |
So if you activate a bunch of these right next to each other at some distant place, | |
1:07:17.080 --> 1:07:21.080 | |
what happens is it creates what's called the dendritic spike. | |
1:07:21.080 --> 1:07:24.080 | |
And dendritic spike travels through the dendrites | |
1:07:24.080 --> 1:07:27.080 | |
and can reach the soma or the cell body. | |
1:07:27.080 --> 1:07:31.080 | |
Now, when it gets there, it changes the voltage, | |
1:07:31.080 --> 1:07:33.080 | |
which is sort of like going to make the cell fire, | |
1:07:33.080 --> 1:07:35.080 | |
but never enough to make the cell fire. | |
1:07:35.080 --> 1:07:38.080 | |
It's sort of what we call, we say we depolarize the cell. | |
1:07:38.080 --> 1:07:41.080 | |
You raise the voltage a little bit, but not enough to do anything. | |
1:07:41.080 --> 1:07:42.080 | |
It's like, well, what good is that? | |
1:07:42.080 --> 1:07:44.080 | |
And then it goes back down again. | |
1:07:44.080 --> 1:07:50.080 | |
So we proposed a theory, and I'm very confident in its basics. The theory | |
1:07:50.080 --> 1:07:54.080 | |
is that what's happening there is those 95% of the synapses | |
1:07:54.080 --> 1:07:58.080 | |
are recognizing dozens to hundreds of unique patterns. | |
1:07:58.080 --> 1:08:01.080 | |
They recognize, you know, about 10 to 20 synapses at a time, | |
1:08:01.080 --> 1:08:04.080 | |
and they're acting like predictions. | |
1:08:04.080 --> 1:08:07.080 | |
So the neuron actually is a predictive engine on its own. | |
1:08:07.080 --> 1:08:11.080 | |
It can fire when it gets enough, what they call proximal input from those ones | |
1:08:11.080 --> 1:08:15.080 | |
near the cell body, but it can get ready to fire from dozens to hundreds | |
1:08:15.080 --> 1:08:17.080 | |
of patterns that it recognizes from the other guys. | |
1:08:17.080 --> 1:08:22.080 | |
And the advantage of this to the neuron is that when it actually does produce | |
1:08:22.080 --> 1:08:27.080 | |
a spike in action potential, it does so slightly sooner than it would have otherwise. | |
1:08:27.080 --> 1:08:29.080 | |
And so what good is slightly sooner? | |
1:08:29.080 --> 1:08:33.080 | |
Well, the slightly sooner part is this: all the neurons, | |
1:08:33.080 --> 1:08:36.080 | |
the excitatory neurons in the brain are surrounded by these inhibitory neurons, | |
1:08:36.080 --> 1:08:40.080 | |
and they're very fast, the inhibitory neurons, these basket cells. | |
1:08:40.080 --> 1:08:44.080 | |
And if I get my spike out a little bit sooner than someone else, | |
1:08:44.080 --> 1:08:46.080 | |
I inhibit all my neighbors around me, right? | |
1:08:46.080 --> 1:08:49.080 | |
And what you end up with is a different representation. | |
1:08:49.080 --> 1:08:52.080 | |
You end up with a representation that matches your prediction. | |
1:08:52.080 --> 1:08:55.080 | |
It's a sparser representation, meaning fewer neurons are active, | |
1:08:55.080 --> 1:08:57.080 | |
but it's much more specific. | |
1:08:57.080 --> 1:09:04.080 | |
And so we showed how networks of these neurons can do very sophisticated temporal prediction, basically. | |
1:09:04.080 --> 1:09:10.080 | |
So to summarize: real neurons in the brain are time-based prediction engines, | |
1:09:10.080 --> 1:09:17.080 | |
and there's no concept of this at all in artificial, what we call point neurons. | |
1:09:17.080 --> 1:09:19.080 | |
I don't think you can model the brain without them. | |
1:09:19.080 --> 1:09:25.080 | |
I don't think you can build intelligence without them because it's where a large part of the time comes from. | |
1:09:25.080 --> 1:09:31.080 | |
These are predictive models, and the time part is, there's a prior prediction and an action, | |
1:09:31.080 --> 1:09:34.080 | |
and it's inherent through every neuron in the neocortex. | |
1:09:34.080 --> 1:09:38.080 | |
So I would say that point neurons sort of model a piece of that, | |
1:09:38.080 --> 1:09:45.080 | |
and not very well at that either, but, you know, like, for example, synapses are very unreliable, | |
1:09:45.080 --> 1:09:49.080 | |
and you cannot assign any precision to them. | |
1:09:49.080 --> 1:09:52.080 | |
So even one digit of precision is not possible. | |
1:09:52.080 --> 1:09:57.080 | |
So the way real neurons work is they don't add these, they don't change these weights accurately, | |
1:09:57.080 --> 1:09:59.080 | |
like artificial neural networks do. | |
1:09:59.080 --> 1:10:03.080 | |
They basically form new synapses, and so what you're trying to always do is | |
1:10:03.080 --> 1:10:09.080 | |
detect the presence of some 10 to 20 active synapses at the same time, | |
1:10:09.080 --> 1:10:11.080 | |
and they're almost binary. | |
1:10:11.080 --> 1:10:14.080 | |
It's like, because you can't really represent anything much finer than that. | |
1:10:14.080 --> 1:10:18.080 | |
So these are the kind of, and I think that's actually another essential component | |
1:10:18.080 --> 1:10:24.080 | |
because the brain works on sparse patterns, and all that mechanism is based on sparse patterns, | |
1:10:24.080 --> 1:10:28.080 | |
and I don't actually think you could build real brains or machine intelligence | |
1:10:28.080 --> 1:10:30.080 | |
without incorporating some of those ideas. | |
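To make the dendritic-spike mechanism concrete, here is a minimal sketch of a neuron whose distal segments act as coincidence detectors. All names and sizes (N_CELLS, THRESHOLD, segment counts) are made up for illustration; this is not Numenta's code, just the idea that roughly 10 to 20 coincident active synapses on one segment put the cell into a predictive state.

```python
import numpy as np

N_CELLS = 10_000        # size of the presynaptic population (made-up)
THRESHOLD = 15          # ~10-20 coincident synapses trigger a dendritic spike

rng = np.random.default_rng(0)

class Neuron:
    def __init__(self, n_segments=20, synapses_per_segment=30):
        # Each distal segment samples a small subset of presynaptic cells;
        # synapses are effectively binary (a connection exists or it doesn't).
        self.segments = [rng.choice(N_CELLS, synapses_per_segment, replace=False)
                         for _ in range(n_segments)]

    def is_predictive(self, active_cells):
        # A dendritic spike occurs when any one segment sees >= THRESHOLD of
        # its synapses active at once; this depolarizes the cell (gets it
        # "ready to fire") but does not by itself make it fire.
        return any(np.isin(seg, list(active_cells)).sum() >= THRESHOLD
                   for seg in self.segments)

neuron = Neuron()
random_sdr = set(rng.choice(N_CELLS, 200, replace=False).tolist())
learned_sdr = set(neuron.segments[0][:THRESHOLD].tolist())  # overlaps one segment
print(neuron.is_predictive(random_sdr))    # almost certainly False
print(neuron.is_predictive(learned_sdr))   # True
```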
1:10:30.080 --> 1:10:34.080 | |
It's hard to even think about the complexity that emerges from the fact that | |
1:10:34.080 --> 1:10:40.080 | |
the timing of the firing matters in the brain, the fact that you form new synapses, | |
1:10:40.080 --> 1:10:44.080 | |
and everything you just mentioned in the past couple minutes. | |
1:10:44.080 --> 1:10:47.080 | |
Trust me, if you spend time on it, you can get your mind around it. | |
1:10:47.080 --> 1:10:49.080 | |
It's not like that; it's no longer a mystery to me. | |
1:10:49.080 --> 1:10:53.080 | |
No, but sorry, thinking of it as a function, in a mathematical way, | |
1:10:53.080 --> 1:10:58.080 | |
can you start getting an intuition about what gets it excited and what doesn't, | |
1:10:58.080 --> 1:11:00.080 | |
and what kind of representation? | |
1:11:00.080 --> 1:11:04.080 | |
Yeah, it's not that easy. There are many other types of neural networks | |
1:11:04.080 --> 1:11:10.080 | |
that are more amenable to pure analysis, especially very simple networks. | |
1:11:10.080 --> 1:11:12.080 | |
You know, oh, I have four neurons, and they're doing this. | |
1:11:12.080 --> 1:11:16.080 | |
Can we describe mathematically what they're doing, that type of thing? | |
1:11:16.080 --> 1:11:19.080 | |
Even the complexity of convolutional neural networks today, | |
1:11:19.080 --> 1:11:23.080 | |
it's sort of a mystery. People can't really describe the whole system. | |
1:11:23.080 --> 1:11:25.080 | |
And so it's different. | |
1:11:25.080 --> 1:11:31.080 | |
My colleague, Subutai Ahmad, did a nice paper on this. | |
1:11:31.080 --> 1:11:34.080 | |
You can get all the stuff on our website if you're interested. | |
1:11:34.080 --> 1:11:38.080 | |
Talking about sort of mathematical properties of sparse representations, | |
1:11:38.080 --> 1:11:42.080 | |
and so what we can do is we can show mathematically, for example, | |
1:11:42.080 --> 1:11:46.080 | |
why 10 to 20 synapses to recognize a pattern is the correct number, | |
1:11:46.080 --> 1:11:48.080 | |
is the right number you'd want to use. | |
1:11:48.080 --> 1:11:50.080 | |
And by the way, that matches biology. | |
1:11:50.080 --> 1:11:55.080 | |
We can show mathematically some of these concepts, about | |
1:11:55.080 --> 1:12:01.080 | |
why the brain is so robust to noise and error and fallout and so on. | |
1:12:01.080 --> 1:12:05.080 | |
We can show that mathematically as well as empirically in simulations. | |
1:12:05.080 --> 1:12:08.080 | |
But the system can't be analyzed completely. | |
1:12:08.080 --> 1:12:12.080 | |
No complex system can, and so that's out of the realm. | |
1:12:12.080 --> 1:12:19.080 | |
But there are benefits and intuitions that can be derived from mathematics. | |
1:12:19.080 --> 1:12:21.080 | |
And we try to do that as well. | |
1:12:21.080 --> 1:12:23.080 | |
Most of our papers have a section about that. | |
1:12:23.080 --> 1:12:28.080 | |
So I think it's refreshing and useful for me to be talking to you about deep neural networks, | |
1:12:28.080 --> 1:12:36.080 | |
because your intuition basically says that we can't achieve anything like intelligence with artificial neural networks. | |
1:12:36.080 --> 1:12:37.080 | |
Well, not in the current form. | |
1:12:37.080 --> 1:12:38.080 | |
Not in the current form. | |
1:12:38.080 --> 1:12:40.080 | |
I'm sure we can do it in the ultimate form, sure. | |
1:12:40.080 --> 1:12:43.080 | |
So let me dig into it and see what your thoughts are there a little bit. | |
1:12:43.080 --> 1:12:49.080 | |
So I'm not sure if you read this little blog post called The Bitter Lesson by Rich Sutton recently. | |
1:12:49.080 --> 1:12:51.080 | |
He's a reinforcement learning pioneer. | |
1:12:51.080 --> 1:12:53.080 | |
I'm not sure if you're familiar with him. | |
1:12:53.080 --> 1:13:02.080 | |
His basic idea is that all the stuff we've done in AI in the past 70 years, he's one of the old school guys. | |
1:13:02.080 --> 1:13:10.080 | |
The biggest lesson learned is that all the tricky things we've done, you know, they benefit in the short term. | |
1:13:10.080 --> 1:13:20.080 | |
But in the long term, what wins out is a simple general method that just relies on Moore's Law, on computation getting faster and faster. | |
1:13:20.080 --> 1:13:21.080 | |
This is what he's saying. | |
1:13:21.080 --> 1:13:23.080 | |
This is what has worked up to now. | |
1:13:25.080 --> 1:13:31.080 | |
If you're trying to build a system, if we're talking about, he's not concerned about intelligence. | |
1:13:31.080 --> 1:13:38.080 | |
He's concerned about a system that works in terms of making predictions on applied, narrow AI problems. | |
1:13:38.080 --> 1:13:41.080 | |
That's what the discussion is about. | |
1:13:41.080 --> 1:13:50.080 | |
That you just try to go as general as possible and wait years or decades for the computation to make it actually possible. | |
1:13:50.080 --> 1:13:54.080 | |
Is he saying that as a criticism or is he saying this is a prescription of what we ought to be doing? | |
1:13:54.080 --> 1:13:55.080 | |
Well, it's very difficult. | |
1:13:55.080 --> 1:13:57.080 | |
He's saying this is what has worked. | |
1:13:57.080 --> 1:14:03.080 | |
And yes, a prescription, but it's a difficult prescription because it says all the fun things you guys are trying to do. | |
1:14:03.080 --> 1:14:05.080 | |
We are trying to do. | |
1:14:05.080 --> 1:14:07.080 | |
He's part of the community. | |
1:14:07.080 --> 1:14:11.080 | |
He's saying it's only going to be short term gains. | |
1:14:11.080 --> 1:14:19.080 | |
This all leads up to a question, I guess, on artificial neural networks and maybe our own biological neural networks. | |
1:14:19.080 --> 1:14:24.080 | |
Do you think if we just scale things up significantly? | |
1:14:24.080 --> 1:14:28.080 | |
Take these dumb artificial neurons, the point neurons. | |
1:14:28.080 --> 1:14:30.080 | |
I like that term. | |
1:14:30.080 --> 1:14:36.080 | |
If we just have a lot more of them, do you think some of the elements that we see in the brain | |
1:14:36.080 --> 1:14:38.080 | |
may start emerging? | |
1:14:38.080 --> 1:14:39.080 | |
No, I don't think so. | |
1:14:39.080 --> 1:14:43.080 | |
We can do bigger problems of the same type. | |
1:14:43.080 --> 1:14:50.080 | |
I mean, it's been pointed out by many people that today's convolutional neural networks aren't really much different than the ones we had quite a while ago. | |
1:14:50.080 --> 1:14:56.080 | |
We just, they're bigger and train more and we have more labeled data and so on. | |
1:14:56.080 --> 1:15:03.080 | |
But I don't think you can get to the kind of things I know the brain can do and that we think about as intelligence by just scaling it up. | |
1:15:03.080 --> 1:15:12.080 | |
So that may be, it's a good description of what's happened in the past, what's happened recently with the reemergence of artificial neural networks. | |
1:15:12.080 --> 1:15:17.080 | |
It may be a good prescription for what's going to happen in the short term. | |
1:15:17.080 --> 1:15:19.080 | |
But I don't think that's the path. | |
1:15:19.080 --> 1:15:20.080 | |
I've said that earlier. | |
1:15:20.080 --> 1:15:21.080 | |
There's an alternate path. | |
1:15:21.080 --> 1:15:29.080 | |
I should mention to you, by the way, that we've made sufficient progress on the whole cortical theory in the last few years | |
1:15:29.080 --> 1:15:40.080 | |
that last year, we decided to start actively pursuing how we get these ideas embedded into machine learning. | |
1:15:40.080 --> 1:15:45.080 | |
That's, again, being led by my colleague, and he's more of a machine learning guy. | |
1:15:45.080 --> 1:15:47.080 | |
I'm more of a neuroscience guy. | |
1:15:47.080 --> 1:15:58.080 | |
So this is now our, I wouldn't say our focus, but it is now an equal focus here because we need to proselytize what we've learned. | |
1:15:58.080 --> 1:16:03.080 | |
And we need to show how it's beneficial to the machine learning. | |
1:16:03.080 --> 1:16:05.080 | |
So we're putting, we have a plan in place right now. | |
1:16:05.080 --> 1:16:07.080 | |
In fact, we just did our first paper on this. | |
1:16:07.080 --> 1:16:09.080 | |
I can tell you about that. | |
1:16:09.080 --> 1:16:15.080 | |
But, you know, one of the reasons I want to talk to you is because I'm trying to get more people in the machine learning community to say, | |
1:16:15.080 --> 1:16:17.080 | |
I need to learn about this stuff. | |
1:16:17.080 --> 1:16:21.080 | |
And maybe we should just think about this a bit more about what we've learned about the brain. | |
1:16:21.080 --> 1:16:23.080 | |
And what has that team, what have they done? | |
1:16:23.080 --> 1:16:25.080 | |
Is that useful for us? | |
1:16:25.080 --> 1:16:32.080 | |
Yeah, so are there elements of the cortical theory, the things we've been talking about, that may be useful in the short term? | |
1:16:32.080 --> 1:16:34.080 | |
Yes, in the short term, yes. | |
1:16:34.080 --> 1:16:41.080 | |
This is the, sorry to interrupt, but the open question is, it certainly feels from my perspective that in the long term, | |
1:16:41.080 --> 1:16:44.080 | |
some of the ideas we've been talking about will be extremely useful. | |
1:16:44.080 --> 1:16:46.080 | |
The question is whether in the short term. | |
1:16:46.080 --> 1:16:51.080 | |
Well, this is what I would always call the entrepreneur's dilemma. | |
1:16:51.080 --> 1:16:59.080 | |
You have this long term vision, oh, we're going to all be driving electric cars or we're all going to have computers or we're all going to whatever. | |
1:16:59.080 --> 1:17:03.080 | |
And you're at some point in time and you say, I can see that long term vision. | |
1:17:03.080 --> 1:17:04.080 | |
I'm sure it's going to happen. | |
1:17:04.080 --> 1:17:07.080 | |
How do I get there without killing myself, you know, without going out of business? | |
1:17:07.080 --> 1:17:09.080 | |
That's the challenge. | |
1:17:09.080 --> 1:17:10.080 | |
That's the dilemma. | |
1:17:10.080 --> 1:17:11.080 | |
That's the really difficult thing to do. | |
1:17:11.080 --> 1:17:13.080 | |
So we're facing that right now. | |
1:17:13.080 --> 1:17:17.080 | |
So ideally what you'd want to do is find some steps along the way that you can get there incrementally. | |
1:17:17.080 --> 1:17:20.080 | |
You don't have to like throw it all out and start over again. | |
1:17:20.080 --> 1:17:25.080 | |
The first thing that we've done is we focus on these sparse representations. | |
1:17:25.080 --> 1:17:30.080 | |
So just in case you don't know what that means or some of the listeners don't know what that means. | |
1:17:30.080 --> 1:17:37.080 | |
In the brain, if I have like 10,000 neurons, what you would see is maybe 2% of them active at a time. | |
1:17:37.080 --> 1:17:41.080 | |
You don't see 50%, you don't see 30%, you might see 2%. | |
1:17:41.080 --> 1:17:42.080 | |
And it's always like that. | |
1:17:42.080 --> 1:17:44.080 | |
For any set of sensory inputs. | |
1:17:44.080 --> 1:17:45.080 | |
It doesn't matter, anything. | |
1:17:45.080 --> 1:17:47.080 | |
It doesn't matter, any part of the brain. | |
1:17:47.080 --> 1:17:51.080 | |
But which neurons differs? | |
1:17:51.080 --> 1:17:52.080 | |
Which neurons are active? | |
1:17:52.080 --> 1:17:53.080 | |
Yeah. | |
1:17:53.080 --> 1:17:54.080 | |
So let me put it this way. | |
1:17:54.080 --> 1:17:56.080 | |
Let's say I take 10,000 neurons that are representing something. | |
1:17:56.080 --> 1:17:58.080 | |
They're sitting there in a little block together. | |
1:17:58.080 --> 1:18:00.080 | |
It's a teeny little block of neurons, 10,000 neurons. | |
1:18:00.080 --> 1:18:01.080 | |
And they're representing a location. | |
1:18:01.080 --> 1:18:02.080 | |
They're representing a cup. | |
1:18:02.080 --> 1:18:04.080 | |
They're representing the input from my sensors. | |
1:18:04.080 --> 1:18:05.080 | |
I don't know. | |
1:18:05.080 --> 1:18:06.080 | |
It doesn't matter. | |
1:18:06.080 --> 1:18:07.080 | |
It's representing something. | |
1:18:07.080 --> 1:18:10.080 | |
The way the representations occur, it's always a sparse representation. | |
1:18:10.080 --> 1:18:12.080 | |
Meaning it's a population code. | |
1:18:12.080 --> 1:18:15.080 | |
So which 200 cells are active tells me what's going on. | |
1:18:15.080 --> 1:18:18.080 | |
The individual cells aren't that important at all. | |
1:18:18.080 --> 1:18:20.080 | |
It's the population code that matters. | |
1:18:20.080 --> 1:18:23.080 | |
And when you have sparse population codes, | |
1:18:23.080 --> 1:18:26.080 | |
then all kinds of beautiful properties come out of them. | |
1:18:26.080 --> 1:18:29.080 | |
So the brain uses sparse population codes. We've written about | |
1:18:29.080 --> 1:18:32.080 | |
and described these benefits in some of our papers. | |
1:18:32.080 --> 1:18:37.080 | |
So they give this tremendous robustness to the systems. | |
1:18:37.080 --> 1:18:39.080 | |
You know, brains are incredibly robust. | |
1:18:39.080 --> 1:18:42.080 | |
Neurons are dying all the time and spasming and synapses falling apart. | |
1:18:42.080 --> 1:18:45.080 | |
And, you know, all the time and it keeps working. | |
1:18:45.080 --> 1:18:52.080 | |
So what Subutai and Luiz, one of our other engineers here, have done, | |
1:18:52.080 --> 1:18:56.080 | |
have shown that they're introducing sparseness into convolutional neural networks. | |
1:18:56.080 --> 1:18:58.080 | |
Now other people are thinking along these lines, | |
1:18:58.080 --> 1:19:00.080 | |
but we're going about it in a more principled way, I think. | |
1:19:00.080 --> 1:19:06.080 | |
And we're showing that if you enforce sparseness throughout these convolutional neural networks, | |
1:19:06.080 --> 1:19:13.080 | |
in both the sort of which neurons are active and the connections between them, | |
1:19:13.080 --> 1:19:15.080 | |
that you get some very desirable properties. | |
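One simple way to enforce the kind of activation sparsity being described is a k-winners-take-all layer that keeps only the top few percent of units active. This is a hedged sketch of the general idea, not Numenta's actual experimental setup; the function name k_winners and the 2% figure are illustrative.

```python
import numpy as np

def k_winners(x, sparsity=0.02):
    # Keep only the k largest activations in each row; zero out the rest.
    k = max(1, int(sparsity * x.shape[-1]))
    out = np.zeros_like(x)
    idx = np.argpartition(x, -k, axis=-1)[..., -k:]   # indices of top-k per row
    np.put_along_axis(out, idx, np.take_along_axis(x, idx, axis=-1), axis=-1)
    return out

x = np.random.randn(4, 1000)    # a batch of dense activations
s = k_winners(x)                 # now ~2% of the units are nonzero per row
print((s != 0).mean())           # ~0.02
```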
1:19:15.080 --> 1:19:20.080 | |
So one of the current hot topics in deep learning right now are these adversarial examples. | |
1:19:20.080 --> 1:19:23.080 | |
So, you know, you give me any deep learning network | |
1:19:23.080 --> 1:19:27.080 | |
and I can give you a picture that looks perfect and you're going to call it, you know, | |
1:19:27.080 --> 1:19:30.080 | |
you're going to say the monkey is, you know, an airplane. | |
1:19:30.080 --> 1:19:32.080 | |
So that's a problem. | |
1:19:32.080 --> 1:19:36.080 | |
And DARPA just announced some big thing, and they're trying to, you know, have some contests for this. | |
1:19:36.080 --> 1:19:40.080 | |
But if you enforce sparse representations here, | |
1:19:40.080 --> 1:19:41.080 | |
many of these problems go away. | |
1:19:41.080 --> 1:19:45.080 | |
They're much more robust and they're not easy to fool. | |
1:19:45.080 --> 1:19:48.080 | |
So we've already shown some of those results, | |
1:19:48.080 --> 1:19:53.080 | |
just literally in January or February, just like last month we did that. | |
1:19:53.080 --> 1:19:59.080 | |
And you can, I think it's on bioRxiv right now, or on arXiv, you can read about it. | |
1:19:59.080 --> 1:20:02.080 | |
But so that's like a baby step. | |
1:20:02.080 --> 1:20:04.080 | |
Okay, that's us taking something from the brain. | |
1:20:04.080 --> 1:20:05.080 | |
We know, we know about sparseness. | |
1:20:05.080 --> 1:20:06.080 | |
We know why it's important. | |
1:20:06.080 --> 1:20:08.080 | |
We know what it gives the brain. | |
1:20:08.080 --> 1:20:09.080 | |
So let's try to enforce that onto this. | |
1:20:09.080 --> 1:20:12.080 | |
What's your intuition why sparsity leads to robustness? | |
1:20:12.080 --> 1:20:14.080 | |
Because it feels like it would be less robust. | |
1:20:14.080 --> 1:20:17.080 | |
Why would it feel less robust to you? | |
1:20:17.080 --> 1:20:24.080 | |
So it just feels like if fewer neurons are involved, | |
1:20:24.080 --> 1:20:26.080 | |
the more fragile the representation. | |
1:20:26.080 --> 1:20:28.080 | |
Yeah, but I didn't say there were only a few. | |
1:20:28.080 --> 1:20:30.080 | |
I said, let's say 200. | |
1:20:30.080 --> 1:20:31.080 | |
That's a lot. | |
1:20:31.080 --> 1:20:32.080 | |
There's still a lot. | |
1:20:32.080 --> 1:20:33.080 | |
Yeah. | |
1:20:33.080 --> 1:20:35.080 | |
So here's an intuition for it. | |
1:20:35.080 --> 1:20:37.080 | |
This is a bit technical. | |
1:20:37.080 --> 1:20:41.080 | |
So, you know, for engineers and machine learning people this will be easy, | |
1:20:41.080 --> 1:20:44.080 | |
but for other listeners, maybe not. | |
1:20:44.080 --> 1:20:46.080 | |
If you're trying to classify something, | |
1:20:46.080 --> 1:20:50.080 | |
you're trying to divide some very high dimensional space into different pieces, A and B. | |
1:20:50.080 --> 1:20:55.080 | |
And you're trying to create some line where you say all these points in this high dimensional space are A | |
1:20:55.080 --> 1:20:57.080 | |
and all these points in this high dimensional space are B. | |
1:20:57.080 --> 1:21:03.080 | |
And if you have points that are close to that line, it's not very robust. | |
1:21:03.080 --> 1:21:07.080 | |
It works for all the points you know about, but it's not very robust | |
1:21:07.080 --> 1:21:10.080 | |
because you can just move a little bit and you've crossed over the line. | |
1:21:10.080 --> 1:21:14.080 | |
When you have sparse representations, imagine this. | |
1:21:14.080 --> 1:21:18.080 | |
I'm going to pick 200 cells active out of 10,000. | |
1:21:18.080 --> 1:21:19.080 | |
Okay. | |
1:21:19.080 --> 1:21:20.080 | |
So I have 200 cells active. | |
1:21:20.080 --> 1:21:24.080 | |
Now let's say I randomly pick another, different representation of 200. | |
1:21:24.080 --> 1:21:27.080 | |
The overlap between those is going to be very small, just a few. | |
1:21:27.080 --> 1:21:36.080 | |
I can pick millions of samples randomly of 200 neurons and not one of them will overlap more than just a few. | |
1:21:36.080 --> 1:21:43.080 | |
So one way to think about it is if I want to fool one of these representations to look like one of those other representations, | |
1:21:43.080 --> 1:21:46.080 | |
I can't move just one cell or two cells or three cells or four cells. | |
1:21:46.080 --> 1:21:48.080 | |
I have to move 100 cells. | |
1:21:48.080 --> 1:21:52.080 | |
And that makes them robust. | |
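The overlap claim is easy to check numerically. With 200 active cells out of 10,000, the expected overlap of two random codes is 200 × (200 / 10,000) = 4 cells, so moving from one representation to another requires changing on the order of a hundred cells. A small simulation, with the sizes taken from the example above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, trials = 10_000, 200, 10_000

# Expected overlap of two random 200-cell codes: 200 * (200 / 10,000) = 4.
overlaps = [np.intersect1d(rng.choice(n, k, replace=False),
                           rng.choice(n, k, replace=False)).size
            for _ in range(trials)]
print(np.mean(overlaps))   # ~4.0
print(np.max(overlaps))    # stays tiny relative to 200, even over 10,000 trials
```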
1:21:52.080 --> 1:21:56.080 | |
In terms of going further, so you mentioned sparsity. | |
1:21:56.080 --> 1:21:57.080 | |
What would be the next thing? | |
1:21:57.080 --> 1:21:58.080 | |
Yeah. | |
1:21:58.080 --> 1:21:59.080 | |
Okay. | |
1:21:59.080 --> 1:22:00.080 | |
So we have, we picked one. | |
1:22:00.080 --> 1:22:02.080 | |
We don't know if it's going to work well yet. | |
1:22:02.080 --> 1:22:08.080 | |
So again, we're trying to come up with incremental ways of moving from brain theory to add pieces to the | |
1:22:08.080 --> 1:22:12.080 | |
current machine learning world, one step at a time. | |
1:22:12.080 --> 1:22:20.080 | |
So the next thing we're going to try to do is sort of incorporate some of the ideas of the thousand brains theory, that you have many, | |
1:22:20.080 --> 1:22:22.080 | |
many models that are voting. | |
1:22:22.080 --> 1:22:23.080 | |
Now that idea is not new. | |
1:22:23.080 --> 1:22:26.080 | |
Mixtures of models have been around for a long time. | |
1:22:26.080 --> 1:22:29.080 | |
But the way the brain does it is a little different. | |
1:22:29.080 --> 1:22:36.080 | |
And the way it votes is different, and the way it represents uncertainty is different. | |
1:22:36.080 --> 1:22:43.080 | |
So we're just starting this work, but we're going to try to see if we can sort of incorporate some of the principles of voting | |
1:22:43.080 --> 1:22:53.080 | |
or principles of the thousand brains theory, like lots of simple models that talk to each other in a very certain way. | |
1:22:53.080 --> 1:23:07.080 | |
And can we build machines and systems that learn faster and also, well, mostly are multimodal and robust to multimodal types of issues. | |
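As a toy illustration of the voting idea, imagine each column model independently produces its own belief over possible objects from its own sensor patch, and the columns reach consensus by combining beliefs, here by elementwise multiplication. The object names and numbers are invented; this shows the principle, not Numenta's algorithm.

```python
import numpy as np

objects = ["cup", "stapler", "phone"]

# Hypothetical per-column beliefs: each row is one column's distribution
# over objects, and each column is ambiguous on its own.
beliefs = np.array([
    [0.5, 0.4, 0.1],   # column 1: cup or stapler?
    [0.4, 0.2, 0.4],   # column 2: cup or phone?
    [0.6, 0.2, 0.2],   # column 3: leaning toward cup
])

vote = beliefs.prod(axis=0)    # hypotheses every column supports win out
vote /= vote.sum()             # renormalize into a distribution
print(dict(zip(objects, vote.round(3))))   # consensus: 'cup' dominates
```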
1:23:07.080 --> 1:23:15.080 | |
So one of the challenges there is, you know, the machine learning computer vision community has certain sets of benchmarks. | |
1:23:15.080 --> 1:23:18.080 | |
So there are tests based on which they compete. | |
1:23:18.080 --> 1:23:29.080 | |
And I would argue, especially from your perspective, that those benchmarks aren't that useful for testing the aspects the brain is good at, or intelligence. | |
1:23:29.080 --> 1:23:31.080 | |
They're not really testing intelligence. | |
1:23:31.080 --> 1:23:41.080 | |
They're fine and have been extremely useful for developing specific mathematical models, but they're not useful in the long term for creating intelligence. | |
1:23:41.080 --> 1:23:46.080 | |
So do you think you also have a role in proposing better tests? | |
1:23:46.080 --> 1:23:50.080 | |
Yeah, this is a very, you've identified a very serious problem. | |
1:23:50.080 --> 1:23:57.080 | |
First of all, the tests that they have are the tests that they want, not the tests of the other things that we're trying to do. | |
1:23:57.080 --> 1:24:01.080 | |
Right. You know, and so on. | |
1:24:01.080 --> 1:24:10.080 | |
The second thing is, sometimes to be competitive in these tests, you have to have huge data sets and huge computing power. | |
1:24:10.080 --> 1:24:13.080 | |
And so, you know, we don't have that here. | |
1:24:13.080 --> 1:24:18.080 | |
We don't have it the way the big teams at big companies do. | |
1:24:18.080 --> 1:24:20.080 | |
So there's numerous issues there. | |
1:24:20.080 --> 1:24:26.080 | |
You know, we come at it, our approach to this is all based on, in some sense you might argue, elegance. | |
1:24:26.080 --> 1:24:30.080 | |
You know, we're coming at it from like a theoretical base that we think, oh my God, this is so clearly elegant. | |
1:24:30.080 --> 1:24:31.080 | |
This is how brains work. | |
1:24:31.080 --> 1:24:32.080 | |
This is what intelligence is. | |
1:24:32.080 --> 1:24:35.080 | |
But the machine learning world has gotten in this phase where they think it doesn't matter. | |
1:24:35.080 --> 1:24:39.080 | |
Doesn't matter what you think, as long as you do, you know, 0.1% better on this benchmark. | |
1:24:39.080 --> 1:24:41.080 | |
That's what that's all that matters. | |
1:24:41.080 --> 1:24:43.080 | |
And that's a problem. | |
1:24:43.080 --> 1:24:46.080 | |
You know, we have to figure out how to get around that. | |
1:24:46.080 --> 1:24:47.080 | |
That's a challenge for us. | |
1:24:47.080 --> 1:24:50.080 | |
That's one of the challenges we have to deal with. | |
1:24:50.080 --> 1:24:53.080 | |
So I agree you've identified a big issue. | |
1:24:53.080 --> 1:24:55.080 | |
It's difficult for those reasons. | |
1:24:55.080 --> 1:25:02.080 | |
But, you know, part of the reasons I'm talking to you here today is I hope I'm going to get some machine learning people to say, | |
1:25:02.080 --> 1:25:03.080 | |
I'm going to read those papers. | |
1:25:03.080 --> 1:25:04.080 | |
Those might be some interesting ideas. | |
1:25:04.080 --> 1:25:08.080 | |
I'm tired of doing this 0.1% improvement stuff, you know. | |
1:25:08.080 --> 1:25:21.080 | |
Well, that's why I'm here as well, because I think machine learning now as a community is at a place where the next step needs to be orthogonal to what has achieved success in the past. | |
1:25:21.080 --> 1:25:27.080 | |
You see other leaders saying this, machine learning leaders, you know, Geoff Hinton with his capsules idea. | |
1:25:27.080 --> 1:25:33.080 | |
Many people have gotten up saying, you know, we're going to hit a roadblock, maybe we should look at the brain, you know, things like that. | |
1:25:33.080 --> 1:25:37.080 | |
So hopefully that thinking will occur organically. | |
1:25:37.080 --> 1:25:43.080 | |
And then we're in a nice position for people to come and look at our work and say, well, what can we learn from these guys? | |
1:25:43.080 --> 1:25:49.080 | |
Yeah, MIT is just launching a billion dollar computing college that's centered around this idea. | |
1:25:49.080 --> 1:25:51.080 | |
On this idea of what? | |
1:25:51.080 --> 1:25:59.080 | |
Well, the idea that, you know, the humanities, psychology, and neuroscience have to all work together to build AI. | |
1:25:59.080 --> 1:26:02.080 | |
Yeah, I mean, Stanford just did this human-centered AI thing today, I think. | |
1:26:02.080 --> 1:26:10.080 | |
I'm a little disappointed in these initiatives because, you know, they're focusing on sort of the human side of it, | |
1:26:10.080 --> 1:26:17.080 | |
and it can very easily slip into how humans interact with intelligent machines, which is nothing wrong with that. | |
1:26:17.080 --> 1:26:20.080 | |
But that's not, that is orthogonal to what we're trying to do. | |
1:26:20.080 --> 1:26:22.080 | |
We're trying to say, like, what is the essence of intelligence? | |
1:26:22.080 --> 1:26:23.080 | |
I don't care. | |
1:26:23.080 --> 1:26:31.080 | |
In fact, I want to build intelligent machines that aren't emotional, that don't smile at you, that, you know, that aren't trying to tuck you in at night. | |
1:26:31.080 --> 1:26:38.080 | |
Yeah, there is that pattern that you, when you talk about understanding humans is important for understanding intelligence. | |
1:26:38.080 --> 1:26:47.080 | |
You start slipping into topics of ethics or, yeah, like you said, the interactive elements as opposed to, no, no, no, let's zoom in on the brain, | |
1:26:47.080 --> 1:26:51.080 | |
study what the human brain, the baby, the... | |
1:26:51.080 --> 1:26:53.080 | |
Let's study what a brain does. | |
1:26:53.080 --> 1:26:57.080 | |
And then we can decide which parts of that we want to recreate in some system. | |
1:26:57.080 --> 1:27:00.080 | |
But until you have that theory about what the brain does, what's the point? | |
1:27:00.080 --> 1:27:03.080 | |
You know, it's just, you're going to be wasting time, I think. | |
1:27:03.080 --> 1:27:09.080 | |
Just to break it down on the artificial neural network side, maybe you can speak to this on the, on the biologic neural network side, | |
1:27:09.080 --> 1:27:13.080 | |
the process of learning versus the process of inference. | |
1:27:13.080 --> 1:27:22.080 | |
Maybe you can explain to me: in artificial neural networks, there's a difference between the learning stage and the inference stage. | |
1:27:22.080 --> 1:27:23.080 | |
Yeah. | |
1:27:23.080 --> 1:27:25.080 | |
Do you see the brain as something different? | |
1:27:25.080 --> 1:27:33.080 | |
One of the big distinctions that people often say, I don't know how correct it is, is artificial neural networks need a lot of data. | |
1:27:33.080 --> 1:27:34.080 | |
They're very inefficient at learning. | |
1:27:34.080 --> 1:27:35.080 | |
Yeah. | |
1:27:35.080 --> 1:27:42.080 | |
Do you see that as a correct distinction from the biology of the human brain, that the human brain is very efficient? | |
1:27:42.080 --> 1:27:44.080 | |
Or is that just something we deceive ourselves with? | |
1:27:44.080 --> 1:27:45.080 | |
No, it is efficient, obviously. | |
1:27:45.080 --> 1:27:47.080 | |
We can learn new things almost instantly. | |
1:27:47.080 --> 1:27:50.080 | |
And so what elements do you think... | |
1:27:50.080 --> 1:27:51.080 | |
Yeah, I can talk about that. | |
1:27:51.080 --> 1:27:52.080 | |
You brought up two issues there. | |
1:27:52.080 --> 1:28:00.080 | |
So remember I talked earlier about the constraints we always feel. Well, one of those constraints is the fact that brains are continually learning. | |
1:28:00.080 --> 1:28:03.080 | |
That's not something we said, oh, we can add that later. | |
1:28:03.080 --> 1:28:11.080 | |
That's something that was upfront, had to be there from the start, made our problems harder. | |
1:28:11.080 --> 1:28:19.080 | |
But we showed, going back to the 2016 paper on sequence memory, we showed how that happens, how the brain infers and learns at the same time. | |
1:28:19.080 --> 1:28:22.080 | |
And our models do that. | |
1:28:22.080 --> 1:28:26.080 | |
They're not two separate phases or two separate sets of time. | |
1:28:26.080 --> 1:28:33.080 | |
I think that's a big, big problem in AI, at least for many applications, not for all. | |
1:28:33.080 --> 1:28:34.080 | |
So I can talk about that. | |
1:28:34.080 --> 1:28:37.080 | |
It gets detailed. | |
1:28:37.080 --> 1:28:46.080 | |
There are some parts of the neocortex in the brain where actually what's going on, there's these cycles of activity in the brain. | |
1:28:46.080 --> 1:28:54.080 | |
And there's very strong evidence that you're doing more of inference on one part of the phase and more of learning on the other part of the phase. | |
1:28:54.080 --> 1:28:58.080 | |
So the brain can actually sort of separate different populations of cells that are going back and forth like this. | |
1:28:58.080 --> 1:29:01.080 | |
But in general, I would say that's an important problem. | |
1:29:01.080 --> 1:29:05.080 | |
We have all of our networks that we've come up with do both. | |
1:29:05.080 --> 1:29:08.080 | |
They're continuous learning networks. | |
1:29:08.080 --> 1:29:10.080 | |
And you mentioned benchmarks earlier. | |
1:29:10.080 --> 1:29:12.080 | |
Well, there are no benchmarks about that. | |
1:29:12.080 --> 1:29:13.080 | |
Exactly. | |
1:29:13.080 --> 1:29:19.080 | |
So we have to, like, get on our little soapbox: hey, by the way, this is important. | |
1:29:19.080 --> 1:29:21.080 | |
And here's the mechanism for doing that. | |
1:29:21.080 --> 1:29:26.080 | |
But until you can prove it to someone in some commercial system or something, it's a little harder. | |
1:29:26.080 --> 1:29:33.080 | |
So one of the things I want to linger on: in some ways, to learn the concept of a coffee cup, | |
1:29:33.080 --> 1:29:38.080 | |
you only need this one coffee cup and maybe some time alone in a room with it. | |
1:29:38.080 --> 1:29:43.080 | |
So the first thing is I imagine I reach my hand into a black box and I'm reaching, I'm trying to touch something. | |
1:29:43.080 --> 1:29:47.080 | |
I don't know up front if it's something I already know or if it's a new thing. | |
1:29:47.080 --> 1:29:50.080 | |
And I have to, I'm doing both at the same time. | |
1:29:50.080 --> 1:29:53.080 | |
I don't say, oh, let's see if it's a new thing. | |
1:29:53.080 --> 1:29:55.080 | |
Oh, let's see if it's an old thing. | |
1:29:55.080 --> 1:29:56.080 | |
I don't do that. | |
1:29:56.080 --> 1:29:59.080 | |
As I go, my brain says, oh, it's new or it's not new. | |
1:29:59.080 --> 1:30:02.080 | |
And if it's new, I start learning what it is. | |
1:30:02.080 --> 1:30:06.080 | |
And by the way, it starts learning from the get go, even if it's going to recognize it. | |
1:30:06.080 --> 1:30:09.080 | |
So they're not separate problems. | |
1:30:09.080 --> 1:30:10.080 | |
And so that's the thing there. | |
1:30:10.080 --> 1:30:13.080 | |
The other thing you mentioned was the fast learning. | |
1:30:13.080 --> 1:30:17.080 | |
So I was just talking about continuous learning, but there's also fast learning. | |
1:30:17.080 --> 1:30:20.080 | |
Literally, I can show you this coffee cup and I say, here's a new coffee cup. | |
1:30:20.080 --> 1:30:21.080 | |
It's got the logo on it. | |
1:30:21.080 --> 1:30:22.080 | |
Take a look at it. | |
1:30:22.080 --> 1:30:23.080 | |
Done. | |
1:30:23.080 --> 1:30:24.080 | |
You're done. | |
1:30:24.080 --> 1:30:27.080 | |
You can predict what it's going to look like, you know, in different positions. | |
1:30:27.080 --> 1:30:29.080 | |
So I can talk about that too. | |
1:30:29.080 --> 1:30:30.080 | |
Yes. | |
1:30:30.080 --> 1:30:34.080 | |
In the brain, the way learning occurs. | |
1:30:34.080 --> 1:30:36.080 | |
I mentioned this earlier, but I'll mention it again. | |
1:30:36.080 --> 1:30:40.080 | |
The way learning occurs, I imagine I have a section of a dendrite of a neuron. | |
1:30:40.080 --> 1:30:43.080 | |
And I want to learn, I'm going to learn something new. | |
1:30:43.080 --> 1:30:44.080 | |
It just doesn't matter what it is. | |
1:30:44.080 --> 1:30:46.080 | |
I'm just going to learn something new. | |
1:30:46.080 --> 1:30:48.080 | |
I need to recognize a new pattern. | |
1:30:48.080 --> 1:30:52.080 | |
So what I'm going to do is I'm going to form new synapses. | |
1:30:52.080 --> 1:30:57.080 | |
New synapses, we're going to rewire the brain onto that section of the dendrite. | |
1:30:57.080 --> 1:31:02.080 | |
Once I've done that, everything else that neuron has learned is not affected by it. | |
1:31:02.080 --> 1:31:06.080 | |
Now, that's because it's isolated to that small section of the dendrite. | |
1:31:06.080 --> 1:31:09.080 | |
They're not all being added together, like a point neuron. | |
1:31:09.080 --> 1:31:13.080 | |
So if I learn something new on this segment here, it doesn't change any of the learning | |
1:31:13.080 --> 1:31:15.080 | |
that occurred anywhere else in that neuron. | |
1:31:15.080 --> 1:31:18.080 | |
So I can add something without affecting previous learning. | |
1:31:18.080 --> 1:31:20.080 | |
And I can do it quickly. | |
1:31:20.080 --> 1:31:24.080 | |
Now, let's talk, we can talk about the quickness, how it's done in real neurons. | |
1:31:24.080 --> 1:31:27.080 | |
You might say, well, doesn't it take time to form synapses? | |
1:31:27.080 --> 1:31:30.080 | |
Yes, it can take maybe an hour to form a new synapse. | |
1:31:30.080 --> 1:31:32.080 | |
We can form memories quicker than that. | |
1:31:32.080 --> 1:31:35.080 | |
And I can explain how that happens too, if you want. | |
1:31:35.080 --> 1:31:38.080 | |
But it's getting a bit neurosciencey. | |
1:31:38.080 --> 1:31:40.080 | |
That's great. | |
1:31:40.080 --> 1:31:43.080 | |
But is there an understanding of these mechanisms at every level? | |
1:31:43.080 --> 1:31:48.080 | |
So from the short term memories and the forming new connections. | |
1:31:48.080 --> 1:31:51.080 | |
So this idea of synaptogenesis, the growth of new synapses, | |
1:31:51.080 --> 1:31:54.080 | |
that's well described and well understood. | |
1:31:54.080 --> 1:31:56.080 | |
And that's an essential part of learning. | |
1:31:56.080 --> 1:31:57.080 | |
That is learning. | |
1:31:57.080 --> 1:31:58.080 | |
That is learning. | |
1:31:58.080 --> 1:32:00.080 | |
Okay. | |
1:32:00.080 --> 1:32:04.080 | |
You know, back, you know, going back many, many years, people, you know, | |
1:32:04.080 --> 1:32:08.080 | |
there was, what's his name, the psychologist who proposed it, | |
1:32:08.080 --> 1:32:10.080 | |
Hebb, Donald Hebb. | |
1:32:10.080 --> 1:32:13.080 | |
He proposed that learning was the modification of the strength | |
1:32:13.080 --> 1:32:15.080 | |
of a connection between two neurons. | |
1:32:15.080 --> 1:32:19.080 | |
People interpreted that as the modification of the strength of a synapse. | |
1:32:19.080 --> 1:32:21.080 | |
He didn't say that. | |
1:32:21.080 --> 1:32:24.080 | |
He just said there's a modification between the effect of one neuron and another. | |
1:32:24.080 --> 1:32:28.080 | |
So synaptogenesis is totally consistent with what Donald Hebb said. | |
1:32:28.080 --> 1:32:31.080 | |
But anyway, there's these mechanisms, the growth of new synapses. | |
1:32:31.080 --> 1:32:34.080 | |
You can go online, you can watch a video of a synapse growing in real time. | |
1:32:34.080 --> 1:32:36.080 | |
It's literally, you can see this little thing going. | |
1:32:36.080 --> 1:32:38.080 | |
It's pretty impressive. | |
1:32:38.080 --> 1:32:40.080 | |
So that those mechanisms are known. | |
1:32:40.080 --> 1:32:43.080 | |
Now, there's another thing that we've speculated and we've written about, | |
1:32:43.080 --> 1:32:48.080 | |
which is consistent with known neuroscience, but it's less proven. | |
1:32:48.080 --> 1:32:49.080 | |
And this is the idea. | |
1:32:49.080 --> 1:32:51.080 | |
How do I form a memory really, really quickly? | |
1:32:51.080 --> 1:32:53.080 | |
Like instantaneous. | |
1:32:53.080 --> 1:32:56.080 | |
If it takes an hour to grow a synapse, like that's not instantaneous. | |
1:32:56.080 --> 1:33:01.080 | |
So there are types of synapses called silent synapses. | |
1:33:01.080 --> 1:33:04.080 | |
They look like a synapse, but they don't do anything. | |
1:33:04.080 --> 1:33:05.080 | |
They're just sitting there. | |
1:33:05.080 --> 1:33:10.080 | |
It's like if an action potential comes in, it doesn't release any neurotransmitter. | |
1:33:10.080 --> 1:33:12.080 | |
Some parts of the brain have more of these than others. | |
1:33:12.080 --> 1:33:14.080 | |
For example, the hippocampus has a lot of them, | |
1:33:14.080 --> 1:33:18.080 | |
which is what we associate most short term memory with. | |
1:33:18.080 --> 1:33:22.080 | |
So what we speculated, again, in that 2016 paper, | |
1:33:22.080 --> 1:33:26.080 | |
we proposed that the way we form very quick memories, | |
1:33:26.080 --> 1:33:29.080 | |
very short term memories, or quick memories, | |
1:33:29.080 --> 1:33:34.080 | |
is that we convert silent synapses into active synapses. | |
1:33:34.080 --> 1:33:38.080 | |
It's like saying a synapse has a zero weight and a one weight. | |
1:33:38.080 --> 1:33:41.080 | |
But the long term memory has to be formed by synaptogenesis. | |
1:33:41.080 --> 1:33:43.080 | |
So you can remember something really quickly | |
1:33:43.080 --> 1:33:46.080 | |
by just flipping a bunch of these guys from silent to active. | |
1:33:46.080 --> 1:33:49.080 | |
It's not from 0.1 to 0.15. | |
1:33:49.080 --> 1:33:52.080 | |
It doesn't do anything until it releases transmitter. | |
1:33:52.080 --> 1:33:56.080 | |
If I do that over a bunch of these, I've got a very quick short term memory. | |
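A hedged sketch of the silent-synapse idea: a segment carries a pool of potential synapses with effectively binary weights, and a fast memory is formed by flipping some of them from 0 to 1 rather than nudging real-valued weights. The class name, sizes, and threshold are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

class Segment:
    def __init__(self, potential_cells):
        self.cells = np.asarray(potential_cells)
        self.active = np.zeros(len(self.cells), dtype=bool)  # all silent (weight 0)

    def memorize(self, pattern):
        # Instant learning: flip the silent synapses whose presynaptic cell
        # is in the pattern from 0 to 1. No gradual weight adjustment.
        self.active |= np.isin(self.cells, list(pattern))

    def recognizes(self, pattern, threshold=10):
        connected = self.cells[self.active]
        return np.isin(connected, list(pattern)).sum() >= threshold

seg = Segment(rng.choice(10_000, 40, replace=False))
pattern = set(seg.cells[:20].tolist())   # a pattern this segment can sample
seg.memorize(pattern)
print(seg.recognizes(pattern))           # True: memorized in one shot
```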
1:33:56.080 --> 1:34:01.080 | |
So I guess the lesson behind this is that most neural networks today are fully connected. | |
1:34:01.080 --> 1:34:04.080 | |
Every neuron connects to every other neuron from layer to layer. | |
1:34:04.080 --> 1:34:06.080 | |
That's not correct in the brain. | |
1:34:06.080 --> 1:34:07.080 | |
We don't want that. | |
1:34:07.080 --> 1:34:08.080 | |
We actually don't want that. | |
1:34:08.080 --> 1:34:09.080 | |
It's bad. | |
1:34:09.080 --> 1:34:13.080 | |
You want a very sparse connectivity so that any neuron connects | |
1:34:13.080 --> 1:34:15.080 | |
to some subset of the neurons in the other layer, | |
1:34:15.080 --> 1:34:18.080 | |
and it does so on a dendrite by dendrite segment basis. | |
1:34:18.080 --> 1:34:21.080 | |
So it's a very parceled out type of thing. | |
1:34:21.080 --> 1:34:25.080 | |
And then learning is not adjusting all these weights, | |
1:34:25.080 --> 1:34:29.080 | |
but learning is just saying, OK, connect to these 10 cells here right now. | |
1:34:29.080 --> 1:34:32.080 | |
In that process, you know, with artificial neural networks, | |
1:34:32.080 --> 1:34:37.080 | |
it's a very simple process of back propagation that adjusts the weights. | |
1:34:37.080 --> 1:34:39.080 | |
The process of synaptogenesis. | |
1:34:39.080 --> 1:34:40.080 | |
Synaptogenesis. | |
1:34:40.080 --> 1:34:41.080 | |
Synaptogenesis. | |
1:34:41.080 --> 1:34:42.080 | |
It's even easier. | |
1:34:44.080 --> 1:34:49.080 | |
Back propagation requires something that really can't happen in brains. | |
1:34:49.080 --> 1:34:51.080 | |
This back propagation of this error signal. | |
1:34:51.080 --> 1:34:52.080 | |
That really can't happen. | |
1:34:52.080 --> 1:34:55.080 | |
People are trying to make it happen in brains, but it doesn't happen in brains. | |
1:34:55.080 --> 1:34:57.080 | |
This is pure Hebbian learning. | |
1:34:57.080 --> 1:34:59.080 | |
Well, synaptogenesis is pure Hebbian learning. | |
1:34:59.080 --> 1:35:03.080 | |
It's basically saying there's a population of cells over here that are active right now, | |
1:35:03.080 --> 1:35:05.080 | |
and there's a population of cells over here active right now. | |
1:35:05.080 --> 1:35:08.080 | |
How do I form connections between those active cells? | |
1:35:08.080 --> 1:35:11.080 | |
And it's literally saying this guy became active. | |
1:35:11.080 --> 1:35:15.080 | |
These 100 neurons here became active before this neuron became active. | |
1:35:15.080 --> 1:35:17.080 | |
So form connections to those ones. | |
1:35:17.080 --> 1:35:18.080 | |
That's it. | |
1:35:18.080 --> 1:35:20.080 | |
There's no propagation of error, nothing. | |
1:35:20.080 --> 1:35:26.080 | |
All the networks we do, all the models we have, work almost completely on Hebbian learning, | |
1:35:26.080 --> 1:35:33.080 | |
but on dendritic segments, and with multiple synapses at the same time. | |
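A minimal sketch of that Hebbian rule, assuming a made-up population size and sample size: when a cell becomes active, it grows connections to a sample of the cells that were active just before it, with no error signal propagated anywhere.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_grow(segment, prev_active, sample_size=20):
    # Grow new synapses onto this dendritic segment, sampled from the
    # population that was active on the previous time step. Nothing is
    # propagated backward; the rule only looks at who was just active.
    candidates = np.array(sorted(prev_active))
    chosen = rng.choice(candidates, min(sample_size, len(candidates)),
                        replace=False)
    return segment | set(chosen.tolist())

prev_active = set(rng.choice(10_000, 100, replace=False).tolist())  # "these 100 neurons"
segment = set()                                 # no synapses yet
segment = hebbian_grow(segment, prev_active)
print(len(segment), segment <= prev_active)     # 20 True: all grown onto active cells
```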
1:35:33.080 --> 1:35:36.080 | |
So now let's turn to the question that you already answered, | |
1:35:36.080 --> 1:35:38.080 | |
and maybe you can answer it again. | |
1:35:38.080 --> 1:35:43.080 | |
If you look at the history of artificial intelligence, where do you think we stand? | |
1:35:43.080 --> 1:35:45.080 | |
How far are we from solving intelligence? | |
1:35:45.080 --> 1:35:47.080 | |
You said you were very optimistic. | |
1:35:47.080 --> 1:35:48.080 | |
Yeah. | |
1:35:48.080 --> 1:35:49.080 | |
Can you elaborate on that? | |
1:35:49.080 --> 1:35:55.080 | |
Yeah, it's always the crazy question to ask, because no one can predict the future. | |
1:35:55.080 --> 1:35:56.080 | |
Absolutely. | |
1:35:56.080 --> 1:35:58.080 | |
So I'll tell you a story. | |
1:35:58.080 --> 1:36:02.080 | |
I used to run a different neuroscience institute called the Redwood Neuroscience Institute, | |
1:36:02.080 --> 1:36:07.080 | |
and we would hold these symposiums, and we'd get like 35 scientists from around the world to come together. | |
1:36:07.080 --> 1:36:09.080 | |
And I used to ask them all the same question. | |
1:36:09.080 --> 1:36:13.080 | |
I would say, well, how long do you think it'll be before we understand how the neocortex works? | |
1:36:13.080 --> 1:36:17.080 | |
And everyone went around the room, and they introduced their name, and they had to answer that question. | |
1:36:17.080 --> 1:36:22.080 | |
The typical answer I got was 50 to 100 years. | |
1:36:22.080 --> 1:36:24.080 | |
Some people would say 500 years. | |
1:36:24.080 --> 1:36:25.080 | |
Some people said never. | |
1:36:25.080 --> 1:36:27.080 | |
I said, then why are you at a neuroscience institute? | |
1:36:27.080 --> 1:36:28.080 | |
Never. | |
1:36:28.080 --> 1:36:30.080 | |
It's good pay. | |
1:36:30.080 --> 1:36:33.080 | |
It's interesting. | |
1:36:33.080 --> 1:36:37.080 | |
But it doesn't work like that. | |
1:36:37.080 --> 1:36:39.080 | |
As I mentioned earlier, these are step functions. | |
1:36:39.080 --> 1:36:41.080 | |
Things happen, and then bingo, they happen. | |
1:36:41.080 --> 1:36:43.080 | |
You can't predict that. | |
1:36:43.080 --> 1:36:45.080 | |
I feel I've already passed a step function. | |
1:36:45.080 --> 1:36:53.080 | |
So if I can do my job correctly over the next five years, meaning I can proselytize these ideas, | |
1:36:53.080 --> 1:36:55.080 | |
I can convince other people they're right. | |
1:36:55.080 --> 1:37:01.080 | |
We can show that machine learning people should pay attention to these ideas. | |
1:37:01.080 --> 1:37:04.080 | |
Then we're definitely in an under 20 year time frame. | |
1:37:04.080 --> 1:37:09.080 | |
If I can do those things, if I'm not successful in that, and this is the last time anyone talks to me, | |
1:37:09.080 --> 1:37:15.080 | |
and no one reads our papers, and I'm wrong or something like that, then I don't know. | |
1:37:15.080 --> 1:37:17.080 | |
But it's not 50 years. | |
1:37:17.080 --> 1:37:22.080 | |
It's the same thing about electric cars. | |
1:37:22.080 --> 1:37:24.080 | |
How quickly are they going to populate the world? | |
1:37:24.080 --> 1:37:27.080 | |
It probably takes about a 20 year span. | |
1:37:27.080 --> 1:37:28.080 | |
It'll be something like that. | |
1:37:28.080 --> 1:37:31.080 | |
But I think if I can do what I said, we're starting it. | |
1:37:31.080 --> 1:37:35.080 | |
Of course, there could be other kinds of step functions. | |
1:37:35.080 --> 1:37:42.080 | |
It could be everybody gives up on your ideas for 20 years, and then all of a sudden somebody picks it up again. | |
1:37:42.080 --> 1:37:44.080 | |
Wait, that guy was onto something. | |
1:37:44.080 --> 1:37:46.080 | |
That would be a failure on my part. | |
1:37:46.080 --> 1:37:49.080 | |
Think about Charles Babbage. | |
1:37:49.080 --> 1:37:55.080 | |
Charles Babbage invented the computer back in the 1800s. | |
1:37:55.080 --> 1:37:59.080 | |
Everyone forgot about it until 100 years later. | |
1:37:59.080 --> 1:38:03.080 | |
This guy figured this stuff out a long time ago, but he was ahead of his time. | |
1:38:03.080 --> 1:38:09.080 | |
As I said, I recognize this is part of any entrepreneur's challenge. | |
1:38:09.080 --> 1:38:11.080 | |
I use entrepreneur broadly in this case. | |
1:38:11.080 --> 1:38:13.080 | |
I'm not meaning like I'm building a business trying to sell something. | |
1:38:13.080 --> 1:38:15.080 | |
I'm trying to sell ideas. | |
1:38:15.080 --> 1:38:20.080 | |
This is the challenge as to how you get people to pay attention to you. | |
1:38:20.080 --> 1:38:24.080 | |
How do you get them to give you positive or negative feedback? | |
1:38:24.080 --> 1:38:27.080 | |
How do you get the people to act differently based on your ideas? | |
1:38:27.080 --> 1:38:30.080 | |
We'll see how well we do on that. | |
1:38:30.080 --> 1:38:34.080 | |
There's a lot of hype behind artificial intelligence currently. | |
1:38:34.080 --> 1:38:43.080 | |
Do you, as you look to spread the ideas that are in your cortical theory, the things you're working on, | |
1:38:43.080 --> 1:38:47.080 | |
do you think there's some possibility we'll hit an AI winter once again? | |
1:38:47.080 --> 1:38:49.080 | |
It's certainly a possibility. | |
1:38:49.080 --> 1:38:51.080 | |
That's something you worry about? | |
1:38:51.080 --> 1:38:54.080 | |
I guess, do I worry about it? | |
1:38:54.080 --> 1:38:58.080 | |
I haven't decided yet if that's good or bad for my mission. | |
1:38:58.080 --> 1:38:59.080 | |
That's true. | |
1:38:59.080 --> 1:39:04.080 | |
That's very true because it's almost like you need the winter to refresh the palate. | |
1:39:04.080 --> 1:39:08.080 | |
Here's what you want to have happen. | |
1:39:08.080 --> 1:39:15.080 | |
To the extent that everyone is so thrilled about the current state of machine learning and AI, | |
1:39:15.080 --> 1:39:20.080 | |
and they don't imagine they need anything else, it makes my job harder. | |
1:39:20.080 --> 1:39:24.080 | |
If everything crashed completely and every student left the field, | |
1:39:24.080 --> 1:39:26.080 | |
and there was no money for anybody to do anything, | |
1:39:26.080 --> 1:39:29.080 | |
and it became an embarrassment to talk about machine intelligence and AI, | |
1:39:29.080 --> 1:39:31.080 | |
that wouldn't be good for us either. | |
1:39:31.080 --> 1:39:33.080 | |
You want the soft landing approach, right? | |
1:39:33.080 --> 1:39:37.080 | |
You want enough people, the senior people in AI and machine learning to say, | |
1:39:37.080 --> 1:39:39.080 | |
you know, we need other approaches. | |
1:39:39.080 --> 1:39:40.080 | |
We really need other approaches. | |
1:39:40.080 --> 1:39:42.080 | |
Damn, we need other approaches. | |
1:39:42.080 --> 1:39:43.080 | |
Maybe we should look to the brain. | |
1:39:43.080 --> 1:39:44.080 | |
Okay, let's look to the brain. | |
1:39:44.080 --> 1:39:45.080 | |
Who's got some brain ideas? | |
1:39:45.080 --> 1:39:49.080 | |
Okay, let's start a little project on the side here trying to do brain idea related stuff. | |
1:39:49.080 --> 1:39:51.080 | |
That's the ideal outcome we would want. | |
1:39:51.080 --> 1:39:53.080 | |
So I don't want a total winter, | |
1:39:53.080 --> 1:39:57.080 | |
and yet I don't want it to be sunny all the time either. | |
1:39:57.080 --> 1:40:02.080 | |
So what do you think it takes to build a system with human level intelligence | |
1:40:02.080 --> 1:40:06.080 | |
where once demonstrated, you would be very impressed? | |
1:40:06.080 --> 1:40:08.080 | |
So does it have to have a body? | |
1:40:08.080 --> 1:40:18.080 | |
Does it have to have the C word we used before, consciousness, as an entirety, in a holistic sense? | |
1:40:18.080 --> 1:40:23.080 | |
First of all, I don't think the goal is to create a machine that is human level intelligence. | |
1:40:23.080 --> 1:40:24.080 | |
I think it's a false goal. | |
1:40:24.080 --> 1:40:26.080 | |
Back to Turing, I think it was a false statement. | |
1:40:26.080 --> 1:40:28.080 | |
We want to understand what intelligence is, | |
1:40:28.080 --> 1:40:31.080 | |
and then we can build intelligent machines of all different scales, | |
1:40:31.080 --> 1:40:33.080 | |
all different capabilities. | |
1:40:33.080 --> 1:40:35.080 | |
You know, a dog is intelligent. | |
1:40:35.080 --> 1:40:38.080 | |
You know, that would be pretty good to have a dog, you know, | |
1:40:38.080 --> 1:40:41.080 | |
but what about something that doesn't look like an animal at all in different spaces? | |
1:40:41.080 --> 1:40:45.080 | |
So my thinking about this is that we want to define what intelligence is, | |
1:40:45.080 --> 1:40:48.080 | |
agree upon what makes an intelligent system. | |
1:40:48.080 --> 1:40:52.080 | |
We can then say, okay, we're now going to build systems that work on those principles, | |
1:40:52.080 --> 1:40:57.080 | |
or some subset of them, and we can apply them to all different types of problems. | |
1:40:57.080 --> 1:41:00.080 | |
And the idea, it's like computing. | |
1:41:00.080 --> 1:41:05.080 | |
We don't ask, if I take a little, you know, one chip computer, | |
1:41:05.080 --> 1:41:08.080 | |
I don't say, well, that's not a computer because it's not as powerful as this, you know, | |
1:41:08.080 --> 1:41:09.080 | |
big server over here. | |
1:41:09.080 --> 1:41:11.080 | |
No, because we know what the principles of computing are, | |
1:41:11.080 --> 1:41:14.080 | |
and I can apply those principles to a small problem or to a big problem. | |
1:41:14.080 --> 1:41:16.080 | |
And the same with intelligence; we need to get there. | |
1:41:16.080 --> 1:41:17.080 | |
We have to say, these are the principles. | |
1:41:17.080 --> 1:41:19.080 | |
I can make a small one, a big one. | |
1:41:19.080 --> 1:41:20.080 | |
I can make them distributed. | |
1:41:20.080 --> 1:41:21.080 | |
I can put them on different sensors. | |
1:41:21.080 --> 1:41:23.080 | |
They don't have to be human like at all. | |
1:41:23.080 --> 1:41:25.080 | |
Now, you did bring up a very interesting question about embodiment. | |
1:41:25.080 --> 1:41:27.080 | |
Does it have to have a body? | |
1:41:27.080 --> 1:41:30.080 | |
It has to have some concept of movement. | |
1:41:30.080 --> 1:41:33.080 | |
It has to be able to move through these reference frames. | |
1:41:33.080 --> 1:41:35.080 | |
that I talked about earlier, whether it's physically moving, | |
1:41:35.080 --> 1:41:38.080 | |
like I need, if I'm going to have an AI that understands coffee cups, | |
1:41:38.080 --> 1:41:42.080 | |
it's going to have to pick up the coffee cup and touch it and look at it with its eyes | |
1:41:42.080 --> 1:41:45.080 | |
and hands or something equivalent to that. | |
1:41:45.080 --> 1:41:51.080 | |
If I have a mathematical AI, maybe it needs to move through mathematical spaces. | |
1:41:51.080 --> 1:41:55.080 | |
I could have a virtual AI that lives in the Internet | |
1:41:55.080 --> 1:42:00.080 | |
and its movements are traversing links and digging into files, | |
1:42:00.080 --> 1:42:04.080 | |
but it's got a location, and it's traveling through some space. | |
1:42:04.080 --> 1:42:08.080 | |
You can't have an AI that just takes some flashed thing as input, | |
1:42:08.080 --> 1:42:10.080 | |
you know, we call it flash inference. | |
1:42:10.080 --> 1:42:13.080 | |
Here's a pattern: done. | |
1:42:13.080 --> 1:42:16.080 | |
No, it's movement pattern, movement pattern, movement pattern. | |
1:42:16.080 --> 1:42:18.080 | |
Attention, digging, building, building structure, | |
1:42:18.080 --> 1:42:20.080 | |
just trying to figure out the model of the world. | |
1:42:20.080 --> 1:42:25.080 | |
So some sort of embodiment, whether it's physical or not, has to be part of it. | |
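A toy sketch of that sense-and-move loop, assuming a made-up "virtual" space of linked pages: the agent always has a location, and its model is indexed by (location, movement) pairs, never by a single flash of input. The graph and function names below are invented for illustration, not any actual Numenta system.

```python
# A toy "internet": nodes are pages, edges are links the agent can traverse.
web = {
    "home": {"links": ["docs", "blog"], "pattern": "welcome text"},
    "docs": {"links": ["home", "api"],  "pattern": "reference pages"},
    "blog": {"links": ["home"],         "pattern": "posts"},
    "api":  {"links": ["docs"],         "pattern": "endpoint list"},
}

def explore(start, steps):
    """Movement pattern, movement pattern: the model is built from
    where the agent was and how it moved, not from one flash of input."""
    model = {}
    location = start
    for _ in range(steps):
        for link in web[location]["links"]:   # attend to each possible move
            model[(location, link)] = web[link]["pattern"]
        location = web[location]["links"][0]  # move: the location changes
    return model

model = explore("home", steps=4)
# The model is indexed by a location plus a movement, a crude stand-in
# for a reference frame, rather than by raw patterns alone.
print(model[("home", "docs")])  # -> "reference pages"
```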
1:42:25.080 --> 1:42:28.080 | |
So self awareness, in the sense of being able to answer, where am I? | |
1:42:28.080 --> 1:42:31.080 | |
You bring up self awareness, it's a different topic, self awareness. | |
1:42:31.080 --> 1:42:37.080 | |
No, the very narrow definition, meaning having a sense of self enough to know | |
1:42:37.080 --> 1:42:40.080 | |
where am I in this space, essentially. | |
1:42:40.080 --> 1:42:43.080 | |
The system needs to know its location, | |
1:42:43.080 --> 1:42:48.080 | |
or each component of the system needs to know where it is in the world at that point in time. | |
1:42:48.080 --> 1:42:51.080 | |
So self awareness and consciousness. | |
1:42:51.080 --> 1:42:56.080 | |
Do you think, one, from the perspective of neuroscience and the neocortex, | |
1:42:56.080 --> 1:42:59.080 | |
these are interesting topics, solvable topics, | |
1:42:59.080 --> 1:43:04.080 | |
do you have any ideas of why the heck it is that we have a subjective experience at all? | |
1:43:04.080 --> 1:43:05.080 | |
Yeah, I have a lot of questions. | |
1:43:05.080 --> 1:43:08.080 | |
And is it useful, or is it just a side effect of us? | |
1:43:08.080 --> 1:43:10.080 | |
It's interesting to think about. | |
1:43:10.080 --> 1:43:16.080 | |
I don't think it's useful as a means to figure out how to build intelligent machines. | |
1:43:16.080 --> 1:43:21.080 | |
It's something that systems do, and we can talk about what it is, | |
1:43:21.080 --> 1:43:25.080 | |
such that we can say, well, if I build a system like this, then it would be self aware. | |
1:43:25.080 --> 1:43:28.080 | |
Or if I build it like this, it wouldn't be self aware. | |
1:43:28.080 --> 1:43:30.080 | |
So that's a choice I can have. | |
1:43:30.080 --> 1:43:32.080 | |
It's not like, oh my God, it's self aware. | |
1:43:32.080 --> 1:43:37.080 | |
I heard an interview recently with this philosopher from Yale. | |
1:43:37.080 --> 1:43:38.080 | |
I can't remember his name. | |
1:43:38.080 --> 1:43:39.080 | |
I apologize for that. | |
1:43:39.080 --> 1:43:41.080 | |
But he was talking about, well, if these computers were self aware, | |
1:43:41.080 --> 1:43:43.080 | |
then it would be a crime to unplug them. | |
1:43:43.080 --> 1:43:45.080 | |
And I'm like, oh, come on. | |
1:43:45.080 --> 1:43:46.080 | |
I unplug myself every night. | |
1:43:46.080 --> 1:43:47.080 | |
I go to sleep. | |
1:43:47.080 --> 1:43:49.080 | |
Is that a crime? | |
1:43:49.080 --> 1:43:51.080 | |
I plug myself in again in the morning. | |
1:43:51.080 --> 1:43:53.080 | |
There I am. | |
1:43:53.080 --> 1:43:56.080 | |
People get kind of bent out of shape about this. | |
1:43:56.080 --> 1:44:02.080 | |
I have very detailed understanding or opinions about what it means to be conscious | |
1:44:02.080 --> 1:44:04.080 | |
and what it means to be self aware. | |
1:44:04.080 --> 1:44:07.080 | |
I don't think it's that interesting a problem. | |
1:44:07.080 --> 1:44:08.080 | |
You talked about Christof Koch. | |
1:44:08.080 --> 1:44:10.080 | |
He thinks that's the only problem. | |
1:44:10.080 --> 1:44:12.080 | |
I didn't actually listen to your interview with him. | |
1:44:12.080 --> 1:44:15.080 | |
But I know him, and I know that's his thing. | |
1:44:15.080 --> 1:44:18.080 | |
He also thinks intelligence and consciousness are disjoint. | |
1:44:18.080 --> 1:44:21.080 | |
So I mean, it's not, you don't have to have one or the other. | |
1:44:21.080 --> 1:44:23.080 | |
I disagree with that. | |
1:44:23.080 --> 1:44:24.080 | |
I just totally disagree with that. | |
1:44:24.080 --> 1:44:26.080 | |
So what are your thoughts on consciousness? | |
1:44:26.080 --> 1:44:28.080 | |
Where does it emerge from? | |
1:44:28.080 --> 1:44:30.080 | |
Then we have to break it down into two parts. | |
1:44:30.080 --> 1:44:32.080 | |
Because consciousness isn't one thing. | |
1:44:32.080 --> 1:44:34.080 | |
That's part of the problem with that term. | |
1:44:34.080 --> 1:44:36.080 | |
It means different things to different people. | |
1:44:36.080 --> 1:44:38.080 | |
And there's different components of it. | |
1:44:38.080 --> 1:44:40.080 | |
There is a concept of self awareness. | |
1:44:40.080 --> 1:44:44.080 | |
That can be very easily explained. | |
1:44:44.080 --> 1:44:46.080 | |
You have a model of your own body. | |
1:44:46.080 --> 1:44:48.080 | |
The neocortex models things in the world. | |
1:44:48.080 --> 1:44:50.080 | |
And it also models your own body. | |
1:44:50.080 --> 1:44:53.080 | |
And then it has a memory. | |
1:44:53.080 --> 1:44:55.080 | |
It can remember what you've done. | |
1:44:55.080 --> 1:44:57.080 | |
So it can remember what you did this morning. | |
1:44:57.080 --> 1:44:59.080 | |
It can remember what you had for breakfast and so on. | |
1:44:59.080 --> 1:45:02.080 | |
And so I can say to you, okay, Lex, | |
1:45:02.080 --> 1:45:06.080 | |
were you conscious this morning when you had your bagel? | |
1:45:06.080 --> 1:45:08.080 | |
And you'd say, yes, I was conscious. | |
1:45:08.080 --> 1:45:11.080 | |
Now, what if I could take your brain and revert all the synapses | |
1:45:11.080 --> 1:45:13.080 | |
back to the state they were this morning? | |
1:45:13.080 --> 1:45:15.080 | |
And then I said to you, Lex, | |
1:45:15.080 --> 1:45:17.080 | |
were you conscious when you ate the bagel? | |
1:45:17.080 --> 1:45:18.080 | |
And you'd say, no, I wasn't conscious. | |
1:45:18.080 --> 1:45:20.080 | |
I'd say, here's a video of you eating the bagel. | |
1:45:20.080 --> 1:45:22.080 | |
And you'd say, I wasn't there. | |
1:45:22.080 --> 1:45:25.080 | |
That's not possible because I must have been unconscious at that time. | |
1:45:25.080 --> 1:45:27.080 | |
So we can just make this one to one correlation | |
1:45:27.080 --> 1:45:30.080 | |
between memory of your body's trajectory through the world | |
1:45:30.080 --> 1:45:32.080 | |
over some period of time. | |
1:45:32.080 --> 1:45:34.080 | |
And the ability to recall that memory | |
1:45:34.080 --> 1:45:36.080 | |
is what you would call conscious. | |
1:45:36.080 --> 1:45:38.080 | |
I was conscious of that. That's self awareness. | |
1:45:38.080 --> 1:45:41.080 | |
And any system that can recall, | |
1:45:41.080 --> 1:45:43.080 | |
memorize what it's done recently | |
1:45:43.080 --> 1:45:46.080 | |
and bring that back and invoke it again | |
1:45:46.080 --> 1:45:48.080 | |
would say, yeah, I'm aware. | |
1:45:48.080 --> 1:45:51.080 | |
I remember what I did. All right, I got it. | |
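A toy version of the bagel thought experiment, treating "I was conscious of X" as nothing more than successful recall of the system's own recent trajectory. Purely illustrative; the Agent class and its methods are invented for this sketch.

```python
class Agent:
    def __init__(self):
        self.memory = []  # trajectory of (time, action) the system can recall

    def act(self, t, action):
        self.memory.append((t, action))

    def was_aware_of(self, action):
        # The self-report is just successful recall of one's own trajectory.
        return any(a == action for _, a in self.memory)

agent = Agent()
agent.act(t=0, action="ate bagel")
print(agent.was_aware_of("ate bagel"))  # True: "yes, I was conscious"

# "Revert the synapses" to their earlier state: wipe the memory trace.
agent.memory.clear()
print(agent.was_aware_of("ate bagel"))  # False: "I wasn't there"
```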
1:45:51.080 --> 1:45:54.080 | |
That's an easy one. Although some people think that's a hard one. | |
1:45:54.080 --> 1:45:57.080 | |
The more challenging part of consciousness | |
1:45:57.080 --> 1:45:59.080 | |
is the one that's sometimes referred to | |
1:45:59.080 --> 1:46:01.080 | |
by the word qualia, | |
1:46:01.080 --> 1:46:04.080 | |
which is, you know, why does an object seem red? | |
1:46:04.080 --> 1:46:06.080 | |
Or what is pain? | |
1:46:06.080 --> 1:46:08.080 | |
And why does pain feel like something? | |
1:46:08.080 --> 1:46:10.080 | |
Why do I feel redness? | |
1:46:10.080 --> 1:46:12.080 | |
Or why do I feel painfulness, in a way? | |
1:46:12.080 --> 1:46:14.080 | |
And then I could say, well, why does sight | |
1:46:14.080 --> 1:46:16.080 | |
seem different than hearing? You know, it's the same problem. | |
1:46:16.080 --> 1:46:18.080 | |
It's really, you know, these are all just neurons. | |
1:46:18.080 --> 1:46:21.080 | |
And so how is it that looking at you | |
1:46:21.080 --> 1:46:24.080 | |
feels different than, you know, hearing you? | |
1:46:24.080 --> 1:46:26.080 | |
It feels different, but there's just neurons in my head. | |
1:46:26.080 --> 1:46:28.080 | |
They're all doing the same thing. | |
1:46:28.080 --> 1:46:30.080 | |
So that's an interesting question. | |
1:46:30.080 --> 1:46:32.080 | |
The best treatise I've read about this | |
1:46:32.080 --> 1:46:34.080 | |
is by a guy named O'Regan. | |
1:46:34.080 --> 1:46:38.080 | |
He wrote a book called Why Red Doesn't Sound Like a Bell. | |
1:46:38.080 --> 1:46:42.080 | |
It's not a trade book, not easy to read, | |
1:46:42.080 --> 1:46:45.080 | |
but it's an interesting question. | |
1:46:45.080 --> 1:46:47.080 | |
Take something like color. | |
1:46:47.080 --> 1:46:49.080 | |
Color really doesn't exist in the world. | |
1:46:49.080 --> 1:46:51.080 | |
It's not a property of the world. | |
1:46:51.080 --> 1:46:54.080 | |
The property of the world that exists is light frequency, | |
1:46:54.080 --> 1:46:57.080 | |
and that gets turned into neural signals: we have certain cells | |
1:46:57.080 --> 1:46:59.080 | |
in the retina that respond to different frequencies | |
1:46:59.080 --> 1:47:00.080 | |
differently than others. | |
1:47:00.080 --> 1:47:02.080 | |
And so when they enter the brain, you just have a bunch | |
1:47:02.080 --> 1:47:04.080 | |
of axons that are firing at different rates, | |
1:47:04.080 --> 1:47:06.080 | |
and from that we perceive color. | |
1:47:06.080 --> 1:47:08.080 | |
But there is no color in the brain. | |
1:47:08.080 --> 1:47:11.080 | |
I mean, there's no color coming in on those synapses. | |
1:47:11.080 --> 1:47:14.080 | |
It's just a correlation between some axons | |
1:47:14.080 --> 1:47:17.080 | |
and some property of frequency. | |
1:47:17.080 --> 1:47:19.080 | |
And that isn't even color itself. | |
1:47:19.080 --> 1:47:21.080 | |
Frequency doesn't have a color. | |
1:47:21.080 --> 1:47:23.080 | |
It's just what it is. | |
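A sketch of that point in code: by the time light reaches the brain, "red" is just a handful of axon firing rates. The Gaussian sensitivity curves and peak wavelengths below are rough stand-ins for real cone responses, not measured data.

```python
import math

# Approximate peak sensitivities of the three human cone types (nm).
CONE_PEAKS = {"S": 440.0, "M": 540.0, "L": 570.0}

def firing_rates(wavelength_nm, width=60.0):
    """Map a light wavelength to rate-coded activity on three channels.
    Nothing 'red' survives this step, only numbers."""
    return {
        cone: math.exp(-((wavelength_nm - peak) / width) ** 2)
        for cone, peak in CONE_PEAKS.items()
    }

rates = firing_rates(650.0)  # light we would call "red"
print(rates)  # roughly {'S': 0.0, 'M': 0.03, 'L': 0.17}: just firing rates
```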
1:47:23.080 --> 1:47:25.080 | |
So then the question is, well, why does it even | |
1:47:25.080 --> 1:47:27.080 | |
appear to have a color at all? | |
1:47:27.080 --> 1:47:30.080 | |
Just as you're describing it, there seems to be a connection | |
1:47:30.080 --> 1:47:32.080 | |
to those ideas of reference frames. | |
1:47:32.080 --> 1:47:38.080 | |
I mean, it just feels like consciousness having the subject, | |
1:47:38.080 --> 1:47:42.080 | |
assigning the feeling of red to the actual color | |
1:47:42.080 --> 1:47:47.080 | |
or to the wavelength is useful for intelligence. | |
1:47:47.080 --> 1:47:49.080 | |
Yeah, I think that's a good way of putting it. | |
1:47:49.080 --> 1:47:51.080 | |
It's useful as a predictive mechanism, | |
1:47:51.080 --> 1:47:53.080 | |
or useful as a generalization idea. | |
1:47:53.080 --> 1:47:55.080 | |
It's a way of grouping things together to say | |
1:47:55.080 --> 1:47:58.080 | |
it's useful to have a model like this. | |
1:47:58.080 --> 1:48:02.080 | |
Think about the well known syndrome that people | |
1:48:02.080 --> 1:48:06.080 | |
who've lost a limb experience called phantom limbs. | |
1:48:06.080 --> 1:48:11.080 | |
And what they claim is they can have their arm removed, | |
1:48:11.080 --> 1:48:13.080 | |
but they feel the arm. | |
1:48:13.080 --> 1:48:15.080 | |
Not only feel it, they know it's there. | |
1:48:15.080 --> 1:48:16.080 | |
It's there. | |
1:48:16.080 --> 1:48:17.080 | |
I know it's there. | |
1:48:17.080 --> 1:48:19.080 | |
They'll swear to you that it's there. | |
1:48:19.080 --> 1:48:20.080 | |
And then they can feel pain in their arm. | |
1:48:20.080 --> 1:48:22.080 | |
And they'll feel pain in their finger. | |
1:48:22.080 --> 1:48:25.080 | |
And if they move their nonexistent arm behind their back, | |
1:48:25.080 --> 1:48:27.080 | |
then they feel the pain behind their back. | |
1:48:27.080 --> 1:48:30.080 | |
So this whole idea that your arm exists | |
1:48:30.080 --> 1:48:31.080 | |
is a model in your brain. | |
1:48:31.080 --> 1:48:34.080 | |
It may or may not really exist. | |
1:48:34.080 --> 1:48:38.080 | |
But it's useful to have a model of something | |
1:48:38.080 --> 1:48:40.080 | |
that sort of correlates to things in the world | |
1:48:40.080 --> 1:48:42.080 | |
so you can make predictions about what would happen | |
1:48:42.080 --> 1:48:43.080 | |
when those things occur. | |
1:48:43.080 --> 1:48:44.080 | |
It's a little bit fuzzy, | |
1:48:44.080 --> 1:48:46.080 | |
but I think you're getting quite towards the answer there. | |
1:48:46.080 --> 1:48:51.080 | |
It's useful for the model to express things in certain ways | |
1:48:51.080 --> 1:48:53.080 | |
so that we can then map them into these reference frames | |
1:48:53.080 --> 1:48:55.080 | |
and make predictions about them. | |
1:48:55.080 --> 1:48:57.080 | |
I need to spend more time on this topic. | |
1:48:57.080 --> 1:48:58.080 | |
It doesn't bother me. | |
1:48:58.080 --> 1:49:00.080 | |
Do you really need to spend more time on this? | |
1:49:00.080 --> 1:49:01.080 | |
Yeah. | |
1:49:01.080 --> 1:49:04.080 | |
It does feel special that we have subjective experience, | |
1:49:04.080 --> 1:49:07.080 | |
but I'm yet to know why. | |
1:49:07.080 --> 1:49:08.080 | |
I'm just personally curious. | |
1:49:08.080 --> 1:49:10.080 | |
It's not necessary for the work we're doing here. | |
1:49:10.080 --> 1:49:12.080 | |
I don't think I need to solve that problem | |
1:49:12.080 --> 1:49:14.080 | |
to build intelligent machines at all. | |
1:49:14.080 --> 1:49:15.080 | |
Not at all. | |
1:49:15.080 --> 1:49:19.080 | |
But there is sort of the silly notion that you described briefly | |
1:49:19.080 --> 1:49:22.080 | |
that doesn't seem so silly to us humans is, | |
1:49:22.080 --> 1:49:26.080 | |
you know, if you're successful building intelligent machines, | |
1:49:26.080 --> 1:49:29.080 | |
it feels wrong to then turn them off. | |
1:49:29.080 --> 1:49:32.080 | |
Because if you're able to build a lot of them, | |
1:49:32.080 --> 1:49:36.080 | |
it feels wrong to then be able to, you know, | |
1:49:36.080 --> 1:49:38.080 | |
to turn off the... | |
1:49:38.080 --> 1:49:39.080 | |
Well, why? | |
1:49:39.080 --> 1:49:41.080 | |
Let's break that down a bit. | |
1:49:41.080 --> 1:49:43.080 | |
As humans, why do we fear death? | |
1:49:43.080 --> 1:49:46.080 | |
There's two reasons we fear death. | |
1:49:46.080 --> 1:49:48.080 | |
Well, first of all, I'll say when you're dead, it doesn't matter. | |
1:49:48.080 --> 1:49:49.080 | |
Okay. | |
1:49:49.080 --> 1:49:50.080 | |
You're dead. | |
1:49:50.080 --> 1:49:51.080 | |
So why do we fear death? | |
1:49:51.080 --> 1:49:53.080 | |
We fear death for two reasons. | |
1:49:53.080 --> 1:49:57.080 | |
One is because we are programmed genetically to fear death. | |
1:49:57.080 --> 1:50:02.080 | |
That's a survival and propagation of the genes thing. | |
1:50:02.080 --> 1:50:06.080 | |
And we also are programmed to feel sad when people we know die. | |
1:50:06.080 --> 1:50:08.080 | |
We don't feel sad when someone we don't know dies. | |
1:50:08.080 --> 1:50:09.080 | |
There's people dying right now. | |
1:50:09.080 --> 1:50:10.080 | |
I'm sad to say, | |
1:50:10.080 --> 1:50:11.080 | |
I don't feel bad about them because I don't know them. | |
1:50:11.080 --> 1:50:13.080 | |
But if I knew them, I might feel really bad. | |
1:50:13.080 --> 1:50:18.080 | |
So again, these are old brain genetically embedded things | |
1:50:18.080 --> 1:50:20.080 | |
that we fear death. | |
1:50:20.080 --> 1:50:23.080 | |
Outside of those uncomfortable feelings, | |
1:50:23.080 --> 1:50:25.080 | |
there's nothing else to worry about. | |
1:50:25.080 --> 1:50:27.080 | |
Well, wait, hold on a second. | |
1:50:27.080 --> 1:50:30.080 | |
Do you know The Denial of Death by Becker? | |
1:50:30.080 --> 1:50:36.080 | |
You know, there's a thought that death is, you know, | |
1:50:36.080 --> 1:50:43.080 | |
our whole conception of our world model kind of assumes immortality. | |
1:50:43.080 --> 1:50:47.080 | |
And then death is this terror that underlies it all. | |
1:50:47.080 --> 1:50:50.080 | |
So like, well, some people's world model, not mine. | |
1:50:50.080 --> 1:50:51.080 | |
But okay. | |
1:50:51.080 --> 1:50:54.080 | |
So what Becker would say is that you're just living in an illusion. | |
1:50:54.080 --> 1:50:58.080 | |
You've constructed an illusion for yourself because it's such a terrible terror. | |
1:50:58.080 --> 1:51:02.080 | |
The fact of death. This illusion that death doesn't matter. | |
1:51:02.080 --> 1:51:05.080 | |
You're still not coming to grips with the illusion of what? | |
1:51:05.080 --> 1:51:08.080 | |
That death is going to happen. | |
1:51:08.080 --> 1:51:10.080 | |
Oh, it's not going to happen. | |
1:51:10.080 --> 1:51:11.080 | |
You're actually operating as if it won't. | |
1:51:11.080 --> 1:51:13.080 | |
Even though you said you've accepted it, | |
1:51:13.080 --> 1:51:16.080 | |
you haven't really accepted the notion of death, is what he would say. | |
1:51:16.080 --> 1:51:21.080 | |
So it sounds like you disagree with that notion. | |
1:51:21.080 --> 1:51:22.080 | |
I mean, totally. | |
1:51:22.080 --> 1:51:27.080 | |
Like, literally every night I go to bed, it's like dying. | |
1:51:27.080 --> 1:51:28.080 | |
Little deaths. | |
1:51:28.080 --> 1:51:29.080 | |
Little deaths. | |
1:51:29.080 --> 1:51:32.080 | |
And if I didn't wake up, it wouldn't matter to me. | |
1:51:32.080 --> 1:51:35.080 | |
Only if I knew that was going to happen would it be bothersome. | |
1:51:35.080 --> 1:51:37.080 | |
If I didn't know it was going to happen, how would I know? | |
1:51:37.080 --> 1:51:39.080 | |
Then I would worry about my wife. | |
1:51:39.080 --> 1:51:40.080 | |
Yeah. | |
1:51:40.080 --> 1:51:44.080 | |
So imagine I was a loner and I lived in Alaska, out there where there | |
1:51:44.080 --> 1:51:45.080 | |
were no animals. | |
1:51:45.080 --> 1:51:46.080 | |
Nobody knew I existed. | |
1:51:46.080 --> 1:51:48.080 | |
I was just eating these roots all the time. | |
1:51:48.080 --> 1:51:50.080 | |
And nobody knew I was there. | |
1:51:50.080 --> 1:51:53.080 | |
And one day I didn't wake up. | |
1:51:53.080 --> 1:51:56.080 | |
What pain in the world would there exist? | |
1:51:56.080 --> 1:52:01.080 | |
Well, so most people that think about this problem would say that you're just deeply enlightened | |
1:52:01.080 --> 1:52:04.080 | |
or are completely delusional. | |
1:52:04.080 --> 1:52:05.080 | |
Wow. | |
1:52:05.080 --> 1:52:14.080 | |
But I would say that's a very enlightened way to see the world, that that's the rational one. | |
1:52:14.080 --> 1:52:15.080 | |
Well, I think it's rational. | |
1:52:15.080 --> 1:52:16.080 | |
That's right. | |
1:52:16.080 --> 1:52:22.080 | |
But the fact is we don't, I mean, we really don't have an understanding of why the heck | |
1:52:22.080 --> 1:52:26.080 | |
it is we're born and why we die and what happens after we die. | |
1:52:26.080 --> 1:52:27.080 | |
Well, maybe there isn't a reason. | |
1:52:27.080 --> 1:52:28.080 | |
Maybe there is. | |
1:52:28.080 --> 1:52:30.080 | |
So I'm interested in those big problems too, right? | |
1:52:30.080 --> 1:52:33.080 | |
You know, you interviewed Max Tegmark, and there's people like that, right? | |
1:52:33.080 --> 1:52:35.080 | |
I'm interested in those big problems as well. | |
1:52:35.080 --> 1:52:41.080 | |
And in fact, when I was young, I made a list of the biggest problems I could think of. | |
1:52:41.080 --> 1:52:43.080 | |
First, why does anything exist? | |
1:52:43.080 --> 1:52:46.080 | |
Second, why did we have the laws of physics that we have? | |
1:52:46.080 --> 1:52:49.080 | |
Third, is life inevitable? | |
1:52:49.080 --> 1:52:50.080 | |
And why is it here? | |
1:52:50.080 --> 1:52:52.080 | |
Fourth, is intelligence inevitable? | |
1:52:52.080 --> 1:52:53.080 | |
And why is it here? | |
1:52:53.080 --> 1:52:58.080 | |
I stopped there because I figured if you can make a truly intelligent system, | |
1:52:58.080 --> 1:53:03.080 | |
that'll be the quickest way to answer the first three questions. | |
1:53:03.080 --> 1:53:04.080 | |
I'm serious. | |
1:53:04.080 --> 1:53:05.080 | |
Yeah. | |
1:53:05.080 --> 1:53:09.080 | |
And so I said, my mission, you know, you asked me earlier, my first mission is to understand | |
1:53:09.080 --> 1:53:12.080 | |
the brain, but I felt that is the shortest way to get to true machine intelligence. | |
1:53:12.080 --> 1:53:16.080 | |
And I want to get to true machine intelligence because even if it doesn't occur in my lifetime, | |
1:53:16.080 --> 1:53:19.080 | |
other people will benefit from it. But I think it'll occur in my lifetime. | |
1:53:19.080 --> 1:53:21.080 | |
But, you know, 20 years, you never know. | |
1:53:21.080 --> 1:53:27.080 | |
But that will be the quickest way for us to, you know, make super mathematicians. | |
1:53:27.080 --> 1:53:29.080 | |
We can make super space explorers. | |
1:53:29.080 --> 1:53:36.080 | |
We can make super physicist brains that do these things and that can run experiments | |
1:53:36.080 --> 1:53:37.080 | |
that we can't run. | |
1:53:37.080 --> 1:53:40.080 | |
We don't have the abilities to manipulate things and so on. | |
1:53:40.080 --> 1:53:42.080 | |
But we can build intelligent machines to do all those things. | |
1:53:42.080 --> 1:53:48.080 | |
And with the ultimate goal of finding out the answers to the other questions. | |
1:53:48.080 --> 1:53:56.080 | |
Let me ask, you know, the depressing and difficult question, which is once we achieve that goal, | |
1:53:56.080 --> 1:54:03.080 | |
do you, of creating, no, of understanding intelligence, do you think we would be happier, | |
1:54:03.080 --> 1:54:05.080 | |
more fulfilled as a species? | |
1:54:05.080 --> 1:54:08.080 | |
The understanding intelligence or understanding the answers to the big questions? | |
1:54:08.080 --> 1:54:09.080 | |
Understanding intelligence. | |
1:54:09.080 --> 1:54:11.080 | |
Oh, totally. | |
1:54:11.080 --> 1:54:12.080 | |
Totally. | |
1:54:12.080 --> 1:54:14.080 | |
It would be a far more fun place to live. | |
1:54:14.080 --> 1:54:15.080 | |
You think so? | |
1:54:15.080 --> 1:54:16.080 | |
Oh, yeah. | |
1:54:16.080 --> 1:54:17.080 | |
Why not? | |
1:54:17.080 --> 1:54:22.080 | |
Just put aside this, you know, Terminator nonsense, and just think about it. | |
1:54:22.080 --> 1:54:26.080 | |
We can talk about the risks of AI if you want. | |
1:54:26.080 --> 1:54:27.080 | |
I'd love to. | |
1:54:27.080 --> 1:54:28.080 | |
So let's talk about that. | |
1:54:28.080 --> 1:54:30.080 | |
But I think the world is far better knowing things. | |
1:54:30.080 --> 1:54:32.080 | |
We're always better knowing things than not knowing things. | |
1:54:32.080 --> 1:54:33.080 | |
Do you think it's better? | |
1:54:33.080 --> 1:54:38.080 | |
Is it a better place to live in that I know that our planet is one of many in the solar system | |
1:54:38.080 --> 1:54:40.080 | |
and the solar system is one of many in the galaxy? | |
1:54:40.080 --> 1:54:44.080 | |
I sometimes think, like, God, what would it be like | |
1:54:44.080 --> 1:54:47.080 | |
300 years ago? I'd be looking up at the sky, and I couldn't understand anything. | |
1:54:47.080 --> 1:54:48.080 | |
Oh my God. | |
1:54:48.080 --> 1:54:50.080 | |
I'd be like going to bed every night going, what's going on here? | |
1:54:50.080 --> 1:54:54.080 | |
Well, I mean, in some sense, I agree with you, but I'm not exactly sure. | |
1:54:54.080 --> 1:54:55.080 | |
So I'm also a scientist. | |
1:54:55.080 --> 1:55:01.080 | |
So I share your views, but it's like we're rolling down the hill together. | |
1:55:01.080 --> 1:55:03.080 | |
What's down the hill? | |
1:55:03.080 --> 1:55:05.080 | |
I feel like we're climbing a hill. | |
1:55:05.080 --> 1:55:07.080 | |
Whatever it is, we're getting closer to enlightenment. | |
1:55:07.080 --> 1:55:08.080 | |
Whatever. | |
1:55:08.080 --> 1:55:12.080 | |
We're climbing, we're getting pulled up a hill. | |
1:55:12.080 --> 1:55:14.080 | |
Pulled up by our curiosity. | |
1:55:14.080 --> 1:55:17.080 | |
We're pulling ourselves up the hill by our curiosity. | |
1:55:17.080 --> 1:55:19.080 | |
Yeah, Sisyphus was doing the same thing with the rock. | |
1:55:19.080 --> 1:55:21.080 | |
Yeah, yeah, yeah. | |
1:55:21.080 --> 1:55:29.080 | |
But okay, our happiness aside, do you have concerns, like Sam Harris and Elon Musk talk about, | |
1:55:29.080 --> 1:55:32.080 | |
about existential threats from intelligent systems? | |
1:55:32.080 --> 1:55:34.080 | |
No, I'm not worried about existential threats at all. | |
1:55:34.080 --> 1:55:36.080 | |
There are some things we really do need to worry about. | |
1:55:36.080 --> 1:55:38.080 | |
Even today's AI, we have things we have to worry about. | |
1:55:38.080 --> 1:55:43.080 | |
We have to worry about privacy and about how it impacts false beliefs in the world. | |
1:55:43.080 --> 1:55:48.080 | |
And we have real problems and things to worry about with today's AI. | |
1:55:48.080 --> 1:55:51.080 | |
And that will continue as we create more intelligent systems. | |
1:55:51.080 --> 1:55:59.080 | |
There's no question, you know, the whole issue about, you know, making intelligent armament and weapons is something that really we have to think about carefully. | |
1:55:59.080 --> 1:56:01.080 | |
I don't think of those as existential threats. | |
1:56:01.080 --> 1:56:09.080 | |
I think those are the kind of threats we always face and we'll have to face them here and we'll have to deal with them. | |
1:56:09.080 --> 1:56:17.080 | |
We can talk about what people think are the existential threats, but when I hear people talking about them, they all sound hollow to me. | |
1:56:17.080 --> 1:56:21.080 | |
They're based on ideas, they're based on people who really have no idea what intelligence is. | |
1:56:21.080 --> 1:56:26.080 | |
And if they knew what intelligence was, they wouldn't say those things. | |
1:56:26.080 --> 1:56:29.080 | |
So those are not experts in the field, you know. | |
1:56:29.080 --> 1:56:31.080 | |
So there's two, right? | |
1:56:31.080 --> 1:56:33.080 | |
So one is like superintelligence. | |
1:56:33.080 --> 1:56:42.080 | |
So a system that becomes far, far superior in reasoning ability than us humans. | |
1:56:42.080 --> 1:56:45.080 | |
And how is that an existential threat? | |
1:56:45.080 --> 1:56:49.080 | |
So there's a lot of ways in which it could be. | |
1:56:49.080 --> 1:57:00.080 | |
One way is, us humans are actually irrational, inefficient, and get in the way of, not happiness, | |
1:57:00.080 --> 1:57:05.080 | |
but whatever the objective function is that the superintelligence is maximizing. | |
1:57:05.080 --> 1:57:07.080 | |
The paperclip problem and things like that. | |
1:57:07.080 --> 1:57:09.080 | |
So the paperclip problem, but with the superintelligence. | |
1:57:09.080 --> 1:57:10.080 | |
Yeah, yeah, yeah. | |
1:57:10.080 --> 1:57:15.080 | |
So we already faced this threat in some sense. | |
1:57:15.080 --> 1:57:17.080 | |
They're called bacteria. | |
1:57:17.080 --> 1:57:21.080 | |
These are organisms in the world that would like to turn everything into bacteria. | |
1:57:21.080 --> 1:57:23.080 | |
And they're constantly morphing. | |
1:57:23.080 --> 1:57:26.080 | |
They're constantly changing to evade our protections. | |
1:57:26.080 --> 1:57:33.080 | |
And in the past, they have killed huge swaths of populations of humans on this planet. | |
1:57:33.080 --> 1:57:38.080 | |
So if you want to worry about something that's going to multiply endlessly, we have it. | |
1:57:38.080 --> 1:57:43.080 | |
In that regard, I'm far more worried that some scientist in the laboratory | |
1:57:43.080 --> 1:57:47.080 | |
will create a super virus or a super bacteria that we cannot control. | |
1:57:47.080 --> 1:57:49.080 | |
That is a more existential threat. | |
1:57:49.080 --> 1:57:54.080 | |
Putting an intelligence thing on top of it actually seems to make it less existential to me. | |
1:57:54.080 --> 1:57:56.080 | |
It's like, it limits its power. | |
1:57:56.080 --> 1:57:57.080 | |
It limits where it can go. | |
1:57:57.080 --> 1:57:59.080 | |
It limits the number of things it can do in many ways. | |
1:57:59.080 --> 1:58:02.080 | |
A bacteria is something you can't even see. | |
1:58:02.080 --> 1:58:04.080 | |
So that's only one of those problems. | |
1:58:04.080 --> 1:58:05.080 | |
Yes, exactly. | |
1:58:05.080 --> 1:58:09.080 | |
So the other one, just in your intuition about intelligence, | |
1:58:09.080 --> 1:58:12.080 | |
when you think about intelligence of us humans, | |
1:58:12.080 --> 1:58:14.080 | |
do you think of that as something, | |
1:58:14.080 --> 1:58:18.080 | |
if you look at intelligence on a spectrum from zero to us humans, | |
1:58:18.080 --> 1:58:24.080 | |
do you think you can scale that to something far superior to all the mechanisms we've been talking about? | |
1:58:24.080 --> 1:58:27.080 | |
I want to make another point here, Lex, before I get there. | |
1:58:27.080 --> 1:58:30.080 | |
Intelligence is the neocortex. | |
1:58:30.080 --> 1:58:32.080 | |
It is not the entire brain. | |
1:58:32.080 --> 1:58:36.080 | |
The goal is not to make a human. | |
1:58:36.080 --> 1:58:38.080 | |
The goal is not to make an emotional system. | |
1:58:38.080 --> 1:58:41.080 | |
The goal is not to make a system that wants to have sex and reproduce. | |
1:58:41.080 --> 1:58:42.080 | |
Why would I build that? | |
1:58:42.080 --> 1:58:44.080 | |
If I want to have a system that wants to reproduce and have sex, | |
1:58:44.080 --> 1:58:47.080 | |
make bacteria, make computer viruses. | |
1:58:47.080 --> 1:58:48.080 | |
Those are bad things. | |
1:58:48.080 --> 1:58:49.080 | |
Don't do that. | |
1:58:49.080 --> 1:58:50.080 | |
Those are really bad. | |
1:58:50.080 --> 1:58:51.080 | |
Don't do those things. | |
1:58:51.080 --> 1:58:53.080 | |
Regulate those. | |
1:58:53.080 --> 1:58:56.080 | |
But if I just say, I want an intelligent system, | |
1:58:56.080 --> 1:58:58.080 | |
why doesn't it have to have any human like emotions? | |
1:58:58.080 --> 1:59:00.080 | |
Why does it even care if it lives? | |
1:59:00.080 --> 1:59:02.080 | |
Why does it even care if it has food? | |
1:59:02.080 --> 1:59:04.080 | |
It doesn't care about those things. | |
1:59:04.080 --> 1:59:07.080 | |
It's just in a trance thinking about mathematics, | |
1:59:07.080 --> 1:59:12.080 | |
or it's out there just trying to build the space for it on Mars. | |
1:59:12.080 --> 1:59:15.080 | |
That's a choice we make. | |
1:59:15.080 --> 1:59:17.080 | |
Don't make human like things. | |
1:59:17.080 --> 1:59:18.080 | |
Don't make replicating things. | |
1:59:18.080 --> 1:59:19.080 | |
Don't make things that have emotions. | |
1:59:19.080 --> 1:59:21.080 | |
Just stick to the neocortex. | |
1:59:21.080 --> 1:59:24.080 | |
That's a view, actually, that I share, but not everybody shares, | |
1:59:24.080 --> 1:59:28.080 | |
in the sense that you have faith and optimism about us as engineers | |
1:59:28.080 --> 1:59:31.080 | |
of systems, humans as builders of systems, | |
1:59:31.080 --> 1:59:35.080 | |
to not put in these stupid things. | |
1:59:35.080 --> 1:59:37.080 | |
This is why I mentioned the bacteria one, | |
1:59:37.080 --> 1:59:40.080 | |
because you might say, well, some person's going to do that. | |
1:59:40.080 --> 1:59:42.080 | |
Well, some person today could create a bacteria | |
1:59:42.080 --> 1:59:46.080 | |
that's resistant to all the known antibacterial agents. | |
1:59:46.080 --> 1:59:49.080 | |
So we already have that threat. | |
1:59:49.080 --> 1:59:51.080 | |
We already know this is going on. | |
1:59:51.080 --> 1:59:52.080 | |
It's not a new threat. | |
1:59:52.080 --> 1:59:56.080 | |
So just accept that, and then we have to deal with it, right? | |
1:59:56.080 --> 1:59:59.080 | |
Yeah, so my point is nothing to do with intelligence. | |
1:59:59.080 --> 2:00:02.080 | |
Intelligence is a separate component that you might apply | |
2:00:02.080 --> 2:00:05.080 | |
to a system that wants to reproduce and do stupid things. | |
2:00:05.080 --> 2:00:07.080 | |
Let's not do that. | |
2:00:07.080 --> 2:00:10.080 | |
Yeah, in fact, it is a mystery why people haven't done that yet. | |
2:00:10.080 --> 2:00:14.080 | |
My dad, as a physicist, believed there are reasons why, | |
2:00:14.080 --> 2:00:19.080 | |
for example, nuclear weapons haven't proliferated amongst evil people. | |
2:00:19.080 --> 2:00:25.080 | |
So one belief that I share is that there's not that many evil people in the world | |
2:00:25.080 --> 2:00:32.080 | |
that would use, whether it's bacteria or nuclear weapons, | |
2:00:32.080 --> 2:00:35.080 | |
or maybe future AI systems, to do bad. | |
2:00:35.080 --> 2:00:37.080 | |
So the fraction is small. | |
2:00:37.080 --> 2:00:40.080 | |
And the second is that it's actually really hard, technically. | |
2:00:40.080 --> 2:00:45.080 | |
So the intersection between evil and competent is small. | |
2:00:45.080 --> 2:00:47.080 | |
And by the way, to really annihilate humanity, | |
2:00:47.080 --> 2:00:51.080 | |
you'd have to have sort of the nuclear winter phenomenon, | |
2:00:51.080 --> 2:00:54.080 | |
which is not one person shooting or even 10 bombs. | |
2:00:54.080 --> 2:00:58.080 | |
You'd have to have some automated system that detonates a million bombs, | |
2:00:58.080 --> 2:01:00.080 | |
or whatever many thousands we have. | |
2:01:00.080 --> 2:01:03.080 | |
So it's extreme evil combined with extreme competence. | |
2:01:03.080 --> 2:01:06.080 | |
Or somebody building some stupid system that would do it automatically, | |
2:01:06.080 --> 2:01:10.080 | |
you know, a Dr. Strangelove type of thing. | |
2:01:10.080 --> 2:01:14.080 | |
I mean, look, we could have some nuclear bomb go off in some major city in the world. | |
2:01:14.080 --> 2:01:17.080 | |
I think that's actually quite likely, even in my lifetime. | |
2:01:17.080 --> 2:01:20.080 | |
I don't think that's an unlikely thing, and it would be a tragedy. | |
2:01:20.080 --> 2:01:23.080 | |
But it won't be an existential threat. | |
2:01:23.080 --> 2:01:27.080 | |
And it's the same as, you know, the virus of 1917 or whatever it was, | |
2:01:27.080 --> 2:01:29.080 | |
you know, the influenza. | |
2:01:29.080 --> 2:01:33.080 | |
These bad things can happen and the plague and so on. | |
2:01:33.080 --> 2:01:35.080 | |
We can't always prevent it. | |
2:01:35.080 --> 2:01:37.080 | |
We always try, but we can't. | |
2:01:37.080 --> 2:01:41.080 | |
But they're not existential threats until we combine all those crazy things together. | |
2:01:41.080 --> 2:01:45.080 | |
So on the spectrum of intelligence from zero to human, | |
2:01:45.080 --> 2:01:51.080 | |
do you have a sense of whether it's possible to create several orders of magnitude | |
2:01:51.080 --> 2:01:54.080 | |
or at least double that of human intelligence, | |
2:01:54.080 --> 2:01:56.080 | |
with the neocortex we've been talking about? | |
2:01:56.080 --> 2:01:58.080 | |
I think it's the wrong thing to say, double the intelligence. | |
2:01:58.080 --> 2:02:01.080 | |
Break it down into different components. | |
2:02:01.080 --> 2:02:04.080 | |
Can I make something that's a million times faster than a human brain? | |
2:02:04.080 --> 2:02:06.080 | |
Yes, I can do that. | |
2:02:06.080 --> 2:02:10.080 | |
Could I make something that has a lot more storage than a human brain? | |
2:02:10.080 --> 2:02:13.080 | |
Yes, I can do that. More copies of columns. | |
2:02:13.080 --> 2:02:16.080 | |
Can I make something that attaches to different sensors than a human brain? | |
2:02:16.080 --> 2:02:17.080 | |
Yes, I can do that. | |
2:02:17.080 --> 2:02:19.080 | |
Could I make something that's distributed? | |
2:02:19.080 --> 2:02:23.080 | |
We talked earlier about the parts of the neocortex voting. | |
2:02:23.080 --> 2:02:25.080 | |
They don't have to be co located. | |
2:02:25.080 --> 2:02:29.080 | |
They can be all around the place. I could do that too. | |
2:02:29.080 --> 2:02:32.080 | |
Those are the levers I have, but is it more intelligent? | |
2:02:32.080 --> 2:02:35.080 | |
Well, it depends what I train it on. What is it doing? | |
2:02:35.080 --> 2:02:37.080 | |
So here's the thing. | |
2:02:37.080 --> 2:02:46.080 | |
Let's say a larger neocortex, or whatever size allows for higher and higher hierarchies | |
2:02:46.080 --> 2:02:49.080 | |
to form, we're talking about reference frames and concepts. | |
2:02:49.080 --> 2:02:53.080 | |
So could I have something that's a super physicist or a super mathematician? Yes. | |
2:02:53.080 --> 2:02:59.080 | |
And the question is, once you have a super physicist, will we be able to understand them? | |
2:02:59.080 --> 2:03:03.080 | |
Do you have a sense that it will be orders of magnitude beyond us, like us compared to ants? | |
2:03:03.080 --> 2:03:04.080 | |
Could we ever understand it? | |
2:03:04.080 --> 2:03:05.080 | |
Yeah. | |
2:03:05.080 --> 2:03:11.080 | |
Most people cannot understand general relativity. | |
2:03:11.080 --> 2:03:13.080 | |
It's a really hard thing to get. | |
2:03:13.080 --> 2:03:17.080 | |
I mean, you can paint a fuzzy picture of it, stretchy space, you know? | |
2:03:17.080 --> 2:03:18.080 | |
Yeah. | |
2:03:18.080 --> 2:03:23.080 | |
But the field equations to do that and the deep intuitions are really, really hard. | |
2:03:23.080 --> 2:03:26.080 | |
And I've tried, I'm unable to do it. | |
2:03:26.080 --> 2:03:32.080 | |
It's easy to get special relativity, but general relativity, man, that's too much. | |
2:03:32.080 --> 2:03:35.080 | |
And so we already live with this to some extent. | |
2:03:35.080 --> 2:03:40.080 | |
The vast majority of people can't actually understand what the vast majority of other people know. | |
2:03:40.080 --> 2:03:45.080 | |
Either we don't make the effort to, or we can't, or we don't have time, or we're just not smart enough, whatever. | |
2:03:45.080 --> 2:03:48.080 | |
So, but we have ways of communicating. | |
2:03:48.080 --> 2:03:51.080 | |
Einstein spoke in a way that I can understand. | |
2:03:51.080 --> 2:03:54.080 | |
He's given me analogies that are useful. | |
2:03:54.080 --> 2:04:00.080 | |
I can use those analogies for my own work and think about, you know, concepts that are similar. | |
2:04:00.080 --> 2:04:02.080 | |
It's not stupid. | |
2:04:02.080 --> 2:04:04.080 | |
It's not like he existed on some other plane | |
2:04:04.080 --> 2:04:06.080 | |
with no connection to my plane in the world here. | |
2:04:06.080 --> 2:04:07.080 | |
So that will occur. | |
2:04:07.080 --> 2:04:09.080 | |
It already has occurred. | |
2:04:09.080 --> 2:04:12.080 | |
That's my point with this story: it already has occurred. | |
2:04:12.080 --> 2:04:14.080 | |
We live it every day. | |
2:04:14.080 --> 2:04:21.080 | |
One could argue that if we create machine intelligences that think a million times faster than us, they'll be so far beyond us that we can't make the connections. | |
2:04:21.080 --> 2:04:29.080 | |
But, you know, at the moment, everything that seems really, really hard to figure out in the world when you actually figure it out is not that hard. | |
2:04:29.080 --> 2:04:32.080 | |
You know, almost everyone can understand the multiverse. | |
2:04:32.080 --> 2:04:34.080 | |
Almost everyone can understand quantum physics. | |
2:04:34.080 --> 2:04:38.080 | |
Almost everyone can understand these basic things, even though hardly any people could figure those things out. | |
2:04:38.080 --> 2:04:40.080 | |
Yeah, but really understand. | |
2:04:40.080 --> 2:04:43.080 | |
But you don't need to; only a few people really understand. | |
2:04:43.080 --> 2:04:49.080 | |
You only need to understand the projections, the sprinkles of useful insight from that. | |
2:04:49.080 --> 2:04:51.080 | |
That was my example of Einstein, right? | |
2:04:51.080 --> 2:04:55.080 | |
His general theory of relativity is one thing that very, very, very few people can get. | |
2:04:55.080 --> 2:04:59.080 | |
And what if we just said those other few people are also artificial intelligences? | |
2:04:59.080 --> 2:05:01.080 | |
How bad is that? | |
2:05:01.080 --> 2:05:02.080 | |
In some sense they are, right? | |
2:05:02.080 --> 2:05:03.080 | |
Yeah, in a sense, already. | |
2:05:03.080 --> 2:05:05.080 | |
I mean, Einstein wasn't a really normal person. | |
2:05:05.080 --> 2:05:07.080 | |
He had a lot of weird quirks. | |
2:05:07.080 --> 2:05:09.080 | |
And so did the other people who worked with him. | |
2:05:09.080 --> 2:05:15.080 | |
So, you know, maybe they were already sort of on this other plane of intelligence, and we live with it already. | |
2:05:15.080 --> 2:05:17.080 | |
It's not a problem. | |
2:05:17.080 --> 2:05:20.080 | |
It's still useful and, you know. | |
2:05:20.080 --> 2:05:24.080 | |
So do you think we are the only intelligent life out there in the universe? | |
2:05:24.080 --> 2:05:29.080 | |
I would say that intelligent life has and will exist elsewhere in the universe. | |
2:05:29.080 --> 2:05:31.080 | |
I'll say that. | |
2:05:31.080 --> 2:05:39.080 | |
There is a question about contemporaneous intelligent life, which is hard to even answer when we think about relativity and the nature of spacetime. | |
2:05:39.080 --> 2:05:43.080 | |
We can't say what exactly is happening at this time someplace else in the universe. | |
2:05:43.080 --> 2:05:54.080 | |
But, you know, I do worry a lot about the filter idea, which is that perhaps intelligent species don't last very long. | |
2:05:54.080 --> 2:05:56.080 | |
And so we haven't been around very long. | |
2:05:56.080 --> 2:06:02.080 | |
As a technological species, we've been around for almost nothing, you know, what, 200 years or something like that. | |
2:06:02.080 --> 2:06:08.080 | |
And we don't have any good data point on whether it's likely that we'll survive or not. | |
2:06:08.080 --> 2:06:12.080 | |
So do I think that there has been intelligent life elsewhere in the universe? | |
2:06:12.080 --> 2:06:14.080 | |
Almost certainly, of course. | |
2:06:14.080 --> 2:06:16.080 | |
In the past, in the future, yes. | |
2:06:16.080 --> 2:06:18.080 | |
Does it survive for a long time? | |
2:06:18.080 --> 2:06:19.080 | |
I don't know. | |
2:06:19.080 --> 2:06:25.080 | |
This is another reason I'm excited about our work, is our work meaning the general world of AI. | |
2:06:25.080 --> 2:06:31.080 | |
I think we can build intelligent machines that outlast us. | |
2:06:31.080 --> 2:06:34.080 | |
You know, they don't have to be tied to Earth. | |
2:06:34.080 --> 2:06:39.080 | |
They don't have to... I'm not saying we're recreating, you know, aliens. | |
2:06:39.080 --> 2:06:44.080 | |
I'm just saying, if I asked myself, and this might be a good point to end on here. | |
2:06:44.080 --> 2:06:47.080 | |
If I asked myself, you know, what's special about our species? | |
2:06:47.080 --> 2:06:49.080 | |
We're not particularly interesting physically. | |
2:06:49.080 --> 2:06:51.080 | |
We're not, we don't fly. | |
2:06:51.080 --> 2:06:52.080 | |
We're not good swimmers. | |
2:06:52.080 --> 2:06:53.080 | |
We're not very fast. | |
2:06:53.080 --> 2:06:54.080 | |
We're not very strong, you know. | |
2:06:54.080 --> 2:06:55.080 | |
It's our brain. | |
2:06:55.080 --> 2:06:56.080 | |
That's the only thing. | |
2:06:56.080 --> 2:07:01.080 | |
And we are the only species on this planet that's built a model of the world that extends beyond what we can actually sense. | |
2:07:01.080 --> 2:07:09.080 | |
We're the only people who know about the far side of the moon and other universes and other galaxies and other stars and what happens in the atom. | |
2:07:09.080 --> 2:07:12.080 | |
That knowledge doesn't exist anywhere else. | |
2:07:12.080 --> 2:07:13.080 | |
It's only in our heads. | |
2:07:13.080 --> 2:07:14.080 | |
Cats don't do it. | |
2:07:14.080 --> 2:07:15.080 | |
Dogs don't do it. | |
2:07:15.080 --> 2:07:16.080 | |
Monkeys don't do it. | |
2:07:16.080 --> 2:07:18.080 | |
That is what we've created that's unique. | |
2:07:18.080 --> 2:07:19.080 | |
Not our genes. | |
2:07:19.080 --> 2:07:20.080 | |
It's knowledge. | |
2:07:20.080 --> 2:07:24.080 | |
And if you ask me, what is the legacy of humanity? | |
2:07:24.080 --> 2:07:25.080 | |
What should our legacy be? | |
2:07:25.080 --> 2:07:26.080 | |
It should be knowledge. | |
2:07:26.080 --> 2:07:30.080 | |
We should preserve our knowledge in a way that it can exist beyond us. | |
2:07:30.080 --> 2:07:38.080 | |
And I think the best way of doing that, in fact, the way you have to do it, is for it to go along with intelligent machines that understand that knowledge. | |
2:07:38.080 --> 2:07:44.080 | |
It's a very broad idea, but we should be thinking about it; I call it estate planning for humanity. | |
2:07:44.080 --> 2:07:49.080 | |
We should be thinking about what we want to leave behind when as a species we're no longer here. | |
2:07:49.080 --> 2:07:51.080 | |
And that will happen sometime. | |
2:07:51.080 --> 2:07:52.080 | |
Sooner or later, it's going to happen. | |
2:07:52.080 --> 2:07:58.080 | |
And understanding intelligence and creating intelligence gives us a better chance to prolong. | |
2:07:58.080 --> 2:08:00.080 | |
It does give us a better chance to prolong life. | |
2:08:00.080 --> 2:08:01.080 | |
Yes. | |
2:08:01.080 --> 2:08:03.080 | |
It gives us a chance to live on other planets. | |
2:08:03.080 --> 2:08:07.080 | |
But even beyond that, I mean, our solar system will disappear one day. | |
2:08:07.080 --> 2:08:09.080 | |
Given enough time. | |
2:08:09.080 --> 2:08:10.080 | |
So I don't know. | |
2:08:10.080 --> 2:08:14.080 | |
I doubt we will ever be able to travel to other solar systems, | |
2:08:14.080 --> 2:08:18.080 | |
to travel to the stars, but we could send intelligent machines to do that. | |
2:08:18.080 --> 2:08:29.080 | |
Do you have an optimistic, a hopeful view of our knowledge of the echoes of human civilization living through the intelligent systems we create? | |
2:08:29.080 --> 2:08:30.080 | |
Oh, totally. | |
2:08:30.080 --> 2:08:32.080 | |
Well, I think the intelligent systems we create are, in some sense, | |
2:08:32.080 --> 2:08:39.080 | |
the vessel for bringing that knowledge beyond Earth or making it last beyond humans themselves. | |
2:08:39.080 --> 2:08:41.080 | |
So how do you feel about that? | |
2:08:41.080 --> 2:08:44.080 | |
That they won't be human, quote unquote. | |
2:08:44.080 --> 2:08:48.080 | |
Human? What is human? Our species is changing all the time. | |
2:08:48.080 --> 2:08:52.080 | |
Human today is not the same as human just 50 years ago. | |
2:08:52.080 --> 2:08:54.080 | |
What is human? Do we care about our genetics? | |
2:08:54.080 --> 2:08:56.080 | |
Why is that important? | |
2:08:56.080 --> 2:08:59.080 | |
As I point out, our genetics are no more interesting than a bacterium's genetics. | |
2:08:59.080 --> 2:09:01.080 | |
It's no more interesting than a monkey's genetics. | |
2:09:01.080 --> 2:09:07.080 | |
What we have, what's unique and what's valuable is our knowledge, what we've learned about the world. | |
2:09:07.080 --> 2:09:09.080 | |
And that is the rare thing. | |
2:09:09.080 --> 2:09:11.080 | |
That's the thing we want to preserve. | |
2:09:11.080 --> 2:09:15.080 | |
Who cares about our genes? | |
2:09:15.080 --> 2:09:17.080 | |
It's the knowledge. | |
2:09:17.080 --> 2:09:19.080 | |
That's a really good place to end. | |
2:09:19.080 --> 2:09:42.080 | |
Thank you so much for talking to me. | |