WEBVTT
00:00.000 --> 00:05.840
The following is a conversation with Tuomas Sandholm. He's a professor at CMU and co-creator of
00:05.840 --> 00:11.440
Libratus, which is the first AI system to beat top human players in the game of Heads Up No Limit
00:11.440 --> 00:17.840
Texas Hold'em. He has published over 450 papers on game theory and machine learning, including a
00:17.840 --> 00:25.360
best paper in 2017 at NIPS, now renamed to NeurIPS, which is where I caught up with him for this
00:25.360 --> 00:31.520
conversation. His research and companies have had wide reaching impact in the real world,
00:32.080 --> 00:38.720
especially because he and his group not only propose new ideas, but also build systems to prove
00:38.720 --> 00:44.800
that these ideas work in the real world. This conversation is part of the MIT course on
00:44.800 --> 00:50.400
artificial general intelligence and the Artificial Intelligence podcast. If you enjoy it, subscribe
00:50.400 --> 00:58.720
on YouTube, iTunes, or simply connect with me on Twitter at Lex Fridman, spelled FRID. And now
00:58.720 --> 01:06.960
here's my conversation with Tuomas Sandholm. Can you describe at a high level the game of poker,
01:06.960 --> 01:13.200
Texas Hold'em, Heads Up Texas Hold'em, for people who might not be familiar with this card game?
01:13.200 --> 01:18.720
Yeah, happy to. So Heads Up No Limit Texas Hold'em has really emerged in the AI community
01:18.720 --> 01:24.160
as a main benchmark for testing these application independent algorithms for
01:24.160 --> 01:31.200
imperfect information game solving. And this is a game that's actually played by humans. You don't
01:31.200 --> 01:38.560
see that much on TV or casinos because, well, for various reasons, but you do see it in some
01:38.560 --> 01:44.000
expert level casinos and you see it in the best poker movies of all time. It's actually an event
01:44.000 --> 01:50.400
in the world series of poker, but mostly it's played online and typically for pretty big sums
01:50.400 --> 01:57.200
of money. And this is a game that usually only experts play. So if you go to your home game
01:57.200 --> 02:01.680
on a Friday night, it probably is not going to be Heads Up No Limit Texas Hold'em. It might be
02:02.800 --> 02:08.560
No Limit Texas Hold'em in some cases, but typically with a big group, and it's not as competitive.
02:08.560 --> 02:14.320
Well, Heads Up means it's two players, so it's really like me against you. Am I better or are you
02:14.320 --> 02:20.240
better, much like chess or go in that sense, but an imperfect information game which makes it much
02:20.240 --> 02:25.920
harder because I have to deal with issues of you knowing things that I don't know and I know
02:25.920 --> 02:30.960
things that you don't know instead of pieces being nicely laid on the board for both of us to see.
02:30.960 --> 02:38.400
So in Texas Hold'em, there are two cards that only you see, that belong to you, and then
02:38.400 --> 02:44.080
they gradually lay out some cards that add up overall to five cards that everybody can see.
02:44.080 --> 02:48.800
Yeah, the imperfect nature of the information is the two cards that you're holding up front.
02:48.800 --> 02:54.160
Yeah. So as you said, you know, you first get two cards in private each, and then you there's a
02:54.160 --> 02:59.440
betting round, then you get three cards in public on the table, then there's a betting round, then
02:59.440 --> 03:03.760
you get the fourth card in public on the table, there's a betting round, then you get the fifth
03:03.760 --> 03:08.480
card on the table, there's a betting round. So there's a total of four betting rounds and four
03:08.480 --> 03:14.480
tranches of information revelation, if you will. Only the first tranche is private, and then
03:14.480 --> 03:25.200
it's public from there. And this is probably by far the most popular game in AI and just
03:25.200 --> 03:30.080
the general public in terms of imperfect information. So it's probably the most popular
03:30.080 --> 03:38.000
spectator game to watch, right? So, which is why it's a super exciting game to tackle. So it's on
03:38.000 --> 03:44.400
the order of chess, I would say in terms of popularity, in terms of AI setting it as the bar
03:44.400 --> 04:01.280
of what is intelligence. So in 2017, Libratus beat four expert human players. Can you
04:01.280 --> 04:06.720
describe that event? What you learned from it? What was it like? What was the process in general
04:06.720 --> 04:12.960
for people who have not read the papers in the study? Yeah. So the event was that we invited
04:12.960 --> 04:17.600
four of the top 10 players, and these are specialist players in Heads Up No Limit Texas Hold'em,
04:17.600 --> 04:22.960
which is very important because this game is actually quite different than the multiplayer
04:22.960 --> 04:29.360
version. We brought them in to Pittsburgh to play at the Rivers Casino for 20 days. We wanted to
04:29.360 --> 04:37.440
get 120,000 hands in because we wanted to get statistical significance. So it's a lot of hands
04:37.440 --> 04:44.000
for humans to play, even for these top pros who play fairly quickly normally. So we couldn't just
04:44.000 --> 04:49.760
have one of them play so many hands. 20 days, they were playing basically morning to evening.
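A quick aside on the arithmetic behind that sample size: the standard error of a mean win rate shrinks as one over the square root of the number of hands. A minimal Python sketch, where the per-hand standard deviation and the edge are assumed ballpark figures for illustration, not numbers from the match:

import math

# Assumed ballpark figures, for illustration only:
STD_PER_HAND_BB = 10.0  # std dev of winnings per hand, in big blinds (assumed)
TRUE_EDGE_BB = 0.15     # true edge, in big blinds per hand (assumed)

for n_hands in (1_000, 10_000, 120_000):
    stderr = STD_PER_HAND_BB / math.sqrt(n_hands)
    z = TRUE_EDGE_BB / stderr  # how many standard errors the edge spans
    print(f"{n_hands:>7} hands: stderr = {stderr:.3f} bb/hand, z = {z:.2f}")

# Only around 100,000+ hands does an edge of this size clear ~1.96 standard
# errors, i.e., reach statistical significance at the 95% level.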
04:50.320 --> 04:57.920
And you raised $200,000 as a little incentive for them to play. And the setting was such that
04:57.920 --> 05:05.280
they didn't all just get $50,000. We actually paid them out based on how they each did against the AI.
05:05.280 --> 05:11.040
So they had an incentive to play as hard as they could, whether they're way ahead or way behind
05:11.040 --> 05:16.400
or right at the mark of beating the AI. And you don't make any money, unfortunately. Right. No,
05:16.400 --> 05:22.560
we can't make any money. So originally, a couple of years earlier, I actually explored whether we
05:22.560 --> 05:28.400
could actually play for money because that would be, of course, interesting as well to play against
05:28.400 --> 05:33.280
the top people for money. But the Pennsylvania Gaming Board said no. So we couldn't. So this
05:33.280 --> 05:40.160
is much like an exhibition, like for a musician or a boxer or something like that. Nevertheless,
05:40.160 --> 05:49.040
they were keeping track of the money, and Libratus won close to $2 million, I think. So if it was
05:49.040 --> 05:55.200
for real money, if you were able to earn money, that's quite an impressive and inspiring achievement.
05:55.200 --> 06:00.720
Just a few details. What were the players looking at? Were they behind a computer? What
06:00.720 --> 06:06.080
was the interface like? Yes, they were playing much like they normally do. These top players,
06:06.080 --> 06:11.600
when they play this game, they play mostly online. So they're used to playing through a UI.
06:11.600 --> 06:16.080
And they did the same thing here. So there was this layout, you could imagine. There's a table
06:16.880 --> 06:22.880
on a screen. There's the human sitting there. And then there's the AI sitting there. And the
06:22.880 --> 06:27.440
screen shows everything that's happening. The cards coming out and shows the bets being made.
06:27.440 --> 06:32.320
And we also had the betting history for the human. So if the human forgot what had happened in the
06:32.320 --> 06:39.040
hand so far, they could actually reference back and so forth. Is there a reason they were given
06:39.040 --> 06:46.880
access to the betting history? Well, it didn't really matter. They wouldn't have forgotten
06:46.880 --> 06:52.160
anyway. These are top quality people. But we just wanted to put it out there so it's not a question
06:52.160 --> 06:56.560
of the human forgetting and the AI somehow trying to get that advantage of better memory.
06:56.560 --> 07:01.040
So what was that like? I mean, that was an incredible accomplishment. So what did it feel like
07:01.040 --> 07:07.440
before the event? Did you have doubt, hope? Where was your confidence at?
07:08.080 --> 07:13.760
Yeah, that's great. Great question. So 18 months earlier, I had organized a similar
07:13.760 --> 07:19.200
brains versus AI competition with a previous AI called Claudico. And we couldn't beat the humans.
07:20.480 --> 07:26.400
So this time around, it was only 18 months later. And I knew that this new AI, Libratus,
07:26.400 --> 07:32.320
was way stronger. But it's hard to say how you'll do against the top humans before you try.
07:32.320 --> 07:39.040
So I thought we had about a 50 50 shot. And the international betting sites put us as a
07:39.040 --> 07:45.440
four to one or five to one underdog. So it's kind of interesting that people really believe in people
07:45.440 --> 07:50.560
over AI. People don't just have overconfidence in themselves,
07:50.560 --> 07:54.800
but they have overconfidence in other people as well, compared to the performance of AI.
07:54.800 --> 08:00.800
And yeah, so we were a four to one or five to one underdog. And even after three days of beating
08:00.800 --> 08:05.680
the humans in a row, we were still 50 50 on the international betting sites.
08:05.680 --> 08:11.120
Do you think there's something special and magical about poker in the way people think about it?
08:11.120 --> 08:19.120
In the sense that, I mean, even for chess, there are no Hollywood movies. Poker is the star
08:19.120 --> 08:29.760
of many movies. And there's this feeling that certain human facial expressions and body language,
08:29.760 --> 08:34.880
eye movement, all these tells are critical to poker. Like you can look into somebody's soul
08:34.880 --> 08:41.760
and understand their betting strategy and so on. So do you think that is possibly
08:41.760 --> 08:48.000
why people have confidence that humans will outperform? Because AI systems cannot, in this
08:48.000 --> 08:53.360
construct, perceive these kinds of tells; they're only looking at betting patterns and
08:55.600 --> 09:03.920
nothing else, the betting patterns and statistics. So what's more important to you, if you step back
09:03.920 --> 09:12.960
and look at human players, human versus human? What's the role of these tells, of these ideas that we romanticize?
09:12.960 --> 09:21.760
Yeah, so I'll split it into two parts. So one is why do humans trust humans more than AI and
09:21.760 --> 09:27.040
have overconfidence in humans? I think that's not really related to the tell question. It's
09:27.040 --> 09:32.480
just that they've seen these top players how good they are and they're really fantastic. So it's just
09:33.120 --> 09:39.200
hard to believe that the AI could beat them. So I think that's where that comes from. And that's
09:39.200 --> 09:44.080
actually maybe a more general lesson about AI that until you've seen it overperform a human,
09:44.080 --> 09:52.080
it's hard to believe that it could. But then the tells, a lot of these top players, they're so good
09:52.080 --> 10:00.320
at hiding tells that among the top players, it's actually not really worth it for them to invest a
10:00.320 --> 10:06.560
lot of effort trying to find tells in each other because they're so good at hiding them. So yes,
10:06.560 --> 10:12.960
at the kind of Friday evening game, tells are going to be a huge thing. You can read other people
10:12.960 --> 10:17.760
and if you're a good reader, you'll read them like an open book. But at the top levels of poker,
10:17.760 --> 10:23.120
no, the tells become a much smaller and smaller aspect of the game as you go to the top
10:23.120 --> 10:32.960
levels. The amount of strategies, the amount of possible actions, is very large, 10 to the power
10:32.960 --> 10:39.280
of 100 plus. So there has to be some, I've read a few of the related papers,
10:40.400 --> 10:46.880
it has to form some abstractions of various hands and actions. So what kind of abstractions are
10:46.880 --> 10:52.800
effective for the game of poker? Yeah, so you're exactly right. So when you go from a game tree
10:52.800 --> 10:59.280
that's 10 to the 161, especially in an imperfect information game, it's way too large to solve
10:59.280 --> 11:06.160
directly. Even with our fastest equilibrium finding algorithms. So you want to abstract it
11:06.160 --> 11:13.840
first. And abstraction in games is much trickier than abstraction in MDPs or other single agent
11:13.840 --> 11:19.440
settings, because you have these abstraction pathologies, where if I have a finer grained abstraction,
11:20.560 --> 11:25.120
the strategy that I can get from that for the real game might actually be worse than the strategy
11:25.120 --> 11:29.440
I can get from the coarse grained abstraction. So you have to be very careful. Now, the kinds
11:29.440 --> 11:34.880
of abstractions, just to zoom out, we're talking about: there's the hand abstraction, and then there
11:34.880 --> 11:42.480
are the betting strategies. Yeah, betting actions. So there's information abstraction, to talk about
11:42.480 --> 11:47.760
general games, information abstraction, which is the abstraction of what chance does. And this
11:47.760 --> 11:54.400
would be the cards in the case of poker. And then there's action abstraction, which is abstracting
11:54.400 --> 12:00.640
the actions of the actual players, which would be bets in the case of poker, yourself, any other
12:00.640 --> 12:10.240
players, yes, yourself and other players. And for information abstraction, we were completely
12:10.240 --> 12:16.640
automated. So these are algorithms that do what we call potential aware abstraction,
12:16.640 --> 12:20.880
where we don't just look at the value of the hand, but also how it might materialize into
12:20.880 --> 12:26.320
good or bad hands over time. And it's a certain kind of bottom up process with integer programming
12:26.320 --> 12:32.640
there and clustering and various other aspects of how you build this abstraction.
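To make the idea concrete, here is a hedged sketch of information abstraction as clustering: hands are grouped by the distribution of equities they might reach on later streets, the "potential aware" idea, rather than by current hand strength alone. The histograms are randomly generated stand-ins, and plain k-means is a simplification; the published method compares such histograms with earth mover's distance:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_hands, n_bins = 1000, 10

# Row i = a made-up histogram over hand i's possible future equities.
# A real pipeline would compute these by rolling out the remaining deck.
equity_histograms = rng.dirichlet(alpha=np.ones(n_bins), size=n_hands)

# Bucket hands whose future prospects look alike; the abstract game is then
# solved over buckets instead of raw hands.
n_buckets = 50
kmeans = KMeans(n_clusters=n_buckets, n_init=10, random_state=0)
bucket_of_hand = kmeans.fit_predict(equity_histograms)
print("hand 0 lands in bucket", bucket_of_hand[0])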
12:32.640 --> 12:40.720
And then in the action abstraction, it's largely based on how humans and other AIs have played
12:40.720 --> 12:46.560
this game in the past. But in the beginning, we actually used an automated action abstraction
12:46.560 --> 12:53.840
technology, which is provably convergent, that it finds the optimal combination of bet sizes,
12:53.840 --> 12:58.320
but it's not very scalable. So we couldn't use it for the whole game, but we use it for the first
12:58.320 --> 13:03.760
couple of betting actions. So what's more important, the strength of the hand, so the
13:03.760 --> 13:12.320
information abstraction, or how you play them, the actions? You know, the romanticized
13:12.320 --> 13:17.120
notion again, is that it doesn't matter what hands you have, that the actions, the betting,
13:18.000 --> 13:22.240
may be the way you win, no matter what hands you have. Yeah, so that's why you have to play a lot
13:22.240 --> 13:29.440
of hands, so that the role of luck gets smaller. So you could otherwise get lucky and get some good
13:29.440 --> 13:34.240
hands, and then you're going to win the match. Even with thousands of hands, you can get lucky.
13:35.120 --> 13:40.720
Because there's so much variance in No Limit Texas Hold'em, because if we both go all in,
13:40.720 --> 13:47.920
it's a huge amount of variance. So there are these massive swings in No Limit Texas Hold'em. So
13:47.920 --> 13:53.600
that's why you have to play not just thousands, but over a hundred thousand hands to get statistical
13:53.600 --> 14:00.400
significance. So let me ask this question another way. If you didn't even look at your hands,
14:01.920 --> 14:05.840
but the opponents didn't know that, how well would you be able to do?
14:05.840 --> 14:11.440
That's a good question. There's actually, I heard the story that there's this Norwegian female poker
14:11.440 --> 14:17.040
player called Annette Obrestad, who actually won a tournament by doing exactly that. But that
14:17.040 --> 14:26.160
would be extremely rare. So you cannot really play well that way. Okay, so the hands do have
14:26.160 --> 14:34.320
some role to play. So Libratus does not, as far as I understand, use learning methods,
14:34.320 --> 14:42.880
deep learning. Is there room for learning? You know, there's no reason why Libratus couldn't,
14:42.880 --> 14:48.640
you know, combine with an AlphaGo type approach for estimating the quality, for a value function estimator.
14:49.520 --> 14:55.040
What are your thoughts on this? Maybe as compared to another algorithm, which I'm not
14:55.040 --> 15:01.120
that familiar with, DeepStack, the engine that does use deep learning, where it is unclear how well
15:01.120 --> 15:05.680
it does, but nevertheless uses deep learning. So what are your thoughts about learning methods to
15:05.680 --> 15:11.520
aid in the way that Libratus plays the game of poker? Yeah, so as you said, Libratus did not
15:11.520 --> 15:17.760
use learning methods and played very well without them. Since then, actually here,
15:17.760 --> 15:25.200
we have a couple of papers on things that do use learning techniques. Excellent. So, deep learning
15:25.200 --> 15:32.560
in particular, and sort of the way you're talking about where it's learning an evaluation function.
15:33.360 --> 15:42.400
But in imperfect information games, unlike, let's say, in Go, or now also in chess and shogi,
15:42.400 --> 15:52.160
it's not sufficient to learn an evaluation for a state because the value of an information set
15:52.160 --> 16:00.080
depends not only on the exact state, but it also depends on both players' beliefs. Like if I have
16:00.080 --> 16:05.360
a bad hand, I'm much better off if the opponent thinks I have a good hand. And vice versa,
16:05.360 --> 16:12.640
if I have a good hand, I'm much better off if the opponent believes I have a bad hand. So the value
16:12.640 --> 16:18.720
of a state is not just a function of the cards. It depends on if you will, the path of play,
16:18.720 --> 16:25.440
but only to the extent that it's captured in the belief distributions. So that's why it's not as
16:25.440 --> 16:31.040
simple as it is in perfect information games. And I don't want to say it's simple there either.
16:31.040 --> 16:35.680
It's of course, very complicated computationally there too. But at least conceptually, it's very
16:35.680 --> 16:39.520
straightforward. There's a state, there's an evaluation function, you can try to learn it.
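A tiny illustration of that point, with made-up numbers: the expected value of betting a weak hand depends on the opponent's belief that you are strong, so the same cards can have several different "values":

# All numbers are hypothetical, purely to illustrate the belief dependence.
def ev_bluff(pot, bet, p_opponent_folds):
    # If the opponent folds we win the pot; if they call, our weak hand
    # loses the bet.
    return p_opponent_folds * pot - (1 - p_opponent_folds) * bet

for belief in (0.2, 0.5, 0.8):  # opponent's probability of folding
    print(belief, ev_bluff(pot=10, bet=10, p_opponent_folds=belief))
# Same cards, same pot, three different values -- which is why a plain
# state evaluation function is not enough in imperfect information games.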
16:40.080 --> 16:48.640
Here, you have to do something more. And what we do is in one of these papers, we're looking at
16:48.640 --> 16:54.000
allowing the opponent to actually take different strategies at the leaves of the
16:54.000 --> 17:01.600
search tree, if you will. And that is a different way of doing it. And it doesn't assume therefore
17:01.600 --> 17:07.200
a particular way that the opponent plays. But it allows the opponent to choose from a set of
17:07.200 --> 17:14.560
different continuation strategies. And that forces us to not be too optimistic in a lookahead
17:14.560 --> 17:20.720
search. And that's that's one way you can do sound lookahead search in imperfect information
17:20.720 --> 17:26.400
games, which is very difficult. And you were asking about DeepStack; what they
17:26.400 --> 17:30.960
did was very different from what we do, either in Libratus or in this new work.
17:31.920 --> 17:37.120
They were generally randomly generating various situations in the game. Then they were doing
17:37.120 --> 17:42.960
the lookahead from there to the end of the game, as if that was the start of a different game. And
17:42.960 --> 17:48.800
then they were using deep learning to learn those values of those states. But the states were not
17:48.800 --> 17:54.160
just the physical states, they include the belief distributions. When you talk about lookahead,
17:55.680 --> 18:01.600
with DeepStack, or with Libratus, does it mean considering every possibility that the game can
18:01.600 --> 18:08.160
evolve? Are we talking about this sort of extreme exponential growth of a tree? Yes. So we're
18:08.160 --> 18:15.760
talking about exactly that. Much like you do in alpha beta search or Monte Carlo tree search,
18:15.760 --> 18:19.920
but with different techniques. So there's a different search algorithm. And then we have to
18:19.920 --> 18:24.400
deal with the leaves differently. So if you think about what Libratus did, we didn't have to
18:24.400 --> 18:30.880
worry about this, because we only did it at the end of the game. So we would always terminate into a
18:30.880 --> 18:36.720
real situation. And we would know what the payout is. It didn't do these depth limited lookaheads.
18:36.720 --> 18:42.000
But now in this new paper, which is called depth limited, I think it's called depth limited search
18:42.000 --> 18:47.360
for imperfect information games, we can actually do sound depth limited lookaheads. So we can
18:47.360 --> 18:52.080
actually start to do the lookahead from the beginning of the game on, because that's too
18:52.080 --> 18:57.680
complicated to do for this whole long game. So in Libratus, we were just doing it for the end.
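As a rough sketch of the leaf treatment just described, here is a deliberately simplified, perfect-information-style caricature in Python: at the depth limit, the opponent picks whichever continuation strategy is worst for us, which keeps the lookahead from being too optimistic. All the helper functions are hypothetical stand-ins, and the real algorithm operates on the imperfect information game, not a plain minimax tree:

def leaf_value(state, continuation_values):
    # One value function per opponent continuation strategy; the opponent
    # chooses among them adversarially, so we take the minimum.
    return min(v(state) for v in continuation_values)

def depth_limited_value(state, depth, continuation_values,
                        is_terminal, payoff, actions, step, our_turn):
    if is_terminal(state):
        return payoff(state)  # a real payout, as at Libratus's endgame leaves
    if depth == 0:
        return leaf_value(state, continuation_values)
    children = [depth_limited_value(step(state, a), depth - 1,
                                    continuation_values, is_terminal, payoff,
                                    actions, step, not our_turn)
                for a in actions(state)]
    return max(children) if our_turn else min(children)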
18:57.680 --> 19:03.040
So then, on the other side, there's this belief distribution. So is it explicitly modeled
19:03.040 --> 19:10.800
what kind of beliefs the opponent might have? Yeah, it is explicitly modeled, but it's not
19:10.800 --> 19:18.720
assumed. The beliefs are actually output, not input. Of course, the starting beliefs are input,
19:18.720 --> 19:23.440
but they just fall from the rules of the game, because we know that the dealer deals uniformly
19:23.440 --> 19:29.600
from the deck. So I know that every pair of cards that you might have is equally likely.
19:29.600 --> 19:33.600
I know that for a fact, that just follows from the rules of the game. Of course,
19:33.600 --> 19:38.320
except the two cards that I have, I know you don't have those. You have to take that into
19:38.320 --> 19:42.480
account. That's called card removal, and that's very important. Is the dealing always coming
19:42.480 --> 19:50.000
from a single deck in heads up? Yes. So you can assume single deck. So you know that if I have
19:50.000 --> 19:55.680
the ace of spades, I know you don't have an ace of spades.
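A minimal sketch of that starting belief with card removal, in Python; the hole cards are hypothetical:

from itertools import combinations

ranks = "23456789TJQKA"
suits = "cdhs"
deck = [r + s for r in ranks for s in suits]

my_hand = {"As", "Kd"}  # hypothetical hole cards
remaining = [c for c in deck if c not in my_hand]

# Before any betting, every opponent hand not blocked by my cards is
# equally likely -- that follows from the rules of the game alone.
opponent_hands = list(combinations(remaining, 2))
uniform_belief = {hand: 1 / len(opponent_hands) for hand in opponent_hands}

print(len(opponent_hands))  # C(50, 2) = 1225, versus C(52, 2) = 1326
# Play then updates this distribution via Bayes' rule.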
19:55.680 --> 20:02.640
So in the beginning, your belief is basically the fact that it's a fair dealing of hands. But how do you start to adjust that belief?
20:02.640 --> 20:08.960
Well, that's where the beauty of game theory comes in. So Nash equilibrium, which John Nash
20:08.960 --> 20:14.960
introduced in 1950, introduces what rational play is when you have more than one player.
20:15.920 --> 20:21.200
And these are pairs of strategies where strategies are contingency plans, one for each player.
20:21.200 --> 20:27.920
So that neither player wants to deviate to a different strategy, given that the other
20:27.920 --> 20:34.800
doesn't deviate. But as a side effect, you get the beliefs from Bayes' rule. So Nash equilibrium
20:34.800 --> 20:39.760
really isn't just about strategies in these imperfect information games. Nash equilibrium doesn't
20:39.760 --> 20:47.920
just define strategies. It also defines beliefs for both of us, and it defines beliefs for each state.
20:47.920 --> 20:54.720
So at each state, or what they call information sets, at each information set in the game,
20:54.720 --> 20:59.840
there's a set of different states that we might be in, but I don't know which one we're in.
21:00.960 --> 21:05.440
Nash equilibrium tells me exactly what is the probability distribution over those real
21:05.440 --> 21:11.360
world states in my mind. How does Nash equilibrium give you that distribution? Well,
21:11.360 --> 21:17.280
I'll do a simple example. So you know the game Rock Paper Scissors? So we can draw it
21:17.280 --> 21:23.600
as player one moves first, and then player two moves. But of course, it's important that player
21:23.600 --> 21:28.560
two doesn't know what player one moved. Otherwise player two would win every time. So we can draw
21:28.560 --> 21:33.440
that as an information set where player one makes one of three moves first. And then there's an
21:33.440 --> 21:40.480
information set for player two. So player two doesn't know which of those nodes the world is in.
21:40.480 --> 21:46.400
But once we know the strategy for player one, Nash equilibrium will say that you play one third
21:46.400 --> 21:52.160
rock, one third paper, one third scissors. From that I can derive my beliefs on the information
21:52.160 --> 21:58.480
set, that they're one third, one third, one third. So Bayes' rule gives you that.
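A worked version of that rock paper scissors example, as a small Python check:

import numpy as np

# Row player's payoffs; rows and columns are rock, paper, scissors.
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]])

uniform = np.array([1/3, 1/3, 1/3])

# Against the uniform mix, every pure action earns the same payoff (zero),
# so no deviation helps: the uniform mix is a Nash equilibrium.
print(A @ uniform)  # [0. 0. 0.]

# Player two's belief at their information set: player one mixes uniformly
# and chance plays no role, so Bayes' rule puts 1/3 on each node.
belief = uniform / uniform.sum()
print(belief)  # [0.333... 0.333... 0.333...]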
21:58.480 --> 22:06.800
But is that specific to a particular player? Or is it something you quickly update? No, the game theory
22:06.800 --> 22:12.560
isn't really player specific. So that's also why we don't need any data. We don't need any history
22:12.560 --> 22:17.840
of how these particular humans played in the past, or how any AI or human had played before. It's all
22:17.840 --> 22:24.160
about rationality. So the AI just thinks about, what would a rational opponent do?
22:24.720 --> 22:30.960
And what would I do if I were rational? That's the idea of game theory.
22:30.960 --> 22:38.080
So it's really a data free opponent free approach. So it comes from the design of the game as opposed
22:38.080 --> 22:43.600
to the design of the player. Exactly. There's no opponent modeling per se. I mean, we've done
22:43.600 --> 22:47.760
some work on combining opponent modeling with game theory. So you can exploit weak players
22:47.760 --> 22:53.440
even more. But that's another strand, and in Libratus, we didn't turn that on, because I decided that
22:53.440 --> 22:59.520
these players are too good. And when you start to exploit an opponent, you typically open yourself
22:59.520 --> 23:04.720
up to exploitation. And these guys have so few holes to exploit, and they're the world's leading
23:04.720 --> 23:09.120
experts in counter exploitation. So I decided that we're not going to turn that stuff on.
23:09.120 --> 23:14.400
Actually, I saw a few of your papers; exploiting opponents sounds very interesting to explore.
23:15.600 --> 23:19.120
Do you think there's room for exploitation generally, outside of Libratus?
23:19.840 --> 23:27.840
Are there, say, differences between people that could be exploited? Maybe not just in poker,
23:27.840 --> 23:33.360
but in general interactions and negotiations all these other domains that you're considering?
23:33.360 --> 23:39.760
Yeah, definitely. We've done some work on that. And I really like the work that hybridizes the two.
23:39.760 --> 23:45.200
So you figure out what would a rational opponent do. And by the way, that's safe in these zero
23:45.200 --> 23:49.440
sum games, two player zero sum games, because if the opponent does something irrational,
23:49.440 --> 23:56.320
yes, it might throw off my beliefs. But the amount that the player can gain by throwing
23:56.320 --> 24:04.240
off my beliefs is always less than they lose by playing poorly. So it's safe. But still,
24:04.240 --> 24:09.280
if somebody's weak as a player, you might want to play differently to exploit them more.
24:10.160 --> 24:14.560
So you can think about it this way, a game theoretic strategy is unbeatable,
24:15.600 --> 24:22.720
but it doesn't maximally beat the other opponent. So the winnings per hand might be better
24:22.720 --> 24:27.120
with a different strategy. And the hybrid is that you start from a game theoretic approach,
24:27.120 --> 24:33.040
and then as you gain data about the opponent in certain parts of the game tree, then in those
24:33.040 --> 24:39.280
parts of the game tree, you start to tweak your strategy more and more towards exploitation,
24:39.280 --> 24:44.160
while still staying fairly close to the game theoretic strategy so as to not open yourself
24:44.160 --> 24:53.600
up to exploitation too much.
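A hedged sketch of that hybrid in Python, with made-up distributions: start from the equilibrium strategy, and shade toward a best response against the modeled opponent only as evidence accumulates, capping the shift to limit your own exploitability:

import numpy as np

equilibrium = np.array([1/3, 1/3, 1/3])    # game theoretic baseline
best_response = np.array([0.0, 1.0, 0.0])  # exploits a modeled leak (assumed)

def hybrid(n_observations, cap=0.5, scale=100):
    # Trust the opponent model more as data accumulates, up to a cap.
    w = min(cap, n_observations / scale)
    return (1 - w) * equilibrium + w * best_response

for n in (0, 20, 200):
    print(n, hybrid(n))
# With no data you play the unbeatable baseline; with data you tilt toward
# exploitation without straying too far from it.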
24:53.600 --> 24:59.520
How do you do that? Do you try to vary your strategies, make it unpredictable? Like, what is it, tit for tat strategies in Prisoner's Dilemma?
25:00.560 --> 25:07.440
Well, that's for repeated games, the simple Prisoner's Dilemma played as a repeated game. But even there,
25:07.440 --> 25:13.120
there's no proof that says that that's the best thing. But experimentally, it actually does well.
25:13.120 --> 25:17.360
So what kind of games are there, first of all? I don't know if this is something that you could
25:17.360 --> 25:21.760
just summarize. There's perfect information games with all the information on the table.
25:22.320 --> 25:27.680
There is imperfect information games. There's repeated games that you play over and over.
25:28.480 --> 25:36.960
There's zero sum games. There's non zero sum games. And then there's a really important
25:36.960 --> 25:44.640
distinction you're making, two player versus more players. So what other games are there?
25:44.640 --> 25:49.920
And what's the difference, for example, with this two player game versus more players? Yeah,
25:49.920 --> 25:54.720
what are the key differences? Right. So let me start from the basics. So
25:56.320 --> 26:02.160
a repeated game is a game where the same exact game is played over and over.
26:02.160 --> 26:08.160
Then there are extensive form games, where you think about tree form, maybe with these
26:08.160 --> 26:14.000
information sets to represent incomplete information, you can have kind of repetitive
26:14.000 --> 26:18.480
interactions and even repeated games are a special case of that, by the way. But
26:19.680 --> 26:24.160
the game doesn't have to be exactly the same. It's like in sourcing auctions. Yes, we're going to
26:24.160 --> 26:29.040
see the same supply base year to year. But what I'm buying is a little different every time.
26:29.040 --> 26:34.080
And the supply base is a little different every time and so on. So it's not really repeated.
26:34.080 --> 26:39.680
So to find a purely repeated game is actually very rare in the world. So they're really a very
26:40.960 --> 26:47.360
coarse model of what's going on. Then if you move up from just repeated, simple,
26:47.360 --> 26:52.560
repeated matrix games, not all the way to extensive form games, but in between,
26:52.560 --> 26:59.360
there are stochastic games, where, you know, you can think about it like these little
26:59.360 --> 27:06.080
matrix games. And when you take an action and your opponent takes an action, they determine not which
27:06.080 --> 27:11.280
game I'm going to next, but the distribution over next games
27:11.280 --> 27:15.760
that I might be going to. So that's a stochastic game. So it goes
27:15.760 --> 27:22.160
matrix games, repeated games, stochastic games, extensive form games; that is from less to more
27:22.160 --> 27:28.080
general. And poker is an example of the last one. So it's really in the most general setting,
27:29.440 --> 27:34.720
extensive form games. And that's kind of what the AI community has been working on and being
27:34.720 --> 27:39.680
benchmarked on with this Heads Up No Limit Texas Hold'em. Can you describe extensive form games?
27:39.680 --> 27:45.600
What's the model here? So if you're familiar with the tree form, so it's really the tree form,
27:45.600 --> 27:51.280
like in chess, there's a search tree. Versus a matrix? Yeah. And the
27:51.280 --> 27:56.640
matrix is called the matrix form, or bimatrix form, or normal form game. And here you have the tree
27:56.640 --> 28:01.680
form. So you can actually do certain types of reasoning there, but you lose the information
28:02.320 --> 28:07.840
when you go to normal form. There's a certain form of equivalence, like if you go from tree
28:07.840 --> 28:13.840
form and you say every possible contingency plan is a strategy, then I can actually go back
28:13.840 --> 28:19.600
to the normal form, but I lose some information from the lack of sequentiality. Then the multiplayer
28:19.600 --> 28:29.520
versus two player distinction is an important one. So two player games in zero sum are conceptually
28:29.520 --> 28:37.040
easier and computationally easier. They're still huge, like this one. But they're conceptually
28:37.040 --> 28:42.160
easier and computationally easier. In that conceptually, you don't have to worry about
28:42.160 --> 28:47.440
which equilibrium is the other guy going to play when there are multiple, because any equilibrium
28:47.440 --> 28:52.400
strategy is the best response to any other equilibrium strategy. So I can play a different
28:52.400 --> 28:57.600
equilibrium from you and we'll still get the right values of the game. That falls apart even
28:57.600 --> 29:02.960
with two players when you have general sum games. Even without cooperation? Yes, even without cooperation.
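One way to see why the two player zero sum case just described is computationally easier: a maximin (equilibrium) strategy is just a linear program. A minimal sketch with a made-up payoff matrix, using scipy:

import numpy as np
from scipy.optimize import linprog

A = np.array([[ 3.0, -1.0, 0.0],   # made-up row player payoffs
              [-2.0,  4.0, 1.0]])
m, n = A.shape

# Variables: x (row player's mixed strategy) and v (the game value).
# Maximize v subject to (A^T x)_j >= v for every column j, sum(x) = 1, x >= 0.
c = np.zeros(m + 1)
c[-1] = -1.0                                            # minimize -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])               # v - (A^T x)_j <= 0
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])   # probabilities sum to 1
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("maximin strategy:", res.x[:m], "game value:", res.x[-1])
# Any equilibrium strategy of mine is a best response to any of yours,
# so we can each compute our own and still realize this value.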
29:02.960 --> 29:09.040
So there's a big gap from two player zero sum to two player general sum, or even to
29:09.040 --> 29:17.600
three players zero sum. That's a big gap, at least in theory. Can you maybe non mathematically
29:17.600 --> 29:23.040
provide the intuition why it all falls apart with three or more players? It seems like you should
29:23.040 --> 29:32.640
still be able to have a Nash equilibrium that's instructive, that holds. Okay, so it is true
29:32.640 --> 29:39.840
that all finite games have a Nash equilibrium. This is what John Nash actually proved.
29:40.880 --> 29:45.600
So they do have a Nash equilibrium. That's not the problem. The problem is that there can be many.
29:46.480 --> 29:52.000
And then there's a question of which equilibrium to select. So and if you select your strategy
29:52.000 --> 30:00.960
from a different equilibrium and I select mine, then what does that mean? And in these non zero
30:00.960 --> 30:07.760
sum games, we may lose some joint benefit by just being stupid; we could actually both be
30:07.760 --> 30:12.960
better off if we did something else. And in three player, you get other problems also like collusion.
30:12.960 --> 30:19.600
Like maybe you and I can gang up on a third player, and we can do radically better by colluding.
30:19.600 --> 30:25.440
So there are lots of issues that come up there. So Noam Brown, the student you worked with on this,
30:25.440 --> 30:31.120
has mentioned, I looked through the AMA on Reddit, that the ability of poker players
30:31.120 --> 30:36.320
to collaborate would make the game harder. He was asked the question, how would you make the game of
30:36.320 --> 30:42.320
poker? Or both of you were asked the question, how would you make the game of poker beyond
30:43.440 --> 30:51.440
being solvable by current AI methods? And he said that there's not many ways of making poker more
30:51.440 --> 30:59.600
difficult, but a collaboration or cooperation between players would make it extremely difficult.
30:59.600 --> 31:05.200
So can you provide the intuition behind why that is, if you agree with that idea?
31:05.200 --> 31:11.840
Yeah, so we've done a lot of work on coalitional games. And we actually have a paper here with
31:11.840 --> 31:17.040
my other student Gabriele Farina and some other collaborators at NeurIPS on that. I actually just
31:17.040 --> 31:22.080
came back from the poster session where we presented this. So when you have a collusion,
31:22.080 --> 31:29.440
it's a different problem. And it typically gets even harder then. Even the game representations,
31:29.440 --> 31:35.840
some of the game representations don't really allow good computation. So we actually introduced a new
31:35.840 --> 31:43.840
game representation for that. Is that kind of cooperation part of the model? Do you have
31:43.840 --> 31:48.720
information about the fact that other players are cooperating? Or is it just this chaos that
31:48.720 --> 31:53.760
where nothing is known? So there's some some things unknown. Can you give an example of a
31:53.760 --> 31:59.920
collusion type game? Or is it usually? So like bridge. Yeah, so think about bridge. It's like
31:59.920 --> 32:06.960
when you and I are on a team, our payoffs are the same. The problem is that we can't talk. So when
32:06.960 --> 32:13.360
I get my cards, I can't whisper to you what my cards are. That would not be allowed. So we have
32:13.360 --> 32:20.480
to somehow coordinate our strategies ahead of time. And only ahead of time. And then there's
32:20.480 --> 32:25.920
certain signals we can talk about. But they have to be such that the other team also understands
32:25.920 --> 32:31.920
that. So that's an example where the coordination is already built into the rules
32:31.920 --> 32:38.400
of the game. But in many other situations, like auctions or negotiations or diplomatic
32:38.400 --> 32:44.320
relationships, poker, it's not really built in. But it still can be very helpful for the
32:44.320 --> 32:51.440
colluders. I've read you write somewhere that in negotiations, you come to the table with a prior,
32:52.640 --> 32:58.160
like a strategy of what you're willing to do and not willing to do, those kinds of things.
32:58.160 --> 33:04.320
So, now moving away from poker, moving beyond poker into other applications
33:04.320 --> 33:10.960
like negotiations, how do you start applying this to other domains, even real world
33:10.960 --> 33:15.360
domains that you've worked on? Yeah, I actually have two startup companies doing exactly that.
33:15.360 --> 33:21.520
One is called Strategic Machine. And that's for kind of business applications, gaming, sports,
33:21.520 --> 33:29.040
all sorts of things like that. Any applications of this to business and to sports and to gaming,
33:29.040 --> 33:34.800
to various types of things in finance, electricity markets, and so on. And the other
33:34.800 --> 33:41.360
is called Strategy Robot, where we are taking these to military security, cybersecurity,
33:41.360 --> 33:45.760
and intelligence applications. I think you worked a little bit in
33:47.920 --> 33:54.560
how do you put it, advertisement, sort of suggesting ads kind of thing.
33:54.560 --> 33:59.040
Yeah, that's another company, Optimized Markets. But that's much more about a
33:59.040 --> 34:03.840
combinatorial market and optimization based technology. That's not using these
34:04.720 --> 34:12.400
game theoretic reasoning technologies. I see. Okay, so at a high level, how do you think about
34:13.040 --> 34:20.320
our ability to use game theoretic concepts to model human behavior? Do you think human behavior is
34:20.320 --> 34:25.680
amenable to this kind of modeling? So outside of the poker games and where have you seen it
34:25.680 --> 34:32.000
done successfully in your work? I'm not sure the goal really is modeling humans.
34:33.520 --> 34:40.240
Like for example, if I'm playing a zero sum game, I don't really care that the opponent is actually
34:40.240 --> 34:45.680
following my model of rational behavior. Because if they're not, that's even better for me.
34:45.680 --> 34:55.440
All right. So, with the opponents in games, the prerequisite is that you've formalized
34:56.240 --> 35:02.720
the interaction in some way that can be amenable to analysis. I mean, you've done this amazing work
35:02.720 --> 35:12.160
with mechanism design, designing games that have certain outcomes. But so I'll tell you an example
35:12.160 --> 35:19.360
from my world of autonomous vehicles. We're studying pedestrians, and pedestrians and cars
35:19.360 --> 35:25.040
negotiate in this nonverbal communication. There's this weird dance of tension where
35:25.760 --> 35:30.400
pedestrians are basically saying, I trust that you won't kill me. And so as a jaywalker,
35:30.400 --> 35:34.560
I will step onto the road even though I'm breaking the law and there's this tension.
35:34.560 --> 35:40.480
And the question is, we really don't know how to model that well in trying to model intent.
35:40.480 --> 35:46.240
And so people sometimes bring up ideas of game theory and so on. Do you think that aspect
35:47.120 --> 35:53.520
of human behavior can use these kinds of imperfect information approaches, modeling?
35:54.800 --> 36:00.800
How do you start to attack a problem like that when you don't even know how to design the game
36:00.800 --> 36:06.640
to describe the situation in order to solve it? Okay, so I haven't really thought about jaywalking.
36:06.640 --> 36:12.080
But one thing that I think could be a good application in autonomous vehicles is the
36:12.080 --> 36:18.240
following. So let's say that you have fleets of autonomous cars operating by different companies.
36:18.240 --> 36:23.520
So maybe here's the Waymo fleet and here's the Uber fleet. If you think about the rules of the road,
36:24.160 --> 36:29.920
they define certain legal rules, but that still leaves a huge strategy space open.
36:29.920 --> 36:33.760
Like as a simple example, when cars merge, you know, how humans merge, you know,
36:33.760 --> 36:40.800
they slow down and look at each other and try to merge. Wouldn't it be better if these situations
36:40.800 --> 36:46.240
would already be prenegotiated so we can actually merge at full speed and we know that this is the
36:46.240 --> 36:51.680
situation, this is how we do it and it's all going to be faster. But there are way too many
36:51.680 --> 36:57.600
situations to negotiate manually. So you could use automated negotiation. This is the idea at least.
36:57.600 --> 37:04.160
You could use automated negotiation to negotiate all of these situations or many of them in advance.
37:04.160 --> 37:09.040
And of course, it might be that, hey, maybe you're not going to always let me go first.
37:09.040 --> 37:13.520
Maybe you said, okay, well, in these situations, I'll let you go first. But in exchange, you're
37:13.520 --> 37:18.240
going to let me go first in these other situations. So it's this huge
37:18.240 --> 37:24.240
combinatorial negotiation. And do you think there's room in that example of merging to
37:24.240 --> 37:28.080
model this whole situation as an imperfect information game? Or do you really want to
37:28.080 --> 37:33.520
consider it to be a perfect information game? That's a good question. Do you
37:33.520 --> 37:41.120
pay the price of assuming that you don't know everything? Yeah, I don't know. It's certainly
37:41.120 --> 37:47.040
much easier. Games with perfect information are much easier. So if you can get away with it,
37:48.240 --> 37:52.960
you should. But if the real situation is of imperfect information, then you're going to
37:52.960 --> 37:58.480
have to deal with imperfect information. Great. So what lessons have you learned from the annual
37:58.480 --> 38:04.560
computer poker competition? An incredible accomplishment of AI. You look at the history
38:04.560 --> 38:12.240
of Deep Blue, AlphaGo, these kinds of moments when AI stepped up, in an engineering effort and a
38:12.240 --> 38:18.240
scientific effort combined to beat the best human player. So what do you take away from
38:18.240 --> 38:23.200
this whole experience? What have you learned about designing AI systems that play these kinds
38:23.200 --> 38:30.000
of games? And what does that mean for AI in general, for the future of AI development?
38:30.720 --> 38:34.160
Yeah, so that's a good question. So there's so much to say about it.
38:35.280 --> 38:40.320
I do like this type of performance oriented research, although in my group, we go all the
38:40.320 --> 38:46.160
way from like idea to theory to experiments to big system building to commercialization. So we
38:46.160 --> 38:52.560
span that spectrum. But I think that in a lot of situations in AI, you really have to build the
38:52.560 --> 38:58.400
big systems and evaluate them at scale before you know what works and doesn't. And we've seen
38:58.400 --> 39:03.280
that in the computational game theory community, that there are a lot of techniques that look good
39:03.280 --> 39:08.640
in the small, but then they cease to look good in the large. And we've also seen that there are a
39:08.640 --> 39:15.600
lot of techniques that look superior in theory. And I really mean in terms of convergence rates,
39:15.600 --> 39:20.720
better, like first order methods, which have better convergence rates than the CFR based algorithms,
39:20.720 --> 39:26.080
yet the CFR based algorithms are the fastest in practice. So it really tells me that you have
39:26.080 --> 39:32.400
to test these in reality, the theory isn't tight enough, if you will, to tell you which
39:32.400 --> 39:38.480
algorithms are better than the others. And you have to look at these things in the large,
39:38.480 --> 39:43.600
because any sort of projections you do from the small can at least in this domain be very misleading.
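Since CFR comes up here, a minimal sketch of regret matching, the update at CFR's core, run in self-play on rock paper scissors; real CFR traverses the game tree and keeps regrets per information set, so this is only the one-matrix skeleton:

import numpy as np

A = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])  # RPS payoffs

def regret_matching(regrets):
    positive = np.maximum(regrets, 0)
    total = positive.sum()
    return positive / total if total > 0 else np.full(len(regrets), 1 / 3)

regrets = np.array([1.0, 0.0, 0.0])  # small asymmetric start so play moves
strategy_sum = np.zeros(3)
for _ in range(10_000):
    s = regret_matching(regrets)
    strategy_sum += s
    action_payoffs = A @ s              # each pure action vs. the current mix
    regrets += action_payoffs - s @ action_payoffs  # instantaneous regrets

print(strategy_sum / strategy_sum.sum())  # time average -> [1/3, 1/3, 1/3]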
39:43.600 --> 39:49.040
So that's kind of from a kind of science and engineering perspective, from a personal perspective,
39:49.040 --> 39:55.200
it's been just a wild experience in that with the first poker competition, the first
39:56.160 --> 40:00.640
brains versus AI man machine poker competition that we organized. There had been, by the way,
40:00.640 --> 40:04.880
for other poker games, there had been previous competitions, but this was for heads up no limit,
40:04.880 --> 40:11.200
this was the first. And I probably became the most hated person in the world of poker. And I
40:11.200 --> 40:18.400
didn't mean to be. Why is that? For cracking the game? Yeah, a lot of people
40:18.400 --> 40:24.160
felt that it was a real threat to the whole game, the whole existence of the game. If AI becomes
40:24.160 --> 40:29.680
better than humans, people would be scared to play poker, because there are these superhuman
40:29.680 --> 40:35.120
AIs running around taking their money and, you know, all of that. So it was really aggressive.
40:35.120 --> 40:40.640
Interesting. The comments were super aggressive. I got everything just short of death threats.
40:42.000 --> 40:45.760
Do you think the same was true for chess? Because right now, they just completed the
40:45.760 --> 40:50.720
world championships in chess, and humans just started ignoring the fact that there's AI systems
40:50.720 --> 40:55.360
now that outperform humans, and they still enjoy the game; it's still a beautiful game.
40:55.360 --> 41:00.160
That's what I think. And I think the same thing happened in poker. And so I didn't
41:00.160 --> 41:03.760
think of myself as somebody who was going to kill the game. And I don't think I did.
41:03.760 --> 41:07.360
I've really learned to love this game. I wasn't a poker player before, but
41:07.360 --> 41:12.400
I've learned so many nuances about it from these AIs. And they've really changed how the game is played,
41:12.400 --> 41:17.600
by the way. So they have these very Martian ways of playing poker. And the top humans are now
41:17.600 --> 41:24.880
incorporating those types of strategies into their own play. So if anything, to me, our work has made
41:25.600 --> 41:31.760
poker a richer, more interesting game for humans to play, not something that is going to steer
41:31.760 --> 41:36.240
humans away from it entirely. Just a quick comment on something you said, which is,
41:37.440 --> 41:44.800
if I may say so, is a little bit rare in academia sometimes. It's pretty brave to put your ideas
41:44.800 --> 41:50.000
to the test in the way you described, saying that sometimes good ideas don't work when you actually
41:50.560 --> 41:57.600
try to apply them at scale. So where does that come from? I mean, if you could give advice to
41:57.600 --> 42:04.000
people, what drives you in that sense? Were you always this way? I mean, it takes a brave person,
42:04.000 --> 42:09.280
I guess is what I'm saying, to test their ideas and to see if this thing actually works against human
42:09.840 --> 42:14.000
top human players and so on. I don't know about brave, but it takes a lot of work.
42:14.800 --> 42:20.960
It takes a lot of work and a lot of time to organize, to make something big and to organize
42:20.960 --> 42:26.080
an event and stuff like that. And what drives you in that effort? Because you could still,
42:26.080 --> 42:31.280
I would argue, get a Best Paper Award at NIPS as you did in 17 without doing this.
42:31.280 --> 42:32.160
That's right, yes.
42:34.400 --> 42:40.960
So in general, I believe it's very important to do things in the real world and at scale.
42:41.600 --> 42:48.320
And that's really where the proof of the pudding is, if you will. That's where it is.
42:48.320 --> 42:54.720
In this particular case, it was kind of a competition between different groups.
42:54.720 --> 43:00.400
And for many years, as to who can be the first one to beat the top humans at heads up,
43:00.400 --> 43:09.440
no limit Texas hold'em. So it became kind of like a competition of who can get there.
43:09.440 --> 43:13.840
Yeah, so a little friendly competition could do wonders for progress.
43:13.840 --> 43:14.960
Yes, absolutely.
43:16.240 --> 43:21.440
So the topic of mechanism design, which is really interesting, also kind of new to me,
43:21.440 --> 43:27.440
except as an observer of, I don't know, politics; I'm an observer of mechanisms,
43:27.440 --> 43:32.960
but you have a paper on automated mechanism design that I quickly read.
43:33.840 --> 43:36.960
So mechanism design is designing the rules of the game,
43:37.760 --> 43:44.400
so you get a certain desirable outcome. And you have this work on doing so in an automatic
43:44.400 --> 43:49.440
fashion as opposed to fine tuning it. So what have you learned from those efforts?
43:49.440 --> 43:57.040
If you look, say, I don't know, at something complex like our political system. Can we design our
43:57.040 --> 44:04.480
political system, in an automated fashion, to have outcomes that we want? Can we design
44:04.480 --> 44:11.680
something like traffic lights to be smart, where it gets outcomes that we want?
44:11.680 --> 44:14.880
So what are the lessons that you draw from that work?
44:14.880 --> 44:19.280
Yeah, so I still very much believe in the automated mechanism design direction.
44:19.280 --> 44:19.520
Yes.
44:20.640 --> 44:26.400
But it's not a panacea. There are impossibility results in mechanism design,
44:26.400 --> 44:32.800
saying that there is no mechanism that accomplishes objective X in class C.
44:33.840 --> 44:40.000
So there's no way, using any mechanism design tools, manual or automated,
44:40.800 --> 44:42.720
to do certain things in mechanism design.
44:42.720 --> 44:46.960
Can you describe that again? So meaning there, it's impossible to achieve that?
44:46.960 --> 44:55.120
Yeah, it's impossible. So these are not statements about human ingenuity,
44:55.120 --> 45:00.320
who might come up with something smart. These are proofs that if you want to accomplish properties
45:00.320 --> 45:06.080
X in class C, that is not doable with any mechanism. The good thing about automated
45:06.080 --> 45:12.800
mechanism design is that we're not really designing for a class, we're designing for a specific setting
45:12.800 --> 45:18.800
at a time. So even if there's an impossibility result for the whole class, it just doesn't
45:18.800 --> 45:23.520
mean that all of the cases in the class are impossible, it just means that some of the
45:23.520 --> 45:28.800
cases are impossible. So we can actually carve these islands of possibility within these
45:28.800 --> 45:34.640
impossible classes. And we've actually done that. So one of the famous results in mechanism
45:34.640 --> 45:40.640
design is the Myerson-Satterthwaite theorem, by Roger Myerson and Mark Satterthwaite, from 1983.
45:40.640 --> 45:45.760
So it's an impossibility of efficient trade under imperfect information. We show that
45:46.880 --> 45:50.480
you can in many settings avoid that and get efficient trade anyway.
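To give a feel for automated mechanism design as search, here is a hedged toy instance, not the actual algorithms from the papers: one item, two bidders with valuations in {1, 2} under a uniform prior, brute forcing the small space of deterministic outcome rules for one that is dominant strategy truthful and maximizes expected revenue:

from itertools import product

VALUES = (1, 2)
PROFILES = list(product(VALUES, VALUES))  # possible bid profiles
# Per-profile outcome: (winner, payment); winner 0 means keep the item.
OUTCOMES = [(0, 0)] + [(w, p) for w in (1, 2) for p in (0, 1, 2)]

def utility(outcome, bidder, value):
    winner, payment = outcome
    return value - payment if winner == bidder else 0

def truthful(mech):
    # Dominant strategy incentive compatibility plus individual rationality:
    # no bidder ever gains by misreporting, and honesty never loses money.
    for bidder in (1, 2):
        for true_v in VALUES:
            for other in VALUES:
                def profile(bid):
                    return (bid, other) if bidder == 1 else (other, bid)
                honest = utility(mech[profile(true_v)], bidder, true_v)
                if honest < 0:
                    return False
                for lie in VALUES:
                    if utility(mech[profile(lie)], bidder, true_v) > honest:
                        return False
    return True

def expected_revenue(mech):
    return sum(mech[p][1] for p in PROFILES) / len(PROFILES)

candidates = (dict(zip(PROFILES, outs))
              for outs in product(OUTCOMES, repeat=len(PROFILES)))
best = max((m for m in candidates if truthful(m)), key=expected_revenue)
print(expected_revenue(best), best)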
45:51.360 --> 45:56.640
Depending on how you design the game? Depending on how you design the game. And of course,
45:56.640 --> 46:03.760
it doesn't in any way contradict the impossibility result. The impossibility result is still there,
46:03.760 --> 46:11.200
but it just finds spots within this impossible class where in those spots you don't have the
46:11.200 --> 46:17.600
impossibility. Sorry if I'm going a bit philosophical, but what lessons do you draw for, like I
46:17.600 --> 46:23.920
mentioned, politics or human interaction, designing mechanisms outside of just
46:24.800 --> 46:27.120
these kinds of trading or auctioning or
46:27.120 --> 46:37.200
purely formal games or human interaction like a political system. Do you think it's applicable
46:37.200 --> 46:47.920
to politics or to business, to negotiations, these kinds of things, designing rules that have
46:47.920 --> 46:53.360
certain outcomes? Yeah, yeah, I do think so. Have you seen that successfully done?
46:53.360 --> 46:58.880
There hasn't really... Oh, you mean mechanism design or automated mechanism design? Automated mechanism design.
46:58.880 --> 47:07.440
So mechanism design itself has had fairly limited success so far. There are certain cases,
47:07.440 --> 47:13.600
but most of the real world situations are actually not sound from a mechanism design
47:13.600 --> 47:18.640
perspective. Even in those cases where they've been designed by very knowledgeable mechanism
47:18.640 --> 47:24.080
design people, the people are typically just taking some insights from the theory and applying
47:24.080 --> 47:29.920
those insights into the real world rather than applying the mechanisms directly. So one famous
47:29.920 --> 47:36.880
example is the FCC spectrum auctions. So I've also had a small role in that, and
47:38.560 --> 47:45.040
very good economists who know game theory have been working on that. Yet the rules that are
47:45.040 --> 47:50.960
designed in practice there, they're such that bidding truthfully is not the best strategy.
47:51.680 --> 47:57.040
Usually in mechanism design, we try to make things easy for the participants, so telling the truth
47:57.040 --> 48:02.480
is the best strategy. But even in those very high stakes auctions where you have tens of billions
48:02.480 --> 48:07.840
of dollars worth of spectrum being auctioned, truth telling is not the best strategy.
48:09.200 --> 48:14.000
And by the way, nobody knows even a single optimal bidding strategy for those auctions.
48:14.000 --> 48:17.600
What's the challenge of coming up with an optimal bid? Because there's a lot of players and there's
48:17.600 --> 48:23.440
imperfections? It's not so much that there are a lot of players, but that there are many items for sale. And these
48:23.440 --> 48:29.200
mechanisms are such that even with just two items or one item, bidding truthfully wouldn't be
48:29.200 --> 48:37.040
the best strategy. If you look at the history of AI, it's marked by seminal events, like
48:37.040 --> 48:42.160
AlphaGo beating a world champion human Go player. I would put Libratus winning at heads
48:42.160 --> 48:51.280
up no limit Texas hold'em as one such event. Thank you. And what do you think is the next such event?
48:52.400 --> 48:58.880
Whether it's in your life or broadly in the AI community, that you think might be out there
48:58.880 --> 49:04.800
that would surprise the world? So that's a great question, and I don't really know the answer. In terms
49:04.800 --> 49:12.880
of game solving, heads up no limit Texas hold'em really was the one remaining widely agreed
49:12.880 --> 49:18.400
upon benchmark. So that was the big milestone. Now, are there other things? Yeah, certainly
49:18.400 --> 49:23.440
there are, but there is not one that the community has kind of focused on. So what could
49:23.440 --> 49:29.680
be other things? There are groups working on Starcraft. There are groups working on Dota 2.
49:29.680 --> 49:36.560
These are video games. Yes, or you could have like diplomacy or Hanabi, you know, things like
49:36.560 --> 49:42.640
that. These are like recreational games, but none of them are really acknowledged as kind of the
49:42.640 --> 49:49.920
main next challenge problem, like chess or Go or heads up no limit Texas hold'em were.
49:49.920 --> 49:55.600
So I don't really know in the game solving space what is or what will be the next benchmark. I
49:55.600 --> 49:59.920
kind of hope that there will be a next benchmark because really the different groups working on
49:59.920 --> 50:06.160
the same problem really drove these application independent techniques forward very quickly
50:06.160 --> 50:11.120
over 10 years. Do you think there's an open problem that excites you as you start moving
50:11.120 --> 50:18.240
away from games into real world games, like, say, stock market trading? Yeah, so that's kind
50:18.240 --> 50:27.120
of how I am. So I am probably not going to work as hard on these recreational benchmarks.
50:27.760 --> 50:32.960
I'm doing two startups on game solving technology, Strategic Machine and Strategy Robot, and we're
50:32.960 --> 50:39.680
really interested in pushing this stuff into practice. What do you think would be really,
50:39.680 --> 50:50.400
you know, a powerful result that would be surprising? If you can say, I mean,
50:50.400 --> 50:56.880
five years, 10 years from now, something that statistically you would say is not
50:56.880 --> 51:03.200
very likely, but that a breakthrough could achieve. Yeah, so I think that overall,
51:03.200 --> 51:11.600
we're in a very different situation in game theory than we are in, let's say, machine learning. Yes.
51:11.600 --> 51:16.720
So in machine learning, it's a fairly mature technology and it's very broadly applied and
51:17.280 --> 51:22.480
has proven success in the real world. In game solving, there are almost no applications yet.
51:24.320 --> 51:29.440
We have just become superhuman, which in machine learning you could argue happened in the 90s,
51:29.440 --> 51:35.120
if not earlier, at least in supervised learning, in certain complex supervised learning
51:35.120 --> 51:40.960
applications. Now, I think the next challenge problem, I know you're not asking about it this
51:40.960 --> 51:45.360
way, you're asking about a technology breakthrough. But I think the big breakthrough is to be able
51:45.360 --> 51:50.720
to show that, hey, maybe most of, let's say, military planning or most of business strategy
51:50.720 --> 51:55.600
will actually be done strategically using computational game theory. That's what I would
51:55.600 --> 52:01.120
like to see as a next five or 10 year goal. Maybe you can explain to me again, forgive me if this
52:01.120 --> 52:07.360
is an obvious question, but machine learning methods and neural networks suffer from not
52:07.360 --> 52:12.800
being transparent, not being explainable. Game theoretic methods, Nash Equilibria,
52:12.800 --> 52:18.960
do they generally, when you see the different solutions, when you talk about military
52:12.800 --> 52:18.960
operations, once you see the strategies, do they make sense? Are they explainable, or do
52:24.640 --> 52:29.680
they suffer from the same problems as neural networks do? So that's a good question. I would say
52:30.400 --> 52:36.560
a little bit yes and no. And what I mean by that is that these game theoretic strategies,
52:36.560 --> 52:42.320
let's say a Nash equilibrium, have provable properties. So it's unlike, let's say, deep
52:42.320 --> 52:47.040
learning where you kind of cross your fingers, hopefully it'll work. And then after the fact,
52:47.040 --> 52:52.560
when you have the weights, you're still crossing your fingers and hoping it will work. Here,
52:52.560 --> 52:57.840
you know that the solution quality is there. There's provable solution quality guarantees.
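As a minimal illustration of what such a guarantee can look like, on a toy zero-sum matrix game rather than Libratus's actual machinery: for any strategy pair, the total gain available to best-responding deviators, often called exploitability, is directly computable, and it is zero exactly at a Nash equilibrium.

```python
import numpy as np

# Row player's payoffs in rock-paper-scissors (zero-sum, so column gets the negative).
A = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])

def exploitability(x, y, A):
    """Sum of best-response gains against (x, y); zero iff (x, y) is a Nash equilibrium."""
    row_gain = np.max(A @ y) - x @ A @ y   # how much the row player could gain by deviating
    col_gain = x @ A @ y - np.min(x @ A)   # how much the column player could gain by deviating
    return row_gain + col_gain

uniform = np.ones(3) / 3
lopsided = np.array([0.5, 0.3, 0.2])

print(exploitability(uniform, uniform, A))    # 0.0 -> certified Nash equilibrium
print(exploitability(lopsided, lopsided, A))  # 0.6 -> certified distance from equilibrium
```

An approximate solver can report this number as a certificate of solution quality, which is the sense in which the guarantee is provable rather than crossed fingers.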
52:58.480 --> 53:03.360
Now, that doesn't necessarily mean that the strategies are human understandable.
53:03.360 --> 53:08.560
That's a whole other problem. So I think that deep learning and computational game theory
53:08.560 --> 53:12.320
are in the same boat in that sense, that both are difficult to understand.
53:13.680 --> 53:16.160
But at least the game theoretic techniques, they have these
53:16.160 --> 53:22.800
guarantees of solution quality. So do you see business operations, strategic operations,
53:22.800 --> 53:31.440
even military operations, in the future being at least strong candidates proposed by automated
53:31.440 --> 53:39.520
systems? Do you see that? Yeah, I do. I do. But that's more of a belief than a substantiated fact.
53:39.520 --> 53:43.840
Depending on where you land on optimism or pessimism, to me, that's an
53:43.840 --> 53:52.560
exciting future, especially if there are provable things in terms of optimality. So looking into
53:52.560 --> 54:01.120
the future, there are a few folks worried, especially as you look at the game of poker,
54:01.120 --> 54:06.800
which is probably one of the last benchmarks in terms of games being solved. They worry about
54:06.800 --> 54:11.760
the future and the existential threats of artificial intelligence. So the negative impact
54:11.760 --> 54:18.800
in whatever form on society, is that something that concerns you as much? Or are you more optimistic
54:18.800 --> 54:24.640
about the positive impacts of AI? I am much more optimistic about the positive impacts.
54:24.640 --> 54:29.920
So just in my own work, what we've done so far, we run the nationwide kidney exchange.
54:29.920 --> 54:36.000
Hundreds of people are walking around alive today who otherwise would not be. And it's increased employment.
54:36.000 --> 54:42.720
You have a lot of people now running kidney exchanges and at transplant centers, interacting
54:42.720 --> 54:50.240
with the kidney exchange. You have some extra surgeons, nurses, anesthesiologists, hospitals,
54:50.240 --> 54:55.200
all of that. So employment is increasing from that and the world is becoming a better place.
54:55.200 --> 55:03.120
Another example is combinatorial sourcing auctions. We did 800 large scale combinatorial
55:03.120 --> 55:09.280
sourcing auctions from 2001 to 2010 in a previous startup of mine called CombineNet.
55:09.280 --> 55:18.480
And we increased the supply chain efficiency on that $60 billion of spend by 12.6%. So that's
55:18.480 --> 55:24.000
over $6 billion of efficiency improvement in the world. And this is not like shifting value from
55:24.000 --> 55:28.960
somebody to somebody else, just efficiency improvement, like in trucking, less empty
55:28.960 --> 55:35.040
driving. So there's less waste, less carbon footprint and so on. This is a huge positive
55:35.040 --> 55:42.080
impact in the near term. But sort of to stay in it for a little longer, because I think game theory
55:42.080 --> 55:46.720
has a role to play here. Let me actually come back on that. That's one thing. I think AI is also going
55:46.720 --> 55:52.960
to make the world much safer. So that's another aspect that often gets overlooked.
55:53.920 --> 55:58.560
Well, let me ask this question. Maybe you can speak to the safer part. So I talked to Max Tegmark
55:58.560 --> 56:03.200
and Stuart Russell, who are very concerned about existential threats of AI. And often,
56:03.200 --> 56:12.960
the concern is about value misalignment. So AI systems basically working, operating towards
56:12.960 --> 56:19.680
goals that are not the same as human civilization, human beings. So it seems like game theory has
56:19.680 --> 56:28.800
a role to play there to make sure the values are aligned with human beings. I don't know if that's
56:28.800 --> 56:36.160
how you think about it. If not, how do you think AI might help with this problem? How do you think
56:36.160 --> 56:44.800
AI might make the world safer? Yeah, I think this value misalignment is a fairly theoretical
56:44.800 --> 56:52.880
worry. And I haven't really seen it, because I do a lot of real applications and I don't see it
56:52.880 --> 56:58.080
anywhere. The closest I've seen it was the following type of mental exercise, really,
56:58.080 --> 57:03.200
where I had this argument in the late 80s when we were building these transportation optimization
57:03.200 --> 57:08.160
systems. And somebody had heard that it's a good idea to have high utilization of assets.
57:08.160 --> 57:13.840
So they told me that, hey, why don't you put that as objective? And we didn't even put it as an
57:13.840 --> 57:19.360
objective, because I just showed him that if you had that as your objective, the solution would be
57:19.360 --> 57:23.360
to load your trucks full and drive in circles. Nothing would ever get delivered. You'd have
57:23.360 --> 57:30.480
100% utilization. So yeah, I know this phenomenon. I've known this for over 30 years. But I've never
57:30.480 --> 57:35.920
seen it actually be a problem in reality. And yes, if you have the wrong objective, the AI will
57:35.920 --> 57:40.720
optimize that to the hilt. And it's going to hurt more than if some human were kind of trying to
57:40.720 --> 57:47.200
solve it in a half-baked way with some human insight, too. But I just haven't seen that
57:47.200 --> 57:51.600
materialize in practice. There's this gap that you've actually put your finger on
57:52.880 --> 57:59.760
very clearly just now between theory and reality that's very difficult to put into words, I think.
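To pin down the trucking story just described in code, a toy sketch with invented numbers: an optimizer handed "maximize utilization" as the objective happily selects the plan that delivers nothing.

```python
# Two candidate truck plans (all numbers invented for illustration).
plans = {
    "deliver": {"utilization": 0.60, "tons_delivered": 40},  # loaded 60% of miles, cargo arrives
    "circle":  {"utilization": 1.00, "tons_delivered": 0},   # drives full laps, never unloads
}

best_by_utilization = max(plans, key=lambda p: plans[p]["utilization"])
best_by_delivery = max(plans, key=lambda p: plans[p]["tons_delivered"])

print(best_by_utilization)  # "circle" -- the wrong objective, optimized to the hilt
print(best_by_delivery)     # "deliver" -- an objective aligned with what we actually want
```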
57:59.760 --> 58:06.720
It's the gap between what you can theoretically imagine, the worst possible case or even, yeah, I mean,
58:06.720 --> 58:12.720
bad cases, and what usually happens in reality. So for example, to me, maybe it's something you
58:12.720 --> 58:19.680
can comment on; I grew up in the Soviet Union. You know, there's currently
58:19.680 --> 58:28.320
10,000 nuclear weapons in the world. And for many decades, it's been theoretically surprising to me
58:28.320 --> 58:34.240
that nuclear war has not broken out. Do you think about this aspect from a game
58:34.240 --> 58:41.520
theoretic perspective in general? Why is that true? Why? In theory, you could see how things
58:41.520 --> 58:46.480
go terribly wrong. And somehow yet they have not. Yeah, how do you think about that? So I do think
58:46.480 --> 58:51.120
about that a lot. I think the biggest two threats that we're facing as mankind, one is climate
58:51.120 --> 58:57.200
change, and the other is nuclear war. So those are my main two worries that I worry about.
58:57.200 --> 59:01.840
And I've thought about trying to do something about climate
59:01.840 --> 59:07.920
change twice. Actually, for two of my startups, I've actually commissioned studies of what we
59:07.920 --> 59:12.320
could do on those things. And we didn't really find a sweet spot, but I'm still keeping an eye out
59:12.320 --> 59:17.280
on that, in case there's something where we could actually provide a market solution or optimization
59:17.280 --> 59:24.080
solution or some other technology solution to problems. At the time, for example, pollution
59:24.080 --> 59:30.000
credit markets were what we were looking at. And it was much more the lack of political will
59:30.000 --> 59:35.520
that made those markets unsuccessful, rather than bad market design. So I could go in and
59:35.520 --> 59:39.920
make a better market design. But that wouldn't really move the needle on the world very much
59:39.920 --> 59:44.800
if there's no political will and in the US, you know, the market, at least the Chicago market
59:44.800 --> 59:50.560
was just shut down, and so on. So then it doesn't really help how great your market design was.
59:50.560 --> 59:59.920
And on the nuclear side, it's more that global warming is an encroaching problem,
1:00:00.560 --> 1:00:05.680
you know, while nuclear weapons have been here; it's an obvious problem that has just been sitting there.
1:00:05.680 --> 1:00:12.240
So how do you think about what is the mechanism design there that just made everything seem stable?
1:00:12.240 --> 1:00:18.560
And are you still extremely worried? I am still extremely worried. So you probably know the simple
1:00:18.560 --> 1:00:25.280
game theory of MAD, mutually assured destruction. And it doesn't require any
1:00:25.280 --> 1:00:29.600
computation; with small matrices, you can actually convince yourself that the game is such that
1:00:29.600 --> 1:00:36.000
nobody wants to initiate. Yeah, that's a very coarse grained analysis. And it really works in
1:00:36.000 --> 1:00:40.960
a situation where you have two superpowers or small numbers of superpowers. Now things are
1:00:40.960 --> 1:00:47.920
very different. You have smaller nukes, so the threshold of initiating is lower. And you have
1:00:47.920 --> 1:00:54.800
smaller countries and non-nation-state actors who may get nukes and so on. So I think it's
1:00:54.800 --> 1:01:04.240
riskier now than it was maybe ever before. And what idea or application of AI, you've talked about
1:01:04.240 --> 1:01:09.440
a little bit, but what is the most exciting to you right now? I mean, you're here at NIPS,
1:01:09.440 --> 1:01:16.640
NeurIPS now. You have a few excellent pieces of work. But what are you thinking about for the future,
1:01:16.640 --> 1:01:20.640
with several companies you're doing? What's the most exciting thing or one of the exciting things?
1:01:21.200 --> 1:01:26.960
The number one thing for me right now is coming up with these scalable techniques for
1:01:26.960 --> 1:01:32.800
game solving and applying them in the real world. I'm still very interested in market design
1:01:32.800 --> 1:01:37.200
as well, and we're doing that in Optimized Markets. But number one
1:01:37.200 --> 1:01:42.320
right now is Strategic Machine and Strategy Robot, getting that technology out there and seeing,
1:01:42.320 --> 1:01:47.920
as you're in the trenches doing applications, what actually needs to be done, what technology
1:01:47.920 --> 1:01:52.720
gaps still need to be filled. So it's so hard to just put your feet on the table and imagine what
1:01:52.720 --> 1:01:57.440
needs to be done. But when you're actually doing real applications, the applications tell you
1:01:58.000 --> 1:02:03.040
what needs to be done. And I really enjoy that interaction. Is it a challenging process to
1:02:03.040 --> 1:02:13.520
apply some of the state of the art techniques you're working on, and having the various players in
1:02:13.520 --> 1:02:19.280
industry or the military, or people who could really benefit from it actually use it? What's
1:02:19.280 --> 1:02:23.920
that process like? You know, in autonomous vehicles, we work with automotive companies, and
1:02:23.920 --> 1:02:31.120
they in many ways are a little bit old fashioned, so it's difficult. They really want to use this
1:02:31.120 --> 1:02:37.520
technology, which clearly will have a significant benefit. But the systems aren't quite in place
1:02:37.520 --> 1:02:43.280
to easily have it integrated, in terms of data, in terms of compute, in terms of all these kinds
1:02:43.280 --> 1:02:49.200
of things. So do you, is that one of the bigger challenges that you're facing? And how do you
1:02:49.200 --> 1:02:54.000
tackle that challenge? Yeah, I think that's always a challenge, that kind of slowness and inertia,
1:02:54.000 --> 1:02:59.360
really, of "let's do things the way we've always done it." You just have to find the internal
1:02:59.360 --> 1:03:04.480
champions at the customer who understand that, hey, things can't be the same way in the future,
1:03:04.480 --> 1:03:09.120
otherwise bad things are going to happen. And in autonomous vehicles, it's actually very
1:03:09.120 --> 1:03:13.200
interesting that the car makers are doing that, and they're very traditional. But at the same time,
1:03:13.200 --> 1:03:18.320
you have tech companies who have nothing to do with cars or transportation, like Google and Baidu,
1:03:19.120 --> 1:03:25.360
really pushing on autonomous cars. I find that fascinating. Clearly, you're super excited about
1:03:25.360 --> 1:03:31.120
how actually these ideas have an impact in the world. In terms of the technology, in terms of
1:03:31.120 --> 1:03:38.320
ideas and research, are there directions that you're also excited about, whether that's
1:03:39.440 --> 1:03:43.360
some of the approaches you talked about for imperfect information games, whether it's applying
1:03:43.360 --> 1:03:46.640
deep learning to some of these problems? Is there something that you're excited about
1:03:47.360 --> 1:03:52.240
on the research side of things? Yeah, yeah, lots of different things in game solving.
1:03:52.240 --> 1:04:00.240
So solving even bigger games, games where you have hidden player
1:04:00.240 --> 1:04:06.560
actions as well. Poker is a game where really the chance actions are hidden, or some of them are
1:04:06.560 --> 1:04:14.480
hidden, but the player actions are public. Multiplayer games of various sorts, collusion,
1:04:14.480 --> 1:04:22.960
opponent exploitation, and even longer games. So games that basically go forever,
1:04:22.960 --> 1:04:29.520
but they're not repeated. So extensive form games that go forever. What would that even look
1:04:29.520 --> 1:04:33.200
like? How do you represent that? How do you solve that? What's an example of a game like that?
1:04:33.920 --> 1:04:37.680
These are some of the stochastic games that you mentioned. Let's say business strategy. So it's
1:04:37.680 --> 1:04:42.880
not just modeling, like, a particular interaction, but thinking about the business from here to
1:04:42.880 --> 1:04:50.960
eternity. I see. Or let's say military strategy. So it's not like war is going to go away.
1:04:50.960 --> 1:04:58.000
How do you think about military strategy that's going to go forever? How do you even model that?
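One textbook way to make a game that goes forever well defined, offered as a hedged sketch rather than a description of how Sandholm's group models it, is a discounted zero-sum stochastic game: each state has a stage payoff matrix, play transitions between states indefinitely, and Shapley-style value iteration converges because discounting makes the infinite horizon a contraction.

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(M):
    """Value of a zero-sum matrix game for the row (maximizing) player, via a small LP."""
    m, n = M.shape
    c = np.concatenate([np.zeros(m), [-1.0]])            # variables: row strategy x, value v; minimize -v
    A_ub = np.hstack([-M.T, np.ones((n, 1))])            # for each column j: v - sum_i x_i * M[i, j] <= 0
    A_eq = np.concatenate([np.ones(m), [0.0]])[None, :]  # x sums to 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return -res.fun

# A tiny two-state game that never ends (all numbers invented): R[s] is the stage
# payoff matrix in state s, and P[s, a, b] is the next state after joint action (a, b).
gamma = 0.9                                              # discounting makes "forever" well defined
R = np.array([[[ 1.0, -1.0], [ 0.0, 2.0]],
              [[-1.0,  0.0], [ 3.0, 1.0]]])
P = np.array([[[0, 1], [1, 0]],
              [[1, 0], [0, 1]]])

V = np.zeros(2)
for _ in range(200):                                     # Shapley's value iteration (1953)
    Q = R + gamma * V[P]                                 # today's payoff plus discounted future value
    V = np.array([matrix_game_value(Q[s]) for s in range(2)])
print(V)                                                 # converged state values of the endless game
```

In this toy model, whether a move was good can be judged against the state's backed-up game value; the hard part in real business or military settings is getting the model itself.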
1:04:58.000 --> 1:05:05.760
How do you know whether a move that you or somebody made was good? And so on. So that's
1:05:05.760 --> 1:05:11.920
kind of one direction. I'm also very interested in learning much more scalable techniques for
1:05:11.920 --> 1:05:18.560
integer programming. So we had an ICML paper this summer on that, the first automated algorithm
1:05:18.560 --> 1:05:24.400
configuration paper that has theoretical generalization guarantees. So if I see these many
1:05:24.400 --> 1:05:30.480
training examples, and I tune my algorithm in this way, it's going to have good performance
1:05:30.480 --> 1:05:35.280
on the real distribution, which I've not seen. Which is kind of interesting, given that, you know,
1:05:35.280 --> 1:05:42.560
algorithm configuration has been going on seriously now for at least 17 years. And there had not
1:05:42.560 --> 1:05:48.800
been any generalization theory before. Well, this is really exciting. And it's been a huge
1:05:48.800 --> 1:05:52.720
honor to talk to you. Thank you so much, Tuomas. Thank you for bringing Libratus to the world
1:05:52.720 --> 1:06:07.440
and all the great work you're doing. Well, thank you very much. It's been fun. Good questions.