Columns: id (string, 15 values); title (string, 15 values); url (string, 15 values); published (string, 15 values); text (string, length 2 to 633); start (float64, 0 to 4.86k); end (float64, 2 to 4.89k)
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
So we really emphasize a lot in our group just being able to make it so people can do experiments as fast as is reasonable.
2,662.8
2,674.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
So the two main things we do are model parallelism and data parallelism. I'll talk about both. You've talked about this a little bit already, right?
2,674.8
2,683.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
Great. So the best way you can decrease training time is to decrease the step time. And one of the really nice properties most neural nets have is that there's lots and lots of inherent parallelism.
2,683.8
2,695.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
Like if you think about a convolutional model, there's lots of parallelism at each of the layers because all the spatial positions are mostly independent.
2,695.8
2,703.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
You can just run them in parallel on different devices. The problem is figuring out how to distribute that computation in such a way that communication doesn't kill you.
2,703.8
2,716.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
A few things help you. One is locality: convolutional neural nets have this nice property that each neuron is generally looking at something like a five-by-five patch of the data below it.
2,716.8
2,726.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
And it doesn't need anything else. And the neuron next to it has a lot of overlap with the data needed by that first neuron.
2,726.8
2,735.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
You can have towers with little or no connectivity between the towers. So every few layers you might communicate a little bit, but mostly you don't.
2,735.8
2,743.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
The original AlexNet paper did that: it essentially had two separate towers that mostly ran independently on two different GPUs and occasionally exchanged some information. You can also have specialized parts of the model that are active only for some examples.
2,743.8
2,759.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
There are lots of ways to exploit parallelism. When you're just naively compiling matrix multiply code with GCC or something, it'll probably already take advantage of the instruction-level parallelism present in Intel CPU cores.
2,759.8
2,776.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
You can use thread parallelism and spread things that way across cores. Across devices, communication between GPUs is often pretty limited: you have something like a factor of 30 to 40 better bandwidth to local GPU memory than you do to another GPU card's memory on the same machine, and across machine boundaries it's generally even worse.
2,776.8
2,797.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
So it's pretty important to keep as much data local as you can and avoid needing too much communication bandwidth. In model parallelism, the basic idea is that you're just going to partition the computation of the model somehow, maybe spatially like this, maybe layer by layer.
2,797.8
2,817.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
And then in this case, for example, the only communication I need to do is at this boundary: some of the data from partition two is needed as input for partition one, but mostly all the data is local.
2,817.8
2,832.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
The other technique you can use for speeding up convergence is data parallelism. In that case, you're going to use many different replicas of the same model structure, and they're all going to collaborate to update parameters in some shared set of servers that hold the parameter state. Speedups depend a lot on the kind of model; it could be a 10 to 40x speedup for 50 replicas.
2,832.8
2,857.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
Sparse models, with really large embeddings for every vocabulary word, can generally support more parallelism, because most updates only touch a handful of the embedding entries; you know, a sentence might have like 10 unique words out of a vocabulary of a million.
2,857.8
2,873.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
And you can have hundreds or thousands of replicas doing lots of work. So the basic idea in data parallelism is you have these different model replicas, and you're going to have a centralized system that keeps track of the parameters. That may not be just a single machine; it may be a lot of machines, because you need a lot of network bandwidth.
2,873.8
2,892.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
Sometimes that's what it takes to keep all these model replicas fed with parameters. In our big setups, that might be 127 machines at the top. And then you might have a bunch of replicas of the model down there.
2,892.8
2,906.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
And before every mini-batch, each model replica is going to grab the parameters. So it says: okay, you 127 machines, give me the parameters. Then it does the computation on a random mini-batch and figures out what the gradient should be.
2,906.8
2,923.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
It doesn't apply the gradient locally; it sends the gradient back to the parameter servers, and the parameter servers update the current parameter values.
2,923.8
2,931.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
And then before the next step, we do the same thing. It's fairly network-intensive, depending on your model. Things that help here are models that don't have very many parameters.
2,931.8
2,942.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
And then, you know, if you have a convolutional model, you get an additional factor of reuse, because each parameter gets reused at maybe 10,000 different positions in a layer.
2,942.8
2,970.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
And if you have a recurrent model and you unroll it for 100 time steps, you're going to reuse the parameters 100 times just from the unrolling.
2,970.8
2,980.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
So models that have lots of computation and fewer parameters to drive that computation generally work better in data-parallel environments.
2,980.8
2,993.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
There's an obvious issue, depending on how you do this. One way you can do this is completely asynchronously: every model replica is just sitting in a loop, fetching the parameters, doing a mini-batch, computing the gradient, and sending it up there.
2,993.8
3,006.8
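To make that fetch-compute-push loop concrete, here is a minimal, framework-free sketch of asynchronous data parallelism on a toy linear-regression problem. The ParameterServer class and the replica threads are hypothetical stand-ins for the parameter-server machines and model replicas described above, not Google's actual implementation.

```python
import threading
import numpy as np

class ParameterServer:
    """Toy stand-in for the shared parameter-server machines."""
    def __init__(self, dim):
        self.params = np.zeros(dim)
        self.lock = threading.Lock()

    def get(self):
        with self.lock:
            return self.params.copy()          # a replica fetches the current parameters

    def apply_gradient(self, grad, lr=0.1):
        with self.lock:
            self.params -= lr * grad           # the server applies the pushed gradient

def replica_loop(ps, data_x, data_y, steps, batch_size=32):
    rng = np.random.default_rng()
    for _ in range(steps):
        w = ps.get()                                       # 1. grab the parameters
        idx = rng.integers(0, len(data_x), size=batch_size)
        xb, yb = data_x[idx], data_y[idx]                  # 2. random mini-batch
        grad = 2 * xb.T @ (xb @ w - yb) / batch_size       # 3. compute the gradient
        ps.apply_gradient(grad)                            # 4. send it back (possibly stale)

# Toy data: y = x . w_true + noise
rng = np.random.default_rng(0)
w_true = np.array([3.0, -2.0])
X = rng.normal(size=(10_000, 2))
y = X @ w_true + 0.01 * rng.normal(size=10_000)

ps = ParameterServer(dim=2)
replicas = [threading.Thread(target=replica_loop, args=(ps, X, y, 200)) for _ in range(4)]
for t in replicas: t.start()
for t in replicas: t.join()
print("learned parameters:", ps.params)   # close to w_true despite stale gradients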
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
And if you do that asynchronously, then the gradient a replica computes may be completely stale with respect to where the parameters are now.
3,006.8
3,013.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
You've computed it with respect to this parameter value, but meanwhile ten other replicas have made updates that caused the parameters to meander over to here. And now you apply the gradient that you thought was for here to this value.
3,013.8
3,025.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
So it makes theoreticians incredibly uncomfortable. They're already uncomfortable because these are completely nonconvex problems.
3,025.8
3,032.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
But the good news is it works.
3,032.8
3,035.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
Up to a certain level. It would be really good to understand, on a more theoretical basis, the conditions under which this works, but in practice it does seem to work pretty well.
3,035.8
3,048.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
The other thing you can do is do this completely synchronously. So you can have one driving loop that says, OK, everyone go, they all get the parameters, they all compute gradients, and then you wait for all the gradients to show up and do something with the gradients.
3,048.8
3,059.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
And then you can average them or add them together, and that effectively just looks like a giant batch. If you have R replicas, that looks like R times each individual one's batch size, which sometimes works.
3,059.8
3,074.8
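A quick numeric check of that "R replicas looks like an R-times-bigger batch" point: averaging per-replica gradients over R equal shards of a batch gives exactly the gradient of the full batch. The mean-squared-error model here is a made-up example, not any particular network.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=3)
X = rng.normal(size=(96, 3))       # one "giant" batch of 96 examples
y = rng.normal(size=96)

def grad(Xb, yb, w):
    return 2 * Xb.T @ (Xb @ w - yb) / len(Xb)   # mean-squared-error gradient

# Synchronous data parallelism: 3 replicas each take a 32-example shard,
# and the driving loop averages their gradients.
shards = np.split(np.arange(96), 3)
sync_grad = np.mean([grad(X[s], y[s], w) for s in shards], axis=0)

# That is exactly the gradient of the full 96-example batch.
print(np.allclose(sync_grad, grad(X, y, w)))   # True
```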
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
You kind of get diminishing returns from larger and larger batch sizes. But the more training examples you have, the more tolerant you are of a bigger batch size generally.
3,074.8
3,085.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
If you have lots and lots of training examples, a batch size of 8,000 is sort of OK. If you have a million training examples, a batch size of 8,000 is not so great.
3,085.8
3,095.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
Right. I think I said this. There are even more complicated choices, where you can have, like, M asynchronous groups of N synchronous replicas.
3,095.8
3,106.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
Right, I've said that. Convolutional and recurrent models are good because they reuse the parameters a lot. So data parallelism is actually really, really important for almost all of our models.
3,106.8
3,119.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
That's how we get to the point of training models in like half a day or a day, generally.
3,119.8
3,127.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
And so, you know, those are some of the rough kinds of setups we use. And here is an example training graph of an ImageNet model: one GPU, 10 GPUs, 50 GPUs. And that's the kind of speedup you get.
3,127.8
3,145.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
Right, sometimes these graphs are deceiving. The difference between 10 and 50 doesn't seem that big, because the lines are kind of close to each other. But in actual fact, the difference between 10 and 50 is like a factor of 4.1 or something.
3,145.8
3,160.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
So that doesn't look like a factor of 4.1 difference. Does it? But it is.
3,160.8
3,166.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
Anyway, yeah, the way you read it is you see where that one crosses 0.6 and where that one crosses 0.6, and compare those times.
3,166.8
3,177.8
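In other words, the speedup is the ratio of the times at which the two curves first reach the same accuracy. A tiny sketch with made-up curves (the numbers here are hypothetical, not read off the slide):

```python
import numpy as np

hours = np.linspace(0, 60, 601)
acc_10gpu = 0.7 * (1 - np.exp(-hours / 20.0))   # hypothetical training curves
acc_50gpu = 0.7 * (1 - np.exp(-hours / 5.0))

def time_to(acc_curve, target=0.6):
    return hours[np.argmax(acc_curve >= target)]   # first time the curve crosses the target

print(time_to(acc_10gpu) / time_to(acc_50gpu))     # the speedup at 0.6 accuracy
```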
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
Okay. So let me show you some of the slight tweaks you make to TensorFlow models to exploit these different kinds of parallelism.
3,177.8
3,186.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
One of the things we wanted was for these kinds of parallelism notions to be pretty easy to express. One of the things we like about TensorFlow is that it maps pretty well to the kinds of things you might see in a research paper.
3,186.8
3,199.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
You don't have to read all that, but it's not too different from what you would see in a research paper.
3,199.8
3,209.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
Which is kind of nice. So that's like a simple LSTM cell. This is the sequence to sequence model.
3,209.8
3,216.8
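For reference, here is a minimal numpy version of a single LSTM cell step, roughly the handful of equations being alluded to. The weight shapes and gate ordering are just one common convention, not necessarily what's on the slide.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step.
    x: input (d_in,); h_prev, c_prev: previous hidden and cell state (d_hid,)
    W: ((d_in + d_hid) x 4*d_hid) weights; b: (4*d_hid,) biases."""
    z = np.concatenate([x, h_prev]) @ W + b
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c_prev + i * g          # update the cell state
    h = o * np.tanh(c)              # expose a gated view of it
    return h, c

d_in, d_hid = 8, 16
rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(d_in + d_hid, 4 * d_hid))
b = np.zeros(4 * d_hid)
h = c = np.zeros(d_hid)
for t in range(5):                  # unroll a few time steps
    h, c = lstm_step(rng.normal(size=d_in), h, c, W, b)
print(h.shape, c.shape)
```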
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
It's the model that Ilya Sutskever, Oriol Vinyals, and Quoc Le published at NIPS 2014.
3,216.8
3,221.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
We're essentially trying to take an input sequence and map it to an output sequence.
3,221.8
3,226.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
This is a really big area of research. It turns out these kinds of models are applicable for lots and lots of kinds of problems.
3,226.8
3,234.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
There's lots of different groups doing interesting and active work in this area.
3,234.8
3,244.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
Yeah. So here's just some examples of recent work in the last year and a half in this area from lots of different labs around the world.
3,244.8
3,255.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
You've already talked about captioning. Yes. Cool. Yes. So instead of a sequence, you can put in pixels.
3,255.8
3,267.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
So you put in pixels, you run it through a CNN. That's your initial state. And then you can generate captions. It's pretty amazing.
3,267.8
3,275.8
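Here is a toy sketch of that pipeline: image features (standing in for the CNN output) seed the recurrent state, and the decoder then emits words one at a time. The weights, the tiny vocabulary, and the simple tanh recurrence are all hypothetical placeholders for the real CNN-plus-LSTM captioner.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["<start>", "<end>", "a", "train", "on", "tracks", "person", "dog"]
d_img, d_hid, V = 32, 16, len(vocab)

W_init = 0.1 * rng.normal(size=(d_img, d_hid))   # project CNN features to the initial state
W_in   = 0.1 * rng.normal(size=(V, d_hid))       # word embedding
W_rec  = 0.1 * rng.normal(size=(d_hid, d_hid))   # simple tanh recurrence (stand-in for an LSTM)
W_out  = 0.1 * rng.normal(size=(d_hid, V))       # hidden state -> vocabulary logits

def caption(cnn_features, max_len=8):
    h = np.tanh(cnn_features @ W_init)           # image features become the initial state
    word = vocab.index("<start>")
    out = []
    for _ in range(max_len):
        h = np.tanh(W_in[word] + h @ W_rec)
        word = int(np.argmax(h @ W_out))         # greedy decoding over the vocabulary
        if vocab[word] == "<end>":
            break
        out.append(vocab[word])
    return " ".join(out)

print(caption(rng.normal(size=d_img)))           # nonsense words: the weights are untrained
```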
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
So if you'd asked me five years ago, can a computer do that? I would have said, I don't think so, not for a while. But here we are.
3,275.8
3,282.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
It actually can. And the nice thing is it's a generative model, so you can generate different sentences by exploring the distribution.
3,282.8
3,289.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
I think both of those are nice captions. It's not quite as sophisticated as the human one. You'll often see this.
3,289.8
3,299.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
One of the things you'll see is that it's really important to train your models to convergence, not just train them a little bit.
3,299.8
3,308.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
Because like, that's not so good. But if you train that model longer, it's the same model. It just got a lot better.
3,308.8
3,320.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
Same thing here, right? "A train that is sitting on the tracks." Yes, that's true. But that one's better.
3,320.8
3,327.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
But you still see the human has a lot more sophistication, right? Like they know that they're across the tracks near a depot. And that's sort of a more subtle thing that the models don't pick up on.
3,327.8
3,340.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
Another kind of cute use of LSTMs. You can actually use them to solve all kinds of cool graph problems. So,
3,340.8
3,347.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly did this work, where you start with a set of points, and then you try to predict
3,347.8
3,359.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
the traveling salesman tour for those points that works best. Or the convex hull. Or a Delaunay triangulation. It's kind of cool.
3,359.8
3,370.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
It's just a sequence to sequence problem, where you feed in the sequence of points, and then the output is the right set of points for whatever problem you care about.
3,370.8
3,380.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
I already talked about Smart Reply. OK. So, LSTMs. Once you have that LSTM cell code that I showed you, you can unroll it in time, say,
3,380.8
3,391.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
20 time steps. Let's say you wanted four layers per time step instead of one. Well, you would make a little bit of change to your code, and you would do that.
3,391.8
3,401.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
Now you have four layers of computation instead of one. One of the things you might want to do is run each of those layers on a different GPU.
3,401.8
3,409.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
So that's the change you would make to your TensorFlow code to do that. And that then allows you to have a model like this. So this is my sequence.
3,409.8
3,417.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
These are the different deep LSTM layers I have per time step. And after the first little bit, I can start getting more and more GPUs kind of involved in the process.
3,417.8
3,430.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
And you essentially pipeline the entire thing. There's a giant softmax at the top, but you can split across GPUs pretty easily.
3,430.8
3,438.8
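The change being described (running each of the unrolled layers on its own GPU) is basically a device scope around each layer's computation. Below is a hedged sketch in the TensorFlow 1.x-style graph API; a plain tanh layer stands in for the real LSTM cell, and the sizes are made up, so this illustrates the placement pattern rather than the exact code on the slide.

```python
import tensorflow as tf   # assumes a TF 1.x-era graph-mode API

num_steps, num_layers, batch, dim = 20, 4, 32, 512

# One weight matrix per layer; the simple tanh layer below stands in for an LSTM cell.
weights = [tf.Variable(tf.random_normal([dim, dim], stddev=0.01)) for _ in range(num_layers)]

inputs = [tf.placeholder(tf.float32, [batch, dim]) for _ in range(num_steps)]
state = [tf.zeros([batch, dim]) for _ in range(num_layers)]

outputs = []
for t in range(num_steps):
    h = inputs[t]
    for layer in range(num_layers):
        # Pin each layer to its own GPU, so the unrolled model pipelines across devices.
        with tf.device("/gpu:%d" % layer):
            h = tf.tanh(tf.matmul(h + state[layer], weights[layer]))
            state[layer] = h
    outputs.append(h)
```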
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
So that's model parallelism. Right. We've now got six GPUs in this picture. We actually use eight. We split the softmax across four GPUs.
3,438.8
3,448.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
And so every replica would be eight GPU cards on the same machine, all kind of humming along. And then you might use data parallelism on top of that, with a bunch of these eight-GPU-card replicas, to train quickly.
3,448.8
3,464.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
So we have this notion of queues. You can have TensorFlow graphs that do a bunch of stuff and then stuff the results into a queue, and then later you have another bit of TensorFlow graph that starts by dequeuing some stuff and then does some things with it.
3,464.8
3,478.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
One example is you might want to prefetch inputs, do the JPEG decoding to convert them into arrays, maybe do some whitening and random crop selection, and stuff that into a queue. And then you can dequeue on, say, different GPU cards or something.
3,478.8
3,497.8
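A framework-free sketch of that prefetch pattern, using Python's standard queue module: producer threads decode and crop images and stuff them into a bounded queue, while the training loop dequeues ready examples. In TensorFlow the same role is played by in-graph queue ops; decode_and_crop here is a hypothetical placeholder.

```python
import queue
import threading
import numpy as np

def decode_and_crop(filename):
    # Hypothetical stand-in for JPEG decoding, whitening, and random cropping.
    rng = np.random.default_rng(abs(hash(filename)) % (2**32))
    return rng.normal(size=(224, 224, 3)).astype(np.float32)

prefetch_q = queue.Queue(maxsize=64)        # bounded, so producers don't run too far ahead

def producer(filenames):
    for f in filenames:
        prefetch_q.put(decode_and_crop(f))  # enqueue preprocessed examples

files = ["img_%04d.jpg" % i for i in range(256)]
threads = [threading.Thread(target=producer, args=(files[i::4],)) for i in range(4)]
for t in threads: t.start()

for step in range(256):                     # the training loop just dequeues ready examples
    image = prefetch_q.get()
    # ... feed `image` to the model on whichever GPU is free ...
```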
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
We can also group similar examples. For translation work, we actually bucket by sentence length, so that your batch has a bunch of examples that are all roughly the same sentence length, all 13-to-16-word sentences or something.
3,497.8
3,512.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
That just means we then only need to execute exactly that many unrolled steps, rather than, you know, some arbitrary maximum sentence length. Queues are also good for randomization and shuffling. So we have a shuffling queue: you can just stuff in a whole bunch of examples and then get random ones out.
3,512.8
3,531.8
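A small sketch of those two uses of queues: bucketing sentences by length so a batch only unrolls as far as it needs to, and a shuffling buffer that hands back random examples. This is plain Python to show the idea; TensorFlow's shuffling queue plays the second role in-graph.

```python
import random
from collections import defaultdict

def bucket_by_length(sentences, boundaries=(8, 12, 16, 24, 32)):
    """Group sentences so each batch only needs to unroll to its bucket's length."""
    buckets = defaultdict(list)
    for s in sentences:
        bound = next((b for b in boundaries if len(s) <= b), boundaries[-1])
        buckets[bound].append(s)
    return buckets

def shuffling_queue(examples, capacity=1000):
    """Yield examples in random order from a bounded shuffle buffer."""
    buf = []
    for ex in examples:
        buf.append(ex)
        if len(buf) >= capacity:
            yield buf.pop(random.randrange(len(buf)))
    random.shuffle(buf)
    yield from buf

sentences = [["w"] * random.randint(3, 30) for _ in range(50)]
print({b: len(v) for b, v in bucket_by_length(sentences).items()})
print(sum(1 for _ in shuffling_queue(range(20), capacity=5)))   # all 20 examples, random order
```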
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
Oh yeah, data parallelism, right? So again, we want to be able to have many replicas of this thing, and so you make a modest amount of change to your code. We're not quite as happy with this amount of change.
3,531.8
3,550.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
But this is kind of what you would do. There's a supervisor that takes a bunch of things. You now say which devices are the parameter devices, then you prepare the session, and then each one of these replicas runs a local loop.
3,550.8
3,563.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
And you now keep track of how many steps have been applied globally across all the different replicas. As soon as the cumulative sum of all those is big enough, they all exit.
3,563.8
3,574.8
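Below is roughly what that setup looks like with the TF 1.x-era distributed API: tf.train.replica_device_setter pins the variables to parameter devices, tf.train.Supervisor prepares the session, and each replica runs a local loop gated on a shared global step. The cluster addresses, the tiny softmax model, and the step limit are hypothetical placeholders; this is a sketch of the pattern being described, not the code on the slide.

```python
import tensorflow as tf   # TF 1.x-era distributed API

# Hypothetical cluster: 2 parameter-server tasks and 4 worker replicas.
cluster = tf.train.ClusterSpec({
    "ps": ["ps0:2222", "ps1:2222"],
    "worker": ["worker%d:2222" % i for i in range(4)],
})
job, task = "worker", 0   # each process would set these from its own flags
server = tf.train.Server(cluster, job_name=job, task_index=task)

# Variables (the parameters) land on the ps devices; everything else stays local.
with tf.device(tf.train.replica_device_setter(
        worker_device="/job:worker/task:%d" % task, cluster=cluster)):
    x = tf.placeholder(tf.float32, [None, 784])
    y = tf.placeholder(tf.float32, [None, 10])
    w = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
        logits=tf.matmul(x, w) + b, labels=y))
    global_step = tf.Variable(0, trainable=False)
    train_op = tf.train.GradientDescentOptimizer(0.1).minimize(
        loss, global_step=global_step)

sv = tf.train.Supervisor(is_chief=(task == 0), global_step=global_step)
with sv.managed_session(server.target) as sess:
    while not sv.should_stop():
        if sess.run(global_step) >= 100000:   # everyone exits once the global step is big enough
            break
        # ... fetch a mini-batch and run:
        # sess.run(train_op, feed_dict={x: batch_x, y: batch_y})
```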
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
So asynchronous training looks kind of like that. You have three separate client threads driving three separate replicas, all with parameter devices. One of the big simplifications from DistBelief to TensorFlow is that we don't have a separate parameter server notion anymore.
3,574.8
3,588.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
We have tensors, and variables that contain tensors, and they're just other parts of the graph. Typically you map them onto a small set of devices that are going to hold your parameters. But it's all kind of unified in the same framework; whether I'm sending a tensor that's parameters or activations or whatever, it doesn't matter.
3,588.8
3,610.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
This is kind of the synchronous view. I have one client, and I just split my batch across three replicas, add the gradients together, and apply them.
3,610.8
3,621.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
Neural nets turn out to be pretty tolerant of reduced precision. So, you know, convert to FP16. There's actually an IEEE standard for 16-bit floating-point values now, but most CPUs don't quite support it yet. So we implemented our own 16-bit format, which is essentially: we take a 32-bit float and we lop off two bytes of mantissa.
3,621.8
3,647.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
And you should probably do stochastic rounding, but we don't, and it's sort of OK; it's just noise. And then you can convert it back to 32 bits on the other side by filling in zeros.
3,647.8
3,662.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
It's very CPU friendly.
3,662.8
3,670.8
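The "lop off two bytes of mantissa, fill in zeros on the other side" conversion can be sketched directly with numpy bit views. This is an illustration of the idea, not Google's actual implementation, and it skips the stochastic rounding mentioned above.

```python
import numpy as np

def truncate_to_16bit(x):
    # View the float32 bits and keep only the top 16 (sign, exponent, 7 mantissa bits),
    # i.e. lop off the low two bytes of mantissa.
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits >> 16).astype(np.uint16)

def expand_to_float32(h):
    # Fill the dropped mantissa bytes with zeros on the receiving side.
    return (h.astype(np.uint32) << 16).view(np.float32)

x = np.random.randn(5).astype(np.float32)
print(x)
print(expand_to_float32(truncate_to_16bit(x)))   # same values, coarser precision
```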
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
Oh, typo. Well, model and data parallelism combined really let you train models quickly. And that's what this is all really about: being able to take a research idea and try it out on a large data set that's representative of a problem you care about.
3,670.8
3,688.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
Figure out whether that worked, and figure out what the next set of experiments is.
3,688.8
3,692.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
It's pretty easy to express in TensorFlow. The data parallelism we're not quite so happy with, for asynchronous parallelism, but in general it's not too bad.
3,692.8
3,703.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
We've open-sourced TensorFlow because we think that'll make it easier to share research ideas. We think, you know, having lots of people using the system outside of Google is a good way to improve it and bring in ideas that we don't necessarily have.
3,703.8
3,721.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
It makes it pretty easy to deploy machine learning systems into real products because you can go from a research idea into something running on a phone relatively easily.
3,721.8
3,730.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
The community of TensorFlow users outside Google is growing, which is nice. They're doing all kinds of cool things. So I picked a few random examples of things people have done that are posted on GitHub.
3,730.8
3,741.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
This is one where, like, Andrej has this ConvNetJS, which runs neural nets in your browser using JavaScript. And one of the things he has is a little game.
3,741.8
3,752.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
It's reinforcement learning, where the yellow dot learns to eat the green dots and avoid the red dots.
3,752.8
3,760.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
So someone re-implemented that in TensorFlow and actually added orange dots that are really bad. And someone implemented this really nice paper from the University of Tübingen and the Max Planck Institute.
3,760.8
3,774.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
You may have seen this work, where you take a picture and, typically, a painting, and it renders the picture in the style of that painter. And you end up with cool stuff like that.
3,774.8
3,790.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
So someone implemented that. There's a character RNN model. And there's Keras, the popular, sort of higher-level library that makes it easier to express neural nets.
3,790.8
3,802.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
Someone implemented the neural captioning model in TensorFlow. There's an effort underway to translate it into Mandarin.
3,802.8
3,812.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
It's cool. It's great.
3,812.8
3,815.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
The last thing I'll talk about is the Brain Residency program. So we've started this program; it's a bit of an experiment this year. This is more of an FYI for next year, because applications for this year have closed; we're actually selecting our final candidates this week. The idea is that people will spend a year in our group doing deep learning research.
3,815.8
3,836.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
And the hope is they'll come out having published a couple of papers on arXiv or submitted to conferences, and having learned a lot about doing interesting machine learning research.
3,836.8
3,850.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
And we're looking for people for next year, obviously, that have a strong machine learning background; anyone taking this class probably fits the bill. We'll reopen applications in the fall. So if you're graduating next year, this could be a good opportunity.
3,850.8
3,871.8
T7YkPWpwFD4
CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
2016-03-10T00:00:00.000000
There you go. There's a bunch more reading there. Start here because I did a lot of work in the TensorFlow white paper to make the whole set of references clickable.
3,871.8
3,886.8