CS231n Winter 2016: Lecture 15: Invited Talk by Jeff Dean
https://youtu.be/T7YkPWpwFD4
Published: 2016-03-10
And so you can click your way through to 50 other papers.
Okay, that's all I have. I'm done early.
Yeah, so it seems like models like Smart Reply get better the more data you have. Google probably has the biggest dataset of email in the world, but people might be uncomfortable with having their private email being used for this sort of thing.
How do you guys handle this sort of privacy versus accuracy trade-off?
Yes.
So those kinds of things are actually tricky, and we actually have a pretty extensive, detailed process for anything that involves, you know, using a user's private data for these kinds of things.
So for Smart Reply, essentially all the replies that it will ever generate are things that have been said by thousands of users.
So the input to the model for training is an email, which is typically not said by thousands of people, but the only things we'll ever suggest are things that are generated in response by, you know, a sufficient number of unique users to protect the privacy of the users.
So those are the kinds of things you're thinking about when designing products like that, and there's actually a lot of care and thought that goes into it: you know, we think this would be a great feature, but how can we do this in a way that ensures that people's privacy is protected?
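A minimal sketch of the frequency filter described above, under the stated constraint that a reply is only ever suggested if enough distinct users have already written it themselves. The threshold, data shapes, and function names here are illustrative assumptions, not the actual Smart Reply pipeline.

from collections import defaultdict

# Hypothetical threshold: a reply must have been written independently by at
# least this many distinct users before it is ever eligible as a suggestion.
MIN_UNIQUE_AUTHORS = 1000

def build_suggestible_replies(reply_log):
    """reply_log: iterable of (user_id, reply_text) pairs from training data."""
    authors = defaultdict(set)
    for user_id, reply_text in reply_log:
        authors[reply_text.strip().lower()].add(user_id)
    # Keep only replies common enough that suggesting them reveals nothing
    # about any individual user's mail.
    return {r for r, users in authors.items() if len(users) >= MIN_UNIQUE_AUTHORS}

def filter_suggestions(candidates, suggestible):
    """Drop any model-generated candidate that is not in the pre-approved set."""
    return [c for c in candidates if c.strip().lower() in suggestible]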
I wanted to ask a question about some of the work you did on knowledge distillation. I remember from the paper that you were working with this massive JFT dataset, and at some point you said you needed a lot of different specialist networks, rather than one shared network, for some of the more difficult aspects of the classification.
And at some point in the paper you mentioned considering the possibility of distilling all these different specialists into a single larger network.
I think there was, you know, some work at ICLR that did this kind of distillation for reinforcement learning as well, where they had multiple specialists for different games distilled into one single network.
But that was still at a relatively small scale, and I was wondering, you know, have you guys considered, or has there been some further work done on, really large-scale distillation, especially with networks in those kinds of numbers?
So we haven't pursued the distillation work as much as we probably should have, it's just kind of been one of the things on the back burner compared to all the other things we've been working on.
I do think the notion of specialists is interesting. So I didn't talk about that at all, but essentially we had a model that was sort of an arbitrary ImageNet classification model, or rather JFT, which is like 17,000 classes or something; it's an internal dataset.
So we trained a good general model that could deal with all those classes, and then we algorithmically found interesting confusable classes, like all the kinds of mushrooms in the world, and we would train specialists on datasets that were enriched to be primarily mushroom data, plus occasional random images.
And we could train 50 such models that were each good at different kinds of things, and get pretty significant accuracy increases.
At the time we were able to distill it into a single model pretty well, but we haven't really pursued that too much.
Turns out just the mechanics of then training 50 separate models, and then distilling them is a bit unwieldy.
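A rough sketch of that specialist recipe, with made-up helper names: group the classes a general model confuses with each other, then give each specialist a training set dominated by its group plus a small sprinkling of random images. It illustrates the idea only; the actual JFT pipeline and group sizes are not described here.

import numpy as np

def confusable_groups(confusion, num_groups=50, group_size=300):
    """Greedily group classes the general model confuses with one another.
    confusion: [C, C] matrix of confusion counts from the trained general model."""
    sym = confusion + confusion.T
    np.fill_diagonal(sym, 0)
    unassigned = set(range(sym.shape[0]))
    groups = []
    while unassigned and len(groups) < num_groups:
        seed = max(unassigned, key=lambda c: sym[c].sum())  # most-confused class left
        ranked = np.argsort(-sym[seed])                     # classes it gets mixed up with
        group = [c for c in ranked if c in unassigned][:group_size]
        unassigned -= set(group)
        groups.append(group)
    return groups

def specialist_dataset(images, labels, group, dilution=0.05, seed=0):
    """Mostly examples from the group's classes, plus occasional random images so
    the specialist retains a notion of 'everything else'."""
    rng = np.random.default_rng(seed)
    keep = np.isin(labels, group) | (rng.random(len(labels)) < dilution)
    return images[keep], labels[keep]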
I had a question about the part you explained with the big model, the teacher model. Isn't it a bit worrying to think that with a small model, when we optimize it with one technique we only get 40% accuracy, and when we optimize it with another technique we get 60% accuracy? Because the capacity of the model is enough to solve the task, but we don't know how to optimize it properly.
So doesn't that mean that we're failing at optimization in some way?
I will first say, I think the area of optimization is a ripe one for exploration and further research, because, as you say, this clearly demonstrates that. But I mean, it's a different objective we're giving the model, right?
We're telling it either to use just this hard label, or to use this hard label and also get this incredibly rich gradient, which says, like, here are 100 other signals of information.
So in some sense it's an unfair comparison, right? You're telling it a lot more stuff about every example, in that case.
So sometimes it's not so much an optimization failure; maybe we should be figuring out how to feed richer signals than just a single binary label to our models.
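One concrete way to read that "richer signal" point is the distillation-style objective: the small model gets the usual hard label plus the big model's softened output distribution, which carries information about every class. A minimal NumPy sketch of that combined loss; the temperature and mixing weight are illustrative choices, not prescribed values.

import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, hard_label, T=4.0, alpha=0.5):
    """Cross-entropy on the one-hot label, plus cross-entropy against the teacher's
    softened distribution, which provides gradient signal about all the other classes."""
    p_student = softmax(student_logits)
    hard_ce = -np.log(p_student[hard_label] + 1e-12)

    p_teacher_T = softmax(teacher_logits, temperature=T)
    p_student_T = softmax(student_logits, temperature=T)
    soft_ce = -np.sum(p_teacher_T * np.log(p_student_T + 1e-12))

    # The T**2 factor keeps the soft term's gradients comparable across temperatures.
    return alpha * hard_ce + (1 - alpha) * (T ** 2) * soft_ce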
I think that's probably an interesting area to pursue. We've thought about ideas of having a big ensemble of models all training collectively, and sort of exchanging information in the form of their predictions rather than their parameters, because that might be a much cheaper, more network-friendly way of collaboratively training on a really big dataset, where they each train on 1% of the data or something and swap predictions.
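A toy sketch of that prediction-exchange idea, with hypothetical names: each model keeps its usual loss on its own shard, plus a term pulling it toward the averaged predictions the other models made on a shared batch, so only predictions cross the network, never parameters.

import numpy as np

def peer_consensus(peer_probs):
    """Average the [batch, classes] probability arrays received from the other models."""
    return np.mean(np.stack(peer_probs, axis=0), axis=0)

def collaborative_loss(own_probs, hard_labels, consensus, beta=0.3):
    """Hard-label loss on the local shard, plus a term matching the peers' consensus."""
    n = own_probs.shape[0]
    hard_ce = -np.mean(np.log(own_probs[np.arange(n), hard_labels] + 1e-12))
    match_ce = -np.mean(np.sum(consensus * np.log(own_probs + 1e-12), axis=1))
    return (1 - beta) * hard_ce + beta * match_ce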
Could you just do a simple thing where you take the captions, like one that mentions a banana, and convert that to a richer, non-one-hot classification label?
So something like banana at 50% or 100%, aggregated over the entire dataset, so that you can then train on those labels.
Yeah, I mean, I think all these kinds of ideas are worth pursuing. The captioning work is interesting, but we tend to have many fewer images with captions than we have images with sort of hard labels, like "jaguar".
At least that are prepared in a clean way. I think actually on the web there's a lot of images with sentences written about them. The trick is identifying which sentence is about which image.
Yeah.
To what extent do your models train online, and what are the challenges with that?
So it depends on the problem, some problems you don't need to really train online. Speech recognition is a good example. It's not like human vocal cords change that often.
The words you say change a little bit. So query distributions tend to be not very stationary. Right? Like the words everyone collectively says tomorrow are pretty similar to the ones they say today, but subtly different.
Like "Long Island chocolate festival" might suddenly become more and more prominent over the next two weeks or something.
And those kinds of things you know you need to be cognizant of the fact that you want to capture those kinds of effects. And one of the ways to do it is to train your model in an online manner.
Sometimes it doesn't need to be so online that you like get an example and immediately update your model, but you know depending on the problem every five minutes or 10 minutes or hour or day is sufficient.
That's sufficient for most problems, but it is pretty important to do that for non-stationary problems like ads or search queries or things that change over time like that.
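A minimal sketch of that "not fully online" regime: rather than updating on every example, periodically continue training on a sliding window of recent data and push the refreshed model. The interval, window, and function names are all assumptions for illustration.

import time

REFRESH_SECONDS = 10 * 60          # refresh every ten minutes, say
WINDOW_SECONDS = 7 * 24 * 3600     # on roughly the last week of examples

def periodic_refresh(fetch_examples_since, continue_training, publish):
    """fetch_examples_since(t): recent (input, label) pairs newer than timestamp t.
    continue_training(examples): warm-start the current model on them and return it.
    publish(model): push the refreshed model to serving."""
    while True:
        recent = fetch_examples_since(time.time() - WINDOW_SECONDS)
        publish(continue_training(recent))
        time.sleep(REFRESH_SECONDS)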
You mentioned that RankBrain was the third most important signal for search? What are one and two, if you can tell us?
I can't say.
Yes. Yeah.
How do you deal with the noise in your training data?
Yeah, I mean, noise in training data, that actually happens all the time, right? Like even if you look at the ImageNet examples, occasionally you'll come across one and you're like... Actually, I was just sitting in a meeting with some people who are working on visualization techniques, and one of the things they were visualizing was the CIFAR input data. And they had this kind of cool representation of all the CIFAR examples, all mapped down to like four by four pixels each on their screen, so 60,000 images. And you could kind of pick things out and select and sort, and you're like, oh, here's one that the model predicted with high confidence but got wrong.
And the model said airplane, and you look at the image and it's an airplane, but the label is not airplane. You're like, oh, I understand why it got it wrong.
So it's, you know, you want to make sure your data set is as clean as possible because training on noisy data is generally not as good as clean data. But on the other hand, expending too much effort to clean the data is often more effort than it's worth.
So you kind of do some filtering to, you know, throw out the obviously bad stuff, and generally having more, noisier data is often better than having less, cleaner data.
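One cheap version of that "throw out the obviously bad stuff" filtering, sketched with hypothetical helper names: drop examples whose given label a reasonably trained model finds essentially impossible, and keep everything else.

import numpy as np

def filter_obvious_label_noise(inputs, labels, predict_probs, min_prob=1e-3):
    """predict_probs(inputs) -> [N, C] class probabilities from a model trained
    (ideally on held-out folds) over the same data. An example is kept unless the
    model assigns its provided label a vanishingly small probability."""
    probs = predict_probs(inputs)
    label_prob = probs[np.arange(len(labels)), labels]
    keep = label_prob >= min_prob
    return inputs[keep], labels[keep]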
It depends on the problem, but it's certainly one thing to try. And then if you're unhappy with the results, investigate why. Other questions? Okay. Cool. Thank you.
Thank you.