Neural Networks: Zero to Hero
https://www.youtube.com/watch?v=TCH_1BHY58I
Building makemore Part 2: MLP
We implement a multilayer perceptron (MLP) character-level language model. In this video we also introduce many basics of machine learning (e.g. model training, learning rate tuning, hyperparameters, evaluation, train/dev/test splits, under/overfitting, etc.).

Links:
- makemore on github: https://github.com/karpathy/makemore
- jupyter notebook I built in this video: https://github.com/karpathy/nn-zero-to-hero/blob/master/lectures/makemore/makemore_part2_mlp.ipynb
- Colab notebook (new)!!!: https://colab.research.google.com/drive/1YIfmkftLrz6MPTOO9Vwqrop2Q5llHIGK?usp=sharing
- Bengio et al. 2003 MLP language model paper (pdf): https://www.jmlr.org/papers/volume3/bengio03a/bengio03a.pdf
- my website: https://karpathy.ai
- my twitter: https://twitter.com/karpathy
- (new) Neural Networks: Zero to Hero series Discord channel: https://discord.gg/Hp2m3kheJn , for people who'd like to chat more and go beyond youtube comments

Useful links:
- PyTorch internals ref: http://blog.ezyang.com/2019/05/pytorch-internals/

Exercises:
- E01: Tune the hyperparameters of the training to beat my best validation loss of 2.2
- E02: I was not careful with the initialization of the network in this video. (1) What is the loss you'd get if the predicted probabilities at initialization were perfectly uniform? What loss do we achieve? (2) Can you tune the initialization to get a starting loss that is much more similar to (1)?
- E03: Read the Bengio et al 2003 paper (link above), implement and try any idea from the paper. Did it work?

Chapters:
00:00:00 intro
00:01:48 Bengio et al. 2003 (MLP language model) paper walkthrough
00:09:03 (re-)building our training dataset
00:12:19 implementing the embedding lookup table
00:18:35 implementing the hidden layer + internals of torch.Tensor: storage, views
00:29:15 implementing the output layer
00:29:53 implementing the negative log likelihood loss
00:32:17 summary of the full network
00:32:49 introducing F.cross_entropy and why
00:37:56 implementing the training loop, overfitting one batch
00:41:25 training on the full dataset, minibatches
00:45:40 finding a good initial learning rate
00:53:20 splitting up the dataset into train/val/test splits and why
01:00:49 experiment: larger hidden layer
01:05:27 visualizing the character embeddings
01:07:16 experiment: larger embedding size
01:11:46 summary of our final code, conclusion
01:13:24 sampling from the model
01:14:55 Google Colab (new!!) notebook advertisement
Hi everyone. Today we are continuing our implementation of makemore. Now in the last lecture we implemented the bigram language model, and we implemented it both using counts and also using a super simple neural network that has a single linear layer. Now this is the Jupyter notebook that we built out last lecture, and we saw that the way we approached this is that we looked at only the single previous character and we predicted the distribution for the character that would go next in the sequence, and we did that by taking counts and normalizing them into probabilities so that each row here sums to 1. Now this is all well and good if you only have one character of previous context, and this works and it's approachable. The problem with this model of course is that the predictions from this model are not very good, because you only take one character of context, so the model didn't produce very name-like sounding things.

Now the problem with this approach though is that if we are to take more context into account when predicting the next character in a sequence, things quickly blow up, and the size of this table grows; in fact it grows exponentially with the length of the context. Because if we only take a single character at a time, that's 27 possibilities of context, but if we take two characters in the past and try to predict the third one, suddenly the number of rows in this matrix, you can look at it that way, is 27 times 27, so there are 729 possibilities for what could have come in the context. If we take three characters as the context, suddenly we have about 20 thousand possibilities of context, and so there are just way too many rows in this matrix, way too few counts for each possibility, and the whole thing just kind of explodes and doesn't work very well.

So that's why today we're going to move on to this bullet point here, and we're going to implement a multilayer perceptron model to predict the next character in a sequence. And this modeling approach that we're going to adopt follows this paper, Bengio et al. 2003, so I have the paper pulled up here. Now this isn't the very first paper that proposed the use of multilayer perceptrons or neural networks to predict the next character or token in a sequence, but it's definitely one that was very influential around that time, it is very often cited to stand in for this idea, and I think it's a very nice write-up. And so this is the paper that we're going to first look at and then implement. Now this paper has 19 pages, so we don't have time to go into the full detail of this paper, but I invite you to read it; it's very readable, interesting, and has a lot of interesting ideas in it as well. In the introduction they describe the exact same problem I just described, and then to address it they propose the following model. Now keep in mind that we are building a character-level language model, so we're working on the level of characters. In this paper they have a vocabulary of 17,000 possible words and they instead build a word-level language model, but we're going to still stick with the characters, and we'll take the same modeling approach. Now what they do is basically they propose to take every one of these 17,000 words and they're going to associate to each word a, say, 30-dimensional feature vector. So every word is now embedded into a 30-dimensional space, you can think of it that way. So we have 17,000 points or vectors in a 30-dimensional space, and you might imagine that's very crowded, that's a lot of points for a very small space.
Now in the beginning these words are initialized completely randomly, so they're spread out at random, but then we're going to tune these embeddings of these words using backpropagation. So during the course of training of this neural network these points or vectors are going to basically move around in this space, and you might imagine that, for example, words that have very similar meanings, or that are indeed synonyms of each other, might end up in a very similar part of the space, and conversely words that mean very different things would go somewhere else in the space. Now their modeling approach otherwise is identical to ours. They are using a multilayer neural network to predict the next word given the previous words, and to train the neural network they are maximizing the log-likelihood of the training data, just like we did. So the modeling approach itself is identical.

Now here they have a concrete example of this intuition. Why does it work? Basically, suppose that for example you are trying to predict "a dog was running in a ___". Now suppose that the exact phrase "a dog was running in a" has never occurred in the training data, and here you are at sort of test time later, when the model is deployed somewhere, and it's trying to make a sentence, and it's saying "a dog was running in a ___", and because it's never encountered this exact phrase in the training set you're out of distribution, as we say. Like, you don't have fundamentally any reason to suspect what might come next. But this approach actually allows you to get around that, because maybe you didn't see the exact phrase "a dog was running in a" something, but maybe you've seen similar phrases; maybe you've seen the phrase "the dog was running in a ___", and maybe your network has learned that "a" and "the" are frequently interchangeable with each other, and so maybe it took the embedding for "a" and the embedding for "the" and it actually put them nearby each other in the space. And so you can transfer knowledge through that embedding and you can generalize in that way. Similarly, the network could know that cats and dogs are animals and they co-occur in lots of very similar contexts, and so even though you haven't seen this exact phrase, or if you haven't seen exactly "walking" or "running", you can, through the embedding space, transfer knowledge and generalize to novel scenarios.

So let's now scroll down to the diagram of the neural network. They have a nice diagram here, and in this example we are taking three previous words and we are trying to predict the fourth word in a sequence. Now these three previous words, as I mentioned, we have a vocabulary of 17,000 possible words, so every one of these basically is the index of the incoming word, and because there are 17,000 words this is an integer between 0 and 16,999. Now there's also a lookup table that they call C. This lookup table is a matrix that is 17,000 by, say, 30, and basically what we're doing here is we're treating this as a lookup table, and so every index is plucking out a row of this embedding matrix, so that each index is converted to the 30-dimensional vector that corresponds to the embedding vector for that word. So here we have the input layer of 30 neurons for three words, making up 90 neurons in total, and here they're saying that this matrix C is shared across all the words, so we're always indexing into the same matrix C over and over, for each one of these words. Next up is the hidden layer of this neural network.
The size of this hidden layer of this neural net is a hyperparameter. So we use the word hyperparameter when it's kind of like a design choice up to the designer of the neural net, and this can be as large as you'd like or as small as you'd like; so for example the size could be a hundred, and we are going to go over multiple choices of the size of this hidden layer and we're going to evaluate how well they work. So say there were a hundred neurons here; all of them would be fully connected to the 90 numbers that make up these three words. So this is a fully connected layer, and there's a tanh nonlinearity, and then there's this output layer, and because there are 17,000 possible words that could come next, this layer has 17,000 neurons and all of them are fully connected to all of these neurons in the hidden layer. So there are a lot of parameters here because there are a lot of words, so most of the computation is here; this is the expensive layer. Now there are 17,000 logits here, so on top of there we have the softmax layer, which we've seen in our previous video as well. So every one of these logits is exponentiated and then everything is normalized to sum to one, so that we have a nice probability distribution for the next word in the sequence. Now of course during training we actually have the label. We have the identity of the next word in the sequence. That word, or its index, is used to pluck out the probability of that word, and then we are maximizing the probability of that word with respect to the parameters of this neural net. So the parameters are the weights and biases of this output layer, the weights and biases of the hidden layer, and the embedding lookup table C, and all of that is optimized using backpropagation. And these dashed arrows, ignore those; that represents a variation of a neural net that we are not going to explore in this video. So that's the setup, and now let's implement it.

Okay, so I started a brand new notebook for this lecture. We are importing PyTorch and we are importing matplotlib so we can create figures. Then I am reading all the names into a list of words, like I did before, and I'm showing the first eight right here. Keep in mind that we have 32,000 in total; these are just the first eight. And then here I'm building out the vocabulary of characters and all the mappings from the characters as strings to integers and vice versa. Now the first thing we want to do is we want to compile the dataset for the neural network, and I had to rewrite this code; I'll show you in a second what it looks like. So this is the code that I created for the dataset creation, so let me first run it and then I'll briefly explain how this works. So first we're going to define something called block size, and this is basically the context length: how many characters we take to predict the next one. So here in this example we're taking three characters to predict the fourth one, so we have a block size of three. That's the size of the block that supports the prediction. Then here I'm building out the X and Y. The X are the inputs to the neural net and the Y are the labels for each example inside X. Then I'm iterating over the first five words. I'm doing the first five just for efficiency while we are developing all the code, but then later we're going to come here and erase this so that we use the entire training set. So here I'm printing the word emma, and here I'm basically showing the five examples that we can generate out of the single word emma.
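For reference, here is a minimal sketch of the vocabulary and dataset construction being described; the rolling-context loop is walked through in more detail in the next paragraph. The file name names.txt and the exact variable names are assumptions carried over from the previous makemore lecture, not necessarily identical to the notebook.

```python
import torch

# read the names and build the character vocabulary (assumed file name: names.txt)
words = open('names.txt', 'r').read().splitlines()
chars = sorted(list(set(''.join(words))))
stoi = {s: i + 1 for i, s in enumerate(chars)}
stoi['.'] = 0                         # '.' is the padding / end token, index 0
itos = {i: s for s, i in stoi.items()}

block_size = 3                        # context length: how many characters predict the next one
X, Y = [], []
for w in words[:5]:                   # first five words only while developing
    context = [0] * block_size        # start with a padded context of '.' tokens
    for ch in w + '.':
        ix = stoi[ch]
        X.append(context)             # the running context is the input...
        Y.append(ix)                  # ...and the current character is the label
        context = context[1:] + [ix]  # crop and append: a rolling window of context

X = torch.tensor(X)                   # shape (32, 3) for the first five words
Y = torch.tensor(Y)                   # shape (32,)
```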
So when we are given the context of just dot dot dot, the first character in the sequence is e; in this context the label is m; when the context is this, the label is m; and so forth. And the way I build this out is: first I start with a padded context of just zero tokens. Then I iterate over all the characters, I get the character in the sequence, and I basically build out the array Y of this current character and the array X which stores the current running context. And then here, you see, I print everything, and here I crop the context and enter the new character in the sequence. So this is kind of like a rolling window of context. Now we can change the block size here to, for example, four, and in that case we would be predicting the fifth character given the previous four. Or it can be five, and then it would look like this, or it can be, say, 10, and then it would look something like this: we're taking 10 characters to predict the 11th one, and we're always padding with dots. So let me bring this back to three, just so that we have what we have here in the paper. And finally, the dataset right now looks as follows. From these five words we have created a dataset of 32 examples, and each input to the neural net is three integers, and we have a label that is also an integer, Y. So X looks like this, these are the individual examples, and then Y are the labels.

So given this, let's now write a neural network that takes these Xs and predicts the Ys. First let's build the embedding lookup table C. So we have 27 possible characters and we're going to embed them in a lower-dimensional space. In the paper they have 17,000 words and they embed them in spaces as small-dimensional as 30. So they cram 17,000 words into a 30-dimensional space. In our case we have only 27 possible characters, so let's cram them into something as small as, to start with, for example, a two-dimensional space. So this lookup table will be random numbers, and we'll have 27 rows and we'll have two columns. Right, so each one of the 27 characters will have a two-dimensional embedding. So that's our matrix C of embeddings, in the beginning initialized randomly. Now, before we embed all of the integers inside the input X using this lookup table C, let me actually just try to embed a single individual integer, like say 5, so we get a sense of how this works. Now one way this works of course is we can just take C and we can index into row 5, and that gives us a vector, the fifth row of C, and this is one way to do it. The other way that I presented in the previous lecture is seemingly different but actually identical. So in the previous lecture what we did is we took these integers and we used one-hot encoding to first encode them. So with F.one_hot we want to encode integer 5, and we want to tell it that the number of classes is 27. So that's the 27-dimensional vector of all zeros, except the fifth bit is turned on. Now this actually doesn't work. The reason is that this input actually must be a torch tensor. And I'm making some of these errors intentionally, just so you get to see some errors and how to fix them. So this must be a tensor, not an int; fairly straightforward to fix. We get a one-hot vector, the fifth dimension is one, and the shape of this is 27. And now notice that, just as I briefly alluded to in a previous video, if we take this one-hot vector and we multiply it by C, then what would you expect? Well, number one, first you'd expect an error: expected scalar type Long but found Float.
So, a little bit confusing, but the problem here is that the data type of the one-hot is long; it's a 64-bit integer, but C is a float tensor, and PyTorch doesn't know how to multiply an int with a float, and that's why we have to explicitly cast this to a float so that we can multiply. Now the output here is identical, and it's identical because of the way the matrix multiplication here works. We have the one-hot vector multiplying columns of C, and because of all the zeros they actually end up masking out everything in C except for the fifth row, which is plucked out. And so we actually arrive at the same result, and that tells you that here we can interpret this first piece, this embedding of the integer, in two ways: we can either think of it as the integer indexing into a lookup table C, but equivalently we can also think of this little piece here as a first layer of this bigger neural net. This layer here has neurons that have no nonlinearity, there's no tanh, they are just linear neurons, and their weight matrix is C. And then we are encoding integers into one-hot and feeding those into a neural net, and this first layer basically embeds them. So those are two equivalent ways of doing the same thing. We're just going to index, because it's much, much faster, and we're going to discard this interpretation of one-hot inputs into neural nets; we're just going to index integers and create and use embedding tables.

Now, embedding a single integer like 5 is easy enough. We can simply ask PyTorch to retrieve the fifth row of C, or the row index 5 of C. But how do we simultaneously embed all of these 32 by 3 integers stored in the array X? Luckily, PyTorch indexing is fairly flexible and quite powerful. So it doesn't just work to ask for a single element 5 like this; you can actually index using lists. So for example we can get the rows 5, 6, and 7, and this will just work like this: we can index with a list. It doesn't just have to be a list, it can also actually be a tensor of integers, and we can index with that. So this is an integer tensor 5, 6, 7, and this will just work as well. In fact we can also, for example, repeat row 7 and retrieve it multiple times, and that same index will just get embedded multiple times here. So here we are indexing with a one-dimensional tensor of integers. But it turns out that you can also index with multi-dimensional tensors of integers. Here we have a two-dimensional tensor of integers, so we can simply just do C at X, and this just works. And the shape of this is 32 by 3, which is the original shape, and now for every one of those 32 by 3 integers we've retrieved the embedding vector here. So, as an example: at example index 13, dimension 2, the integer is 1. And so here if we do C of X, which gives us that array, and then we index into [13, 2] of that array, then we get the embedding here. And you can verify that C at 1, where 1 is the integer at that location, is indeed equal to this. You see, they're equal. So basically, long story short, PyTorch indexing is awesome, and to embed simultaneously all of the integers in X we can simply do C of X, and that is our embedding, and that just works.

Now let's construct this layer here, the hidden layer. So we have W1, as I'll call it: these are weights which we will initialize randomly. Now the number of inputs to this layer is going to be three times two, right, because we have two-dimensional embeddings and we have three of them.
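Before building out the hidden layer, a short sketch pulling together the two equivalent embedding lookups and the flexible indexing just described. X is the dataset tensor from the sketch above; the generator seed is my own addition for reproducibility, not necessarily the one used in the lecture.

```python
import torch
import torch.nn.functional as F

g = torch.Generator().manual_seed(2147483647)   # assumed seed, just for reproducibility
C = torch.randn((27, 2), generator=g)           # embedding table: 27 characters, 2-dim embeddings

# two equivalent ways to embed the integer 5:
e1 = C[5]                                                      # index directly into row 5
e2 = F.one_hot(torch.tensor(5), num_classes=27).float() @ C    # one-hot times C plucks out the same row
print(torch.allclose(e1, e2))                                  # True

# PyTorch indexing also accepts tensors of indices, including multi-dimensional ones:
print(C[torch.tensor([5, 6, 7])].shape)          # torch.Size([3, 2])
emb = C[X]                                       # X is the (32, 3) dataset from the earlier sketch
print(emb.shape)                                 # torch.Size([32, 3, 2])

# weights and biases of the hidden layer: 3 characters x 2-dim embeddings = 6 inputs
W1 = torch.randn((6, 100), generator=g)
b1 = torch.randn(100, generator=g)
```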
So the number of inputs is six, and the number of neurons in this layer is a variable up to us. Let's use 100 neurons as an example, and then the biases will also be initialized randomly, and we just need 100 of them. Now the problem with this is that normally we would take the input, in this case that's the embedding, and we'd like to multiply it with these weights and then add the bias. This is roughly what we want to do, but the problem here is that these embeddings are stacked up in the dimensions of this input tensor. So this matrix multiplication will not work, because this has shape 32 by 3 by 2 and I can't multiply that by 6 by 100. So somehow we need to concatenate these inputs here together so that we can do something along these lines, which currently does not work. So how do we transform this 32 by 3 by 2 into a 32 by 6, so that we can actually perform this multiplication over here?

I'd like to show you that there are usually many ways of implementing what you'd like to do in torch, and some of them will be faster, better, shorter, etc. And that's because torch is a very large library and it's got lots and lots of functions. So if we just go to the documentation and click on torch, you'll see that my scrollbar here is very tiny, and that's because there are so many functions that you can call on these tensors to transform them, create them, multiply them, add them, perform all kinds of different operations on them. And so this is kind of like the space of possibility, if you will. Now one of the things that you can do is we can Ctrl+F here for "concatenate", and we see that there's a function torch.cat, short for concatenate. And this concatenate is given a sequence of tensors in a given dimension, and these tensors must have the same shape, etc. So we can use the concatenate operation, in a naive way, to concatenate these three embeddings for each input. So in this case we have emb, of this shape, and really what we want to do is we want to retrieve these three parts and concatenate them. So we want to grab all the examples, we want to grab first the zeroth index, and then all of this. So this plucks out the 32 by 2 embeddings of just the first word here. And so basically we want this guy, we want the first dimension, and we want the second dimension, and these are the three pieces individually. And then we want to treat this as a sequence and we want to torch.cat on that sequence. So this is the list; torch.cat takes a sequence of tensors, and then we have to tell it along which dimension to concatenate. So in this case all of these are 32 by 2, and we want to concatenate not across dimension zero but across dimension one. So passing in one gives us a result whose shape is 32 by 6, exactly as we'd like. So that basically took the 32 by 2 pieces and concatenated them into 32 by 6. Now this is kind of ugly, because this code would not generalize if we want to later change the block size. Right now we have three inputs, three words, but what if we had five? Then here we would have to change the code, because I'm indexing directly. Well, torch comes to the rescue again, because there turns out to be a function called unbind, and it removes a tensor dimension. So it removes a tensor dimension and returns a tuple of all slices along the given dimension, without it. So this is exactly what we need. And basically we call torch.unbind of emb, passing in dimension one, index one.
This gives us a list of tensors exactly equivalent to this. So running this gives us a len of three, and it's exactly this list. And so we can call torch.cat on it, along the first dimension, and this works, and the shape is the same. But now it doesn't matter if we have block size three or five or ten, this will just work. So this is one way to do it. But it turns out that in this case there's actually a significantly better and more efficient way, and this gives me an opportunity to hint at some of the internals of torch.Tensor. So let's create an array here of elements from zero to 17, and the shape of this is just 18; it's a single vector of 18 numbers. It turns out that we can very quickly re-represent this as differently sized n-dimensional tensors. We do this by calling .view, and we can say that actually this is not a single vector of 18: this is a 2 by 9 tensor, or alternatively this is a 9 by 2 tensor, or this is actually a 3 by 3 by 2 tensor. As long as the total number of elements here multiplies to be the same, this will just work. And in PyTorch this operation, calling .view, is extremely efficient. And the reason for that is that in each tensor there's something called the underlying storage, and the storage is just the numbers, always as a one-dimensional vector, and this is how the tensor is represented in computer memory. It's always a one-dimensional vector. But when we call .view we are manipulating some of the attributes of that tensor that dictate how this one-dimensional sequence is interpreted to be an n-dimensional tensor. And so what's happening here is that no memory is being changed, copied, moved, or created when we call .view. The storage is identical, but when you call .view some of the internal attributes of the view of this tensor are being manipulated and changed; in particular, there's something called the storage offset, strides, and shapes, and those are manipulated so that this one-dimensional sequence of bytes is seen as different n-dimensional arrays. There's a blog post here from ezyang called PyTorch internals, where he goes into some of this with respect to tensors and how the view of a tensor is represented, and this is really just a logical construct for representing the physical memory. So this is a pretty good blog post that you can go into. I might also create an entire video on the internals of torch.Tensor and how this works. For here, we just note that this is an extremely efficient operation. And if I delete this and come back to our emb, we see that the shape of our emb is 32 by 3 by 2, but we can simply ask PyTorch to view this instead as a 32 by 6. And the way that gets flattened into a 32 by 6 array just happens to be that these two get stacked up in a single row, and so that's basically the concatenation operation that we're after. And you can verify that this actually gives the exact same result as what we had before. So this is an element-wise equals, and you can see that all the elements of these two tensors are the same, and so we get the exact same result. So, long story short, we can actually just come here, and if we just view this as a 32 by 6 instead, then this multiplication will work and give us the hidden states that we're after. So if this is h, then h.shape is now 32 by 100: the 100-dimensional activations for every one of our 32 examples. And this gives the desired result. Let me do two things here. Number one, let's not use 32.
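A small sketch of the storage/view discussion and the three equivalent ways to flatten the embeddings; emb, W1, b1 are carried over from the sketch above.

```python
import torch

a = torch.arange(18)
print(a.view(2, 9).shape, a.view(9, 2).shape, a.view(3, 3, 2).shape)  # all valid: 18 elements each
print(a.storage())      # the underlying storage is always a flat 1-D sequence of numbers

# three equivalent ways to flatten the (32, 3, 2) embeddings into (32, 6):
cat1 = torch.cat([emb[:, 0, :], emb[:, 1, :], emb[:, 2, :]], dim=1)  # explicit, hard-codes block_size=3
cat2 = torch.cat(torch.unbind(emb, dim=1), dim=1)                    # generalizes to any block size
flat = emb.view(32, 6)                                               # no new memory, just a new view
print((cat1 == flat).all(), (cat2 == flat).all())                    # both True

h = flat @ W1 + b1      # pre-activations of the hidden layer
print(h.shape)          # torch.Size([32, 100])
```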
We can, for example, do something like emb.shape at 0, so that we don't hard-code these numbers and this would work for any size of this emb. Or alternatively we can also do negative one. When we do negative one, PyTorch will infer what this should be, because the number of elements must be the same, and we're saying that this is 6; PyTorch will derive that this must be 32, or whatever else it is if emb is of a different size. One more thing I'd like to point out is that here, when we do the concatenation, it is actually much less efficient, because this concatenation would create a whole new tensor with a whole new storage; new memory is being created, because there's no way to concatenate tensors just by manipulating the view attributes. So this is inefficient and creates all kinds of new memory. So let me delete this now; we don't need this. And here, to calculate h, we want to also take the tanh of this, to get our h. So these are now numbers between negative one and one, because of the tanh, and we have that the shape is 32 by 100, and that is basically this hidden layer of activations here, for every one of our 32 examples.

Now there's one more thing I glossed over that we have to be very careful with, and that's this plus here. In particular we want to make sure that the broadcasting will do what we like. The shape of this is 32 by 100 and b1's shape is 100. So we see that the addition here will broadcast these two, and in particular we have 32 by 100 broadcasting to 100. So broadcasting will align on the right, create a fake dimension here, so this will become a 1 by 100 row vector, and then it will copy vertically for every one of these 32 rows and do an element-wise addition. So in this case the correct thing will be happening, because the same bias vector will be added to all the rows of this matrix. So that is correct, that's what we'd like, and it's always good practice to just make sure, so that you don't shoot yourself in the foot.

And finally, let's create the final layer here. So let's create W2 and b2. The input now is 100, and the output number of neurons will be, for us, 27, because we have 27 possible characters that come next, so the biases will be 27 as well. So therefore the logits, which are the outputs of this neural net, are going to be h multiplied by W2 plus b2. logits.shape is 32 by 27, and the logits look good. Now, exactly as we saw in the previous video, we want to take these logits and we want to first exponentiate them to get our fake counts, and then we want to normalize them into a probability. So prob is counts divided by counts.sum along the first dimension, with keepdim as True, exactly as in the previous video. And so prob.shape now is 32 by 27, and you'll see that every row of prob sums to one, so it's normalized; that gives us the probabilities. Now of course we have the actual letter that comes next, and that comes from this array Y, which we created during the dataset creation. So Y is this last piece here, which is the identity of the next character in the sequence that we'd like to now predict. So what we'd like to do now is, just as in the previous video, we'd like to index into the rows of prob, and in each row we'd like to pluck out the probability assigned to the correct character, as given here. So first we have torch.arange of 32, which is kind of like an iterator over numbers from 0 to 31, and then we can index into prob in the following way.
prob at torch.arange of 32, which iterates over the rows, and then in each row we'd like to grab this column, as given by Y. So this gives the current probabilities, as assigned by this neural network with this setting of its weights, to the correct character in the sequence. And you can see here that this looks okay for some of these characters, like this is basically 0.2, but it doesn't look very good at all for many other characters; like this is a 0.0701 probability, and so the network thinks that some of these are extremely unlikely. But of course we haven't trained the neural network yet, so this will improve, and ideally all of these numbers here of course are one, because then we are correctly predicting the next character. Now, just as in the previous video, we want to take these probabilities, we want to look at the log probability, and then we want to look at the average log probability, and the negative of it, to create the negative log likelihood loss. So the loss here is 17, and this is the loss that we'd like to minimize to get the network to predict the correct character in the sequence.

Okay, so I rewrote everything here and made it a bit more respectable. So here's our dataset, here are all the parameters that we defined. I'm now using a generator to make it reproducible. I clustered all the parameters into a single list of parameters, so that for example it's easy to count them and see that in total we currently have about 3,400 parameters. And this is the forward pass, as we developed it, and we arrive at a single number here, the loss, that is currently expressing how well this neural network works with the current setting of parameters.

Now I would like to make it even more respectable. So in particular, see these lines here where we take the logits and we calculate a loss: we're not actually reinventing the wheel here. This is just classification, and many people use classification, and that's why there is an F.cross_entropy function in PyTorch to calculate this much more efficiently. So we could just simply call F.cross_entropy, and we can pass in the logits and the array of targets, Y, and this calculates the exact same loss. So in fact we can simply put this here and erase these three lines, and we're going to get the exact same result. Now there are actually many good reasons to prefer F.cross_entropy over rolling your own implementation like this. I did this for educational reasons, but you'd never use this in practice. Why is that? Number one: when you use F.cross_entropy, PyTorch will not actually create all these intermediate tensors, because these are all new tensors in memory, and all this is fairly inefficient to run like this. Instead, PyTorch will cluster up all these operations and very often create fused kernels that very efficiently evaluate these expressions, which are sort of clustered mathematical operations. Number two: the backward pass can be made much more efficient, and not just because it's a fused kernel, but also because analytically and mathematically it's often a much simpler backward pass to implement. We actually saw this with micrograd. You see, when we implemented tanh, the forward pass of this operation to calculate the tanh was a fairly complicated mathematical expression. But because it's a clustered mathematical expression, when we did the backward pass we didn't individually backward through the exp and the two times and the minus one and the division, etc.
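Putting the forward pass and the two loss computations side by side, a sketch assuming the X, Y tensors of the 32 overfitting examples from earlier; the seed is my own choice for reproducibility.

```python
import torch
import torch.nn.functional as F

# parameters, with the shapes used in the lecture
g = torch.Generator().manual_seed(2147483647)
C  = torch.randn((27, 2), generator=g)
W1 = torch.randn((6, 100), generator=g)
b1 = torch.randn(100, generator=g)
W2 = torch.randn((100, 27), generator=g)
b2 = torch.randn(27, generator=g)
parameters = [C, W1, b1, W2, b2]
print(sum(p.nelement() for p in parameters))   # about 3,400 parameters

# forward pass
emb = C[X]                                     # (32, 3, 2)
h = torch.tanh(emb.view(-1, 6) @ W1 + b1)      # (32, 100)
logits = h @ W2 + b2                           # (32, 27)

# loss "rolled by hand": exponentiate, normalize, pluck out the correct character, average -log
counts = logits.exp()
prob = counts / counts.sum(1, keepdim=True)
loss_manual = -prob[torch.arange(32), Y].log().mean()

# equivalent, but more efficient and numerically safer
loss = F.cross_entropy(logits, Y)
print(loss_manual.item(), loss.item())         # the two match
```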
We just said it's 1 minus t squared, and that's a much simpler mathematical expression, and we were able to do this because we're able to reuse calculations, and because we are able to mathematically and analytically derive the derivative, and often that expression simplifies mathematically, and so there's much less to implement. So not only can it be made more efficient because it runs in a fused kernel, but also because the expressions can take a much simpler form mathematically. So that's number one. Number two: under the hood, F.cross_entropy can also be significantly more numerically well behaved. Let me show you an example of how this works. Suppose we have a logit of negative two, three, negative three, zero, and five, and then we are taking the exponent of it and normalizing it to sum to one. So when logits take on these values, everything is well and good and we get a nice probability distribution. Now consider what happens when some of these logits take on more extreme values, and that can happen during optimization of a neural network. Suppose that some of these numbers grow very negative, like say negative 100: then actually everything will come out fine. We still get probabilities that, you know, are well behaved and they sum to one and everything is great. But because of the way the exp works, if you have very positive logits, like say positive 100 in here, you actually start to run into trouble, and we get not-a-number here. And the reason for that is that these counts have an inf here. So if you pass in a very negative number to exp, you just get a very small number, very near zero, and that's fine. But if you pass in a very positive number, suddenly we run out of range in our floating point number that represents these counts. So basically we're taking e and we're raising it to the power of 100, and that gives us inf, because we've run out of dynamic range on this floating point number that is count. And so we cannot pass very large logits through this expression. Now let me reset these numbers to something reasonable. The way PyTorch solves this — you see how we have a really well behaved result here — is that, because of the normalization here, you can actually offset the logits by any arbitrary constant value that you want. So if I add one here you actually get the exact same result, or if I add two, or if I subtract three: any offset will produce the exact same probabilities. So, because negative numbers are okay but positive numbers can actually overflow this exp, what PyTorch does is it internally calculates the maximum value that occurs in the logits and it subtracts it. So in this case it would subtract five, and so therefore the greatest number in logits will become zero, and all the other numbers will become some negative numbers, and then the result of this is always well behaved. So even if we have 100 here, previously not good, but because PyTorch will subtract 100, this will work. And so there are many good reasons to call cross entropy. Number one, the forward pass can be much more efficient; the backward pass can be much more efficient; and also things can be much more numerically well behaved.

Okay, so let's now set up the training of this neural net. We have the forward pass. We don't need these, because we have that loss is equal to F.cross_entropy. That's the forward pass. Then we need the backward pass. First we want to set the gradients to be zero.
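To make the numerical-stability argument concrete, a small self-contained sketch; the helper name is hypothetical.

```python
import torch

def softmax_by_hand(logits):
    counts = logits.exp()
    return counts / counts.sum()

logits = torch.tensor([-2.0, 3.0, -3.0, 0.0, 5.0])
print(softmax_by_hand(logits))           # well behaved probability distribution
print(softmax_by_hand(logits + 10.0))    # any constant offset gives the exact same probabilities
print(softmax_by_hand(torch.tensor([-100.0, 3.0, -3.0, 0.0, 5.0])))  # very negative logits: still fine
print(softmax_by_hand(torch.tensor([100.0, 3.0, -3.0, 0.0, 5.0])))   # very positive: exp overflows to inf -> nan

# F.cross_entropy avoids this internally by subtracting the max logit first, e.g.:
print(softmax_by_hand(logits - logits.max()))  # same result, and the largest exponent is exp(0) = 1
```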
So for p in parameters, we want to make sure that p.grad is None, which is the same as setting it to zero in PyTorch. And then loss.backward() to populate those gradients. Once we have the gradients we can do the parameter update. So for p in parameters, we want to take p.data and we want to nudge it by negative learning rate times p.grad. And then we want to repeat this a few times, and let's print the loss here as well. Now this won't suffice, and it will create an error, because we also have to go for p in parameters and make sure that p.requires_grad is set to True in PyTorch, and then this should just work.

Okay, so we started off with a loss of 17 and we're decreasing it. Let's run longer, and you see how the loss decreases a lot here. So if we just run for a thousand times we get a very, very low loss, and that means that we're making very good predictions. Now the reason that this is so straightforward right now is because we're only overfitting 32 examples. So we only have 32 examples, of the first five words, and therefore it's very easy to make this neural net fit only these 32 examples, because we have 3,400 parameters and only 32 examples. So we're doing what's called overfitting a single batch of the data, and getting a very low loss and good predictions. But that's just because we have so many parameters for so few examples, so it's easy to make this be very low. Now, we're not able to achieve exactly zero, and the reason for that is, we can for example look at the logits which are being predicted, and we can look at the max along the first dimension, and in PyTorch max reports both the actual values that take on the maximum, but also the indices of these. And you'll see that the indices are very close to the labels, but in some cases they differ. For example, in this very first example the predicted index is 19 but the label is 5, and we're not able to make the loss be zero. And fundamentally that's because here, the very first, or the zeroth, index is the example where dot dot dot is supposed to predict e, but you see how dot dot dot is also supposed to predict an o, and dot dot dot is also supposed to predict an i, and then an s as well. And so basically e, o, a, i, or s are all possible outcomes in the training set for the exact same input. So we're not able to completely overfit and make the loss be exactly zero, but we're getting very close in the cases where there's a unique input for a unique output. In those cases we do what's called overfit, and we basically get the exact correct result.

So now all we have to do is we just need to make sure that we read in the full dataset and optimize the neural net. Okay, so let's swing back up to where we created the dataset, and we see that here we only used the first five words. So let me now erase this, and let me erase the print statements, otherwise we'd be printing way too much. And so when we process the full dataset of all the words we now have 228,000 examples instead of just 32. So let's now scroll back down. The dataset is now much larger. We initialize the weights, the same number of parameters, they all require gradients, and then let's push this print of loss.item to be here, and let's just see how the optimization goes if we run this. Okay, so we started with a fairly high loss, and then as we're optimizing the loss is coming down. But you'll notice that it takes quite a bit of time for every single iteration, so let's actually address that, because we're doing way too much work forwarding and backwarding 228,000 examples.
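The basic gradient-descent loop just described, as a sketch, before we switch to mini-batches below. parameters, C, W1, b1, W2, b2, X, Y are carried over from the earlier sketches; 100 steps and a learning rate of 0.1 are illustrative.

```python
for p in parameters:
    p.requires_grad = True           # needed so loss.backward() populates .grad

learning_rate = 0.1                  # guessed for now; tuned properly below
for _ in range(100):
    # forward pass
    emb = C[X]
    h = torch.tanh(emb.view(-1, 6) @ W1 + b1)
    logits = h @ W2 + b2
    loss = F.cross_entropy(logits, Y)
    # backward pass
    for p in parameters:
        p.grad = None                # zero out (reset) the gradients
    loss.backward()
    # update
    for p in parameters:
        p.data += -learning_rate * p.grad

print(loss.item())                   # overfitting this single batch drives the loss very low
```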
In practice what people usually do is they perform the forward pass, backward pass, and update on mini-batches of the data. So what we want to do is we want to randomly select some portion of the dataset, and that's a mini-batch, and then only forward, backward, and update on that little mini-batch, and then we iterate on those mini-batches. So in PyTorch we can, for example, use torch.randint. We can generate numbers between 0 and 5 and make 32 of them; I believe the size has to be a tuple in PyTorch. So we can have a tuple, 32, of numbers between 0 and 5. But actually we want X.shape at 0 here. And so this creates integers that index into our dataset, and there are 32 of them. So if our mini-batch size is 32, then we can come here and we can first do the mini-batch construction. So the integers that we want to optimize in this single iteration are in ix, and then we want to index into X with ix to only grab those rows. So we're only getting 32 rows of X, and therefore embeddings will again be 32 by 3 by 2, not 228,000 by 3 by 2. And then this ix has to be used not just to index into X, but also to index into Y. And now this should be mini-batches, and this should be much, much faster. So, okay, it's almost instant. So this way we can run many, many examples nearly instantly, and decrease the loss much, much faster. Now, because we're only dealing with mini-batches, the quality of our gradient is lower, so the direction is not as reliable; it's not the actual gradient direction. But the gradient direction is good enough, even when it's estimated on only 32 examples, that it is useful. And so it's much better to have an approximate gradient and just make more steps, than it is to evaluate the exact gradient and take fewer steps. So that's why in practice this works quite well.

So let's now continue the optimization. Let me take out this loss.item from here and place it over here at the end. Okay, so we're hovering around 2.5 or so. However, this is only the loss for that mini-batch, so let's actually evaluate the loss here for all of X and for all of Y, just so we have a full sense of exactly how well the model is doing right now. So right now we're at about 2.7 on the entire training set. So let's run the optimization for a while. Okay, we're at 2.6, 2.57, 2.53. Now one issue of course is we don't know if we're stepping too slow or too fast. So this 0.1 I just guessed. So one question is, how do you determine this learning rate, and how do we gain confidence that we're stepping at the right sort of speed? So I'll show you one way to determine a reasonable learning rate. It works as follows. Let's reset our parameters to the initial settings. And now let's print at every step, but let's only do 10 steps or so, or maybe 100 steps. We want to find a reasonable search range, if you will. So for example, this is very low; we see that the loss is barely decreasing, so that's too low, basically. So let's try this one. Okay, so we're decreasing the loss, but not very quickly, so that's a pretty good low end of the range. Now let's reset it again, and now let's try to find the place at which the loss kind of explodes. So maybe at negative one. Okay, we see that we're minimizing the loss, but you see how it's kind of unstable; it goes up and down quite a bit. So negative one is probably a fast learning rate. Let's try negative 10. Okay, so this isn't optimizing; this is not working very well. So negative 10 is way too big. Negative one was already kind of big.
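Backing up to the mini-batch construction for a moment, a sketch of the loop with torch.randint, assuming X and Y now hold the full 228,000-example dataset and the parameters from the earlier sketches.

```python
for _ in range(1000):
    # mini-batch construction: 32 random example indices into the dataset
    ix = torch.randint(0, X.shape[0], (32,))
    emb = C[X[ix]]                               # (32, 3, 2) instead of (228000, 3, 2)
    h = torch.tanh(emb.view(-1, 6) @ W1 + b1)
    logits = h @ W2 + b2
    loss = F.cross_entropy(logits, Y[ix])        # ix indexes into Y as well
    for p in parameters:
        p.grad = None
    loss.backward()
    for p in parameters:
        p.data += -0.1 * p.grad

print(loss.item())                               # loss on the last mini-batch only

# the loss over the full training set is a better summary of progress
emb = C[X]
h = torch.tanh(emb.view(-1, 6) @ W1 + b1)
print(F.cross_entropy(h @ W2 + b2, Y).item())
```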
So therefore negative one was somewhat reasonable, if I reset. So I'm thinking that the right learning rate is somewhere between negative 0.001 and negative one. So the way we can do this here is we can use torch.linspace, and we want to basically do something like this, between 0.001 and one, but the number of steps is one more parameter that's required. Let's do a thousand steps. This creates 1000 numbers between 0.001 and 1. But it doesn't really make sense to step between these linearly, so instead let me create a learning rate exponent, and instead of 0.001 this will be negative three, and this will be zero, and then the actual learning rates that we want to search over are going to be 10 to the power of lre. So now what we're doing is we're stepping linearly between the exponents of these learning rates. This is 0.001 and this is 1, because 10 to the power of 0 is 1, and therefore we are spaced exponentially in this interval. So these are the candidate learning rates that we want to sort of search over, roughly.

So now what we're going to do is, here, we are going to run the optimization for 1000 steps, and instead of using a fixed number we are going to use the learning rate indexing into here, lrs at i, and make this i. So basically, let me reset this to be again starting from random, creating these learning rates between 0.001 and 1, but exponentially stepped. And here what we're doing is we're iterating a thousand times, and we're going to use the learning rate that's in the beginning very, very low; in the beginning it's going to be 0.001, but by the end it's going to be 1, and then we're going to step with that learning rate. And now what we want to do is we want to keep track of the learning rates that we used, and we want to look at the losses that resulted. And so here, let me track stats: lri.append(lr) and lossi.append(loss.item()). Okay, so again reset everything and then run. And so basically we started with a very low learning rate and we went all the way up to a learning rate of 1. And now what we can do is we can plt.plot the two: we can plot the learning rates on the x-axis and the losses we saw on the y-axis. And often you're going to find that your plot looks something like this, where in the beginning you have very low learning rates, so basically barely anything happened; then we got to a nice spot here; and then as we increased the learning rate enough, we basically started to become kind of unstable here. So a good learning rate turns out to be somewhere around here. And because we have lri here, we actually may want to plot not lr, not the learning rate, but the exponent. So it would be the lre at i that is maybe what we want to log. So let me reset this and redo that calculation, but now on the x-axis we have the exponent of the learning rate, and so we can see the exponent of the learning rate that is good to use: it would be roughly in the valley here, because here the learning rates are just way too low, then here we expect a relatively good learning rate, somewhere here, and then here things are starting to explode. So somewhere around negative one as the exponent of the learning rate is a pretty good setting, and 10 to the negative 1 is 0.1. So 0.1 was actually a fairly good learning rate, around here, and that's what we had in the initial setting. But that's roughly how you would determine it. And so here now we can take out the tracking of these.
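A sketch of the learning-rate search just described; it assumes the parameters have been freshly re-initialized before running it.

```python
import matplotlib.pyplot as plt

lre = torch.linspace(-3, 0, 1000)   # exponents, stepped linearly from -3 to 0
lrs = 10 ** lre                     # candidate learning rates, spaced exponentially from 0.001 to 1

lri, lossi = [], []
for i in range(1000):
    ix = torch.randint(0, X.shape[0], (32,))
    emb = C[X[ix]]
    h = torch.tanh(emb.view(-1, 6) @ W1 + b1)
    loss = F.cross_entropy(h @ W2 + b2, Y[ix])
    for p in parameters:
        p.grad = None
    loss.backward()
    lr = lrs[i]                     # sweep the learning rate over the course of the run
    for p in parameters:
        p.data += -lr * p.grad
    lri.append(lre[i].item())       # track the exponent, not the raw rate
    lossi.append(loss.item())

plt.plot(lri, lossi)                # the "valley" before the loss explodes sits near exponent -1, i.e. lr ~ 0.1
plt.show()
```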
And we can just simply set lr to be 10 to the negative 1, or in other words 0.1, as it was before. And now we have some confidence that this is actually a fairly good learning rate. And so now what we can do is we can crank up the iterations; we can reset our optimization, and we can run for a pretty long time using this learning rate. Oops, and we don't want to print; it's way too much printing. So let me again reset and run 10,000 steps. Okay, so we're at 2.48 roughly. Let's run another 10,000 steps. 2.46. And now let's do one learning rate decay. What this means is we're going to take our learning rate and we're going to 10x lower it, so that at the late stages of training, potentially, we go a bit slower. Let's do one more actually at 0.1, just to see if we're making a dent here. Okay, we're still making a dent. And by the way, the bigram loss that we achieved last video was 2.45, so we've already surpassed the bigram level. And once I get a sense that this is actually starting to plateau off, people like to do, as I mentioned, this learning rate decay. So let's try to decay the loss — the learning rate, I mean — and we achieve about 2.3 now. Obviously this is janky and not exactly how you'd train it in production, but this is roughly what you go through: you first find a decent learning rate using the approach that I showed you, then you start with that learning rate and you train for a while, and then at the end people like to do a learning rate decay, where you decay the learning rate by, say, a factor of 10 and you do a few more steps, and then you get a trained network, roughly speaking. So we've achieved 2.3 and dramatically improved on the bigram language model using this simple neural net, as described here, using these 3,400 parameters.

Now there's something we have to be careful with. I said that we have a better model because we are achieving a lower loss, 2.3, much lower than 2.45 with the bigram model previously. Now, that's not exactly true, and the reason it's not true is that this is actually a fairly small model. But these models can get larger and larger if you keep adding neurons and parameters. So you can imagine that we might not have just a few thousand parameters; we could have 10,000 or 100,000 or millions of parameters. And as the capacity of the neural network grows, it becomes more and more capable of overfitting your training set. What that means is that the loss on the training set, on the data that you're training on, will become very, very low, as low as zero, but all that the model is doing is memorizing your training set verbatim. So if you take that model, and it looks like it's working really well, but you try to sample from it, you will basically only get examples exactly as they are in the training set; you won't get any new data. In addition to that, if you try to evaluate the loss on some withheld names or other words, you will actually see that the loss on those can be very high, and so basically it's not a good model. So the standard in the field is to split up your dataset into three splits, as we call them: we have the training split, the dev split or the validation split, and the test split. So, training split, then dev or validation split, and test split. And typically this would be, say, 80% of your dataset, this could be 10%, and this 10%, roughly. So you have these three splits of the data.
Now, this 80% of the dataset, the training set, is used to optimize the parameters of the model, just like we're doing here, using gradient descent. These 10% of the examples, the dev or validation split, are used for development of all the hyperparameters of your model. So hyperparameters are, for example, the size of this hidden layer, the size of the embedding — so this is 100 or 2 for us, but we could try different things — the strength of the regularization, which we aren't using yet so far, and so on. So there are lots of different hyperparameters and settings that go into defining a neural net, and you can try many different variations of them and see whichever one works best on your validation split. So the training split is used to train the parameters, the dev split is used to tune the hyperparameters, and the test split is used to evaluate the performance of the model at the end. So we're only evaluating the loss on the test split very, very sparingly and very few times, because every single time you evaluate your test loss and you learn something from it, you are basically starting to also train on the test split. So you are only allowed to evaluate the loss on the test set very, very few times; otherwise you risk overfitting to it as well as you experiment on your model.

So let's also split up our training data into train, dev, and test, and then we are going to train on train and only evaluate on test very, very sparingly. Okay, so here we go. Here is where we took all the words and put them into X and Y tensors. So instead, let me create a new cell here and let me just copy-paste some code here, because I don't think it's that complex, but we're going to try to save a little bit of time. I'm converting this to be a function now, and this function takes some list of words and builds the arrays X and Y for those words only. And then here I am shuffling up all the words. So these are the input words that we get; we are randomly shuffling them all up. And then we're going to set n1 to be the number of examples that is 80% of the words, and n2 to be 90% of the words. So basically if the length of words is about 32,000 — and I should probably run this — n1 is about 25,000 and n2 is about 28,000. And so here we see that I'm calling build_dataset to build the training set X and Y by indexing up to n1. So we're going to have only 25,000 training words, and then we're going to have roughly n2 minus n1, about 3,000, validation or dev examples, and we're going to have length of words minus n2, or roughly 3,200, examples here for the test set. So now we have Xs and Ys for all those three splits. Oh yeah, I'm printing their size here inside the function as well. But here we don't have words; these are already the individual examples made from those words.

So let's now scroll down here, and the dataset for training is now more like this. And then when we reset the network, when we're training, we're only going to be training using Xtrain and Ytrain. So that's the only thing we're training on. Let's see where we are on a single batch. Let's now train maybe a few more steps. Training neural networks can take a while. Usually you don't do it inline; you launch a bunch of jobs and you wait for them to finish. It can take multiple days, and so on. Luckily this is a very small network. Okay, so the loss is pretty good. Oh, we accidentally used a learning rate that is way too low. So let me actually come back: we used the decayed learning rate of 0.01, so this will train faster.
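The split construction described above, as a sketch; the random seed is my addition, and block_size, stoi, and words are carried over from the earlier sketches.

```python
import random

def build_dataset(words):
    # same rolling-context construction as before, but for an arbitrary list of words
    X, Y = [], []
    for w in words:
        context = [0] * block_size
        for ch in w + '.':
            ix = stoi[ch]
            X.append(context)
            Y.append(ix)
            context = context[1:] + [ix]
    X, Y = torch.tensor(X), torch.tensor(Y)
    print(X.shape, Y.shape)
    return X, Y

random.seed(42)                 # assumed seed, for reproducibility
random.shuffle(words)
n1 = int(0.8 * len(words))
n2 = int(0.9 * len(words))
Xtr,  Ytr  = build_dataset(words[:n1])      # 80% training split
Xdev, Ydev = build_dataset(words[n1:n2])    # 10% dev / validation split
Xte,  Yte  = build_dataset(words[n2:])      # 10% test split
```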
And then here, when we evaluate, let's use the dev set, Xdev and Ydev, to evaluate the loss. Okay, and let's not decay the learning rate, and only do, say, 10,000 steps, and let's evaluate the dev loss once here. Okay, so we're getting about 2.3 on dev. And so the neural network, when it was training, did not see these dev examples; it hasn't optimized on them. And yet, when we evaluate the loss on this dev set, we actually get a pretty decent loss. And so we can also look at what the loss is on the whole training set. Oops. And we see that the training and the dev loss are about equal, so we're not overfitting. This model is not powerful enough to just be purely memorizing the data, and so far we are what's called underfitting, because the training loss and the dev or test losses are roughly equal. What that typically means is that our network is very tiny, very small, and we expect to make performance improvements by scaling up the size of this neural net.

So let's do that now. Let's come over here and let's increase the size of the neural net. The easiest way to do this is we can come here to the hidden layer, which currently has 100 neurons, and let's just bump this up. So let's do 300 neurons, and then this is also 300 biases, and here we have 300 inputs into the final layer. So let's initialize our neural net; we now have about 10,000 parameters instead of 3,000 parameters. And then we're not using this, and then here what I'd like to do is actually keep track of stats. Okay, let's just do this: let's keep stats again, and here, when we're keeping track of the loss, let's also keep track of the steps, and let's just have i here. And let's train on, say — okay, let's try 30,000 — and we are at 0.1, and we should be able to run this and optimize the neural net. And then here, basically, I want to plt.plot the steps against the losses. So these are the x's and the y's, and this is the loss function and how it's being optimized. Now you see that there's quite a bit of thickness to this, and that's because we are optimizing over these mini-batches, and the mini-batches create a little bit of noise in this. Where are we on the dev set? We are at 2.5. So we still haven't optimized this neural net very well, and that's probably because we made it bigger; it might take longer for this neural net to converge. And so let's continue training. Yeah, let's just continue training. One possibility is that the batch size is so low that we just have way too much noise in the training, and we may want to increase the batch size so that we have a bit more correct gradient, and we're not thrashing too much, and we can actually optimize more properly. Okay. This will now become meaningless, because we've re-initialized these. So yeah, this looks like a tiny improvement, but it's so hard to tell. Let's go again. 2.52. Let's try to decrease the learning rate by a factor of two. Okay, we're at 2.32. Let's continue training. We basically expect to see a lower loss than what we had before, because now we have a much, much bigger model, and we were underfitting, so we'd expect that increasing the size of the model should help the neural net. 2.32. Okay, so that's not happening too well. Now, one other concern is that, even though we've made the tanh layer here, or the hidden layer, much, much bigger, it could be that the bottleneck of the network right now is these embeddings that are two-dimensional.
It could be that we're just cramming way too many characters into just two dimensions and the neural net is not able to really use that space effectively, and that is sort of the bottleneck to our network's performance. Okay, 2.23. So just by decreasing the learning rate, I was able to make quite a bit of progress. Let's run this one more time and then evaluate the training and the dev loss. Now, one more thing I'd like to do after training is to visualize the embedding vectors for these characters before we scale up the embedding size from 2, because we'd like to make this bottleneck potentially go away, but once I make it greater than two, we won't be able to visualize them. So here — okay, we're at 2.23 and 2.24, so we're not improving much more, and maybe the bottleneck now is the character embedding size, which is two. Here I have a bunch of code that will create a figure, and then we're going to visualize the embeddings that were trained by the neural net for these characters. Because right now the embedding size is just two, we can visualize all the characters with the x and the y coordinates as the two embedding locations for each of these characters. So here are the x coordinates and the y coordinates, which are the columns of C, and for each one I also include the text of the little character. What we see is actually kind of interesting: the network has basically learned to separate out the characters and cluster them a little bit. For example, you see how the vowels A, E, I, O, U are clustered up here. What that's telling us is that the neural net treats these as very similar, right? Because when they feed into the neural net, the embedding for all these characters is very similar, so the neural net thinks they're very similar and kind of interchangeable. And that makes sense. Then the points that are really far away are, for example, Q: Q is kind of treated as an exception, and Q has a very special embedding vector, so to speak. Similarly, dot, which is a special character, is all the way out here, and a lot of the other letters are sort of clustered up here. So it's kind of interesting that there's a little bit of structure here after the training; it's definitely not random, and these embeddings make sense. So we're now going to scale up the embedding size, and we won't be able to visualize it directly. We expect that, because we're underfitting and we made this layer much bigger and did not sufficiently improve the loss, the constraint to better performance right now could be these embedding vectors. So let's make them bigger. Okay, so let's scroll up here. Now we don't have two-dimensional embeddings; we are going to have, say, 10-dimensional embeddings for each character. Then this layer will receive three times 10, so 30 inputs will go into the hidden layer. Let's also make the hidden layer a bit smaller: instead of 300, let's just do 200 neurons in that hidden layer. So now the total number of parameters will be slightly bigger, at about 11,000. And then here we have to be a bit careful because, okay, the learning rate we set to 0.1, and here we are hard-coding a six. Obviously if you're working in production you don't want to be hard-coding magic numbers, but instead of six this should now be 30.
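A small sketch of that visualization, assuming the 2-dimensional embedding matrix C and the itos integer-to-character mapping from earlier:

import matplotlib.pyplot as plt

# scatter every character's 2-D embedding and label each point with its character
plt.figure(figsize=(8, 8))
plt.scatter(C[:, 0].data, C[:, 1].data, s=200)
for i in range(C.shape[0]):
    plt.text(C[i, 0].item(), C[i, 1].item(), itos[i],
             ha="center", va="center", color="white")
plt.grid(True)
plt.show()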
Let's run for 50,000 iterations, and let me split out the initialization here outside so that when we run this cell multiple times it's not going to wipe out our loss tracking. In addition, instead of logging loss.item(), let's actually log the log10 of the loss — I believe that's a function on the tensor — and I'll show you why in a second; let's optimize this. Basically, I'd like to plot the log loss instead of the loss, because when you plot the loss it can often have this hockey-stick appearance, and the log squashes it in, so it just kind of looks nicer. So the x-axis is step i and the y-axis will be loss i. And then here this is 30; ideally we wouldn't be hard-coding these. Let's look at the loss. Okay, it's again very thick because the minibatch size is very small, but the total loss over the training set is 2.3, and the test — or rather the dev set — is 2.3 as well. So far so good. Let's now try to decrease the learning rate by a factor of 10 and train for another 50,000 iterations. We'd hope to be able to beat 2.3, but again we're just doing this very haphazardly, so I don't actually have confidence that our learning rate is set very well, or that our learning rate decay, which we just did at random, is set very well. So the optimization here is kind of suspect, to be honest, and this is not how you would typically do it in production. In production, you would make parameters or hyperparameters out of all of these settings, and then you would run lots of experiments and see whichever ones are working well for you. Okay, so we have 2.17 now and 2.2. You see how the training and the validation performance are starting to slowly depart, so maybe we're getting the sense that the neural net is good enough, or that the number of parameters is large enough, that we are slowly starting to overfit. Let's maybe run one more iteration of this and see where we get. But yeah, basically you would be running lots of experiments and then slowly scrutinizing whichever ones give you the best dev performance. And then once you find all the hyperparameters that make your dev performance good, you take that model and evaluate the test set performance a single time, and that's the number that you report in your paper or wherever else you want to talk about and brag about your model. So let's rerun the plot and rerun the train and dev evaluation. And because we're getting a lower loss now, it is very likely the case that the embedding size was holding us back. Okay, so 2.16 and 2.19 is roughly what we're getting. So there are many ways to go from here. We can continue tuning the optimization, we can continue playing with the size of the neural net, or we can increase the number of words — or characters, in our case — that we are taking as input. So instead of just three characters, we could take more characters as input, and that could further improve the loss. Okay, so I changed the code slightly: we now have 200,000 steps of the optimization, and in the first 100,000 we're using a learning rate of 0.1, and then in the next 100,000 we're using a learning rate of 0.01. This is the loss that I achieved, and this is the performance on the training and validation loss. In particular, the best validation loss I've been able to obtain in the last 30 minutes or so is 2.17. So now I invite you to beat this number.
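A sketch of this final configuration — 10-dimensional embeddings, 200 hidden neurons, 200,000 steps with a stepped learning-rate decay, and log10 loss tracking — again assuming the Xtr/Ytr tensors from the split above, with details that may differ from the notebook:

import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt

g  = torch.Generator().manual_seed(2147483647)
C  = torch.randn((27, 10), generator=g)        # 10-dimensional embeddings now
W1 = torch.randn((3 * 10, 200), generator=g)   # 30 inputs into a 200-neuron hidden layer
b1 = torch.randn(200, generator=g)
W2 = torch.randn((200, 27), generator=g)
b2 = torch.randn(27, generator=g)
parameters = [C, W1, b1, W2, b2]
for p in parameters:
    p.requires_grad = True

stepi, lossi = [], []
for i in range(200000):
    ix = torch.randint(0, Xtr.shape[0], (32,))
    emb = C[Xtr[ix]]                            # (32, 3, 10)
    h = torch.tanh(emb.view(-1, 30) @ W1 + b1)  # emb.view(emb.shape[0], -1) avoids the hard-coded 30
    logits = h @ W2 + b2
    loss = F.cross_entropy(logits, Ytr[ix])
    for p in parameters:
        p.grad = None
    loss.backward()
    lr = 0.1 if i < 100000 else 0.01            # decay the learning rate halfway through
    for p in parameters:
        p.data += -lr * p.grad
    stepi.append(i)
    lossi.append(loss.log10().item())           # log10 squashes the hockey-stick shape

plt.plot(stepi, lossi)
plt.show()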
And you have quite a few knobs available to you to I think surpass this number. So number one, you can of course change the number of neurons in the hidden layer of this model. You can change the dimensionality of the embedding lookup table. You can change the number of characters that are feeding in as an input, as the context into this model. And then of course, you can change the details of the optimization. How long are we running? What is the learning rate? How does it change over time? How does it decay? You can change the batch size and you may be able to actually achieve a much better convergence speed in terms of how many seconds or minutes it takes to train the model and get your result in terms of really good loss. And then of course, I actually invite you to read this paper. It is 19 pages, but at this point you should actually be able to read a good chunk of this paper and understand pretty good chunks of it. And this paper also has quite a few ideas for improvements that you can play with. So all of those are not available to you and you should be able to beat this number. I'm leaving that as an exercise to the reader and that's it for now and I'll see you next time. Before we wrap up, I also wanted to show how you would sample from the model. So we're going to generate 20 samples. At first we begin with all dots. So that's the context. And then until we generate the zeroed character again, we're going to embed the current context using the embedding table C. Now usually here, the first dimension was the size of the training set, but here we're only working with a single example that we're generating. So this is just the mission one, just for simplicity. And so this embedding then gets projected into the state. You get the logits. Now we calculate the probabilities. For that, you can use f dot softmax of logits. And that just basically exponentially is the logits and makes them sum to one. And similar to cross entropy, it is careful that there's no overflows. Once we have the probabilities, we sample from them using torshot multinomial to get our next index. And then we shift the context window to append the index and record it. And then we can just decode all the integers to strings and print them out. And so these are some example samples. And you can see that the model now works much better. So the words here are much more word like or name like. So we have things like ham, joes, lele, it started to sound a little bit more name like. So we're definitely making progress, but we can still improve on this model quite a lot. Okay, sorry, there's some bonus content. I wanted to mention that I want to make these notebooks more accessible. And so I don't want you to have to like install your bare notebooks and torture everything else. So I will be sharing a link to Google collab. And the Google collab will look like a notebook in your browser. And you can just go to URL and you'll be able to execute all of the code that you saw in the Google collab. And so this is me executing the code in this lecture. And I shortened it a little bit. But basically you're able to train the exact same network and then plot and sample from the model. And everything is ready for you to like tinker with the numbers right there in your browser. No installation necessary. So I just wanted to point that out and the link to this will be in the video description.
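As referenced above, here is a rough sketch of that sampling loop, assuming the trained parameters C, W1, b1, W2, b2 and the itos mapping (illustrative, not necessarily the exact notebook code):

import torch
import torch.nn.functional as F

g = torch.Generator().manual_seed(2147483647 + 10)
block_size = 3

for _ in range(20):
    out = []
    context = [0] * block_size                    # start with all '.' tokens
    while True:
        emb = C[torch.tensor([context])]          # (1, block_size, d): a "batch" of one example
        h = torch.tanh(emb.view(1, -1) @ W1 + b1)
        logits = h @ W2 + b2
        probs = F.softmax(logits, dim=1)          # exponentiate and normalize, overflow-safe
        ix = torch.multinomial(probs, num_samples=1, generator=g).item()
        context = context[1:] + [ix]              # shift the context window
        out.append(ix)
        if ix == 0:                               # index 0 is the '.' end token
            break
    print(''.join(itos[i] for i in out))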
[{"start": 0.0, "end": 5.94, "text": " Hi everyone. Today we are continuing our implementation of Makemore. Now in the last"}, {"start": 5.94, "end": 9.32, "text": " lecture we implemented the bi-gram language model and we implemented it both"}, {"start": 9.32, "end": 13.76, "text": " using counts and also using a super simple neural network that has single"}, {"start": 13.76, "end": 20.04, "text": " linear layer. Now this is the Jupyter Notebook that we built out last lecture and"}, {"start": 20.04, "end": 23.76, "text": " we saw that the way we approached this is that we looked at only the single"}, {"start": 23.76, "end": 27.64, "text": " previous character and we predicted the distribution for the character that would"}, {"start": 27.64, "end": 31.92, "text": " go next in the sequence and we did that by taking counts and normalizing them"}, {"start": 31.92, "end": 38.24, "text": " into probabilities so that each row here sums to 1. Now this is all well and good"}, {"start": 38.24, "end": 42.760000000000005, "text": " if you only have one character of previous context and this works and it's"}, {"start": 42.760000000000005, "end": 47.84, "text": " approachable. The problem with this model of course is that the predictions from"}, {"start": 47.84, "end": 51.88, "text": " this model are not very good because you only take one character of context so"}, {"start": 51.88, "end": 57.56, "text": " the model didn't produce very name like sounding things. Now the problem with"}, {"start": 57.56, "end": 61.580000000000005, "text": " this approach though is that if we are to take more context into account when"}, {"start": 61.580000000000005, "end": 65.12, "text": " predicting the next character in a sequence things quickly blow up and this"}, {"start": 65.12, "end": 69.68, "text": " table the size of this table grows and in fact it grows exponentially with the"}, {"start": 69.68, "end": 73.52000000000001, "text": " length of the context because if we only take a single character at a time that's"}, {"start": 73.52000000000001, "end": 78.6, "text": " 27 possibilities of context but if we take two characters in the past and try to"}, {"start": 78.6, "end": 83.04, "text": " predict the third one suddenly the number of rows in this matrix you can look at it"}, {"start": 83.04, "end": 88.56, "text": " that way is 27 times 27 so there's 729 possibilities for what could have come in"}, {"start": 88.56, "end": 94.84, "text": " the context. If we take three characters as the context suddenly we have 20"}, {"start": 94.84, "end": 100.4, "text": " thousand possibilities of context and so there's just way too many rows of this"}, {"start": 100.4, "end": 105.84, "text": " matrix it's way too few counts for each possibility and the whole thing just"}, {"start": 105.84, "end": 110.32000000000001, "text": " kind of explodes and doesn't work very well. So that's why today we're going to"}, {"start": 110.32, "end": 113.88, "text": " move on to this bullet point here and we're going to implement a multi-layer"}, {"start": 113.88, "end": 119.75999999999999, "text": " perceptron model to predict the next character in a sequence and this modeling"}, {"start": 119.75999999999999, "end": 124.91999999999999, "text": " approach that we're going to adopt follows this paper Benjue et al. 2003 so I have"}, {"start": 124.91999999999999, "end": 129.0, "text": " the paper pulled up here. 
Now this isn't the very first paper that proposed the"}, {"start": 129.0, "end": 132.6, "text": " use of multi-layer perceptrons or neural networks to predict the next"}, {"start": 132.6, "end": 136.84, "text": " character or token in a sequence but it's definitely one that is was very"}, {"start": 136.84, "end": 140.28, "text": " influential around that time it is very often cited to stand in for this"}, {"start": 140.28, "end": 144.08, "text": " idea and I think it's a very nice write-up and so this is the paper that we're"}, {"start": 144.08, "end": 148.92000000000002, "text": " going to first look at and then implement. Now this paper has 19 pages so we don't"}, {"start": 148.92000000000002, "end": 152.68, "text": " have time to go into the full detail of this paper but I invite you to read it"}, {"start": 152.68, "end": 156.12, "text": " it's very readable interesting and has a lot of interesting ideas in it as"}, {"start": 156.12, "end": 159.68, "text": " well. In the introduction they described the exact same problem I just"}, {"start": 159.68, "end": 164.64, "text": " described and then to address it they proposed the following model. Now keep in"}, {"start": 164.64, "end": 168.72, "text": " mind that we are building a character level language model so we're working on"}, {"start": 168.72, "end": 173.52, "text": " the level of characters. In this paper we have a vocabulary of 17,000 possible"}, {"start": 173.52, "end": 177.64, "text": " words and they instead build a word level language model but we're going to"}, {"start": 177.64, "end": 181.64, "text": " still stick with the characters but we'll take the same modeling approach. Now"}, {"start": 181.64, "end": 186.24, "text": " what they do is basically they propose to take every one of these words 17,000"}, {"start": 186.24, "end": 191.52, "text": " words and they're going to associate to each word a say 30-dimensional feature"}, {"start": 191.52, "end": 198.28, "text": " vector. So every word is now embedded into a 30-dimensional space you can think"}, {"start": 198.28, "end": 203.64000000000001, "text": " of it that way. So we have 17,000 points or vectors in a 30-dimensional space and"}, {"start": 203.64000000000001, "end": 207.48, "text": " that's you might imagine that's very crowded that's a lot of points for a"}, {"start": 207.48, "end": 211.32, "text": " very small space. Now in the beginning these words are"}, {"start": 211.32, "end": 215.52, "text": " initialized completely randomly so there's pride out that random but then we're"}, {"start": 215.52, "end": 220.36, "text": " going to tune these embeddings of these words using that propagation. So during"}, {"start": 220.36, "end": 223.48, "text": " the course of training of this neural network these points or vectors are"}, {"start": 223.48, "end": 227.4, "text": " going to basically move around in this space and you might imagine that for example"}, {"start": 227.4, "end": 231.24, "text": " words that have very similar meanings or there are indeed synonyms of each"}, {"start": 231.24, "end": 235.16, "text": " other might end up in a very similar part of the space and conversely words"}, {"start": 235.16, "end": 239.96, "text": " that mean very different things would go somewhere else in the space. Now their"}, {"start": 239.96, "end": 244.0, "text": " modeling approach otherwise is identical to ours. 
They are using a multi-linear"}, {"start": 244.0, "end": 248.32, "text": " neural network to predict the next word given the previous words and to train"}, {"start": 248.32, "end": 251.12, "text": " the neural network they are maximizing the log-black limit of the training"}, {"start": 251.12, "end": 256.32, "text": " data just like we did. So the modeling approach itself is identical. Now here they"}, {"start": 256.32, "end": 261.48, "text": " have a concrete example of this intuition. Why does it work? Basically suppose that"}, {"start": 261.48, "end": 266.32, "text": " for example you are trying to predict a dog was running in a blank. Now suppose"}, {"start": 266.32, "end": 271.15999999999997, "text": " that the exact phrase a dog was running in a has never occurred in a training"}, {"start": 271.15999999999997, "end": 275.52, "text": " data and here you are at sort of test time later when the model is deployed"}, {"start": 275.52, "end": 280.08, "text": " somewhere and it's trying to make a sentence and it's saying dog was running in"}, {"start": 280.08, "end": 284.68, "text": " a blank and because it's never encountered this exact phrase in the training"}, {"start": 284.68, "end": 288.96, "text": " set you're out of distribution as we say. Like you don't have fundamentally any"}, {"start": 288.96, "end": 295.96, "text": " reason to suspect what might come next but this approach actually allows you to"}, {"start": 295.96, "end": 299.44, "text": " get around that because maybe you didn't see the exact phrase a dog was running"}, {"start": 299.44, "end": 303.24, "text": " in a something but maybe you've seen similar phrases maybe you've seen the"}, {"start": 303.24, "end": 307.88, "text": " phrase the dog was running in a blank and maybe your network has learned that a"}, {"start": 307.88, "end": 312.56, "text": " and the are like frequently are interchangeable with each other and so maybe it"}, {"start": 312.56, "end": 316.52, "text": " took the embedding for a and the embedding for the and it actually put them"}, {"start": 316.52, "end": 320.52, "text": " like nearby each other in the space and so you can transfer knowledge through"}, {"start": 320.52, "end": 324.68, "text": " that embedding and you can generalize in that way. Similarly the network could"}, {"start": 324.68, "end": 328.72, "text": " know that cats and dogs are animals and they co-occur in lots of very similar"}, {"start": 328.72, "end": 333.32, "text": " contexts and so even though you haven't seen this exact phrase or if you haven't"}, {"start": 333.32, "end": 338.12, "text": " seen exactly walking or running you can through the embedding space transfer"}, {"start": 338.12, "end": 343.28000000000003, "text": " knowledge and you can generalize to novel scenarios. So let's now scroll down to"}, {"start": 343.28000000000003, "end": 348.08, "text": " the diagram of the neural network they have a nice diagram here and in this"}, {"start": 348.08, "end": 352.88, "text": " example we are taking three previous words and we are trying to predict the"}, {"start": 352.88, "end": 359.2, "text": " fourth word in a sequence. Now these three previous words as I mentioned we have"}, {"start": 359.2, "end": 366.36, "text": " a vocabulary of 17,000 possible words so every one of these basically are the"}, {"start": 366.36, "end": 372.72, "text": " index of the incoming word and because there are 17,000 words this is an integer"}, {"start": 372.72, "end": 381.28000000000003, "text": " between 0 and 16,999. 
Now there's also a lookup table that they call C. This"}, {"start": 381.28000000000003, "end": 386.88, "text": " lookup table is a matrix that is 17,000 by say 30 and basically what we're"}, {"start": 386.88, "end": 391.44, "text": " doing here is we're treating this as a lookup table and so every index is"}, {"start": 391.44, "end": 397.12, "text": " plucking out a row of this embedding matrix so that each index is converted"}, {"start": 397.12, "end": 401.4, "text": " to the 30-dimensional vector that corresponds to the embedding vector for that"}, {"start": 401.4, "end": 408.32, "text": " word. So here we have the input layer of 30 neurons for three words making up"}, {"start": 408.32, "end": 413.28, "text": " 90 neurons in total and here they're saying that this matrix C is shared"}, {"start": 413.28, "end": 417.52, "text": " across all the words so we're always indexing it to the same matrix C over and"}, {"start": 417.52, "end": 423.88, "text": " over for each one of these words. Next up is the hidden layer of this neural"}, {"start": 423.88, "end": 428.32, "text": " network. The size of this hidden neural layer of this neural net is a hop"}, {"start": 428.32, "end": 431.68, "text": " parameter. So we use the word hyper parameter when it's kind of like a design"}, {"start": 431.68, "end": 435.59999999999997, "text": " choice up to the designer of the neural net and this can be as large as you'd"}, {"start": 435.59999999999997, "end": 439.76, "text": " like or as small as you'd like so for example the size could be a hundred and we"}, {"start": 439.76, "end": 443.64, "text": " are going to go over multiple choices of the size of this hidden layer and we're"}, {"start": 443.64, "end": 447.76, "text": " going to evaluate how well they work. So say there were a hundred neurons here"}, {"start": 447.76, "end": 454.24, "text": " all of them would be fully connected to the 90 words or 90 numbers that make up"}, {"start": 454.24, "end": 458.96, "text": " these three words. So this is a fully connected layer and there's a 10-inch"}, {"start": 458.96, "end": 463.8, "text": " long linearity and then there's this output layer and because our 17,000"}, {"start": 463.8, "end": 469.28, "text": " possible words that could come next this layer has 17,000 neurons and all of"}, {"start": 469.28, "end": 475.23999999999995, "text": " them are fully connected to all of these neurons in the hidden layer. So there's"}, {"start": 475.23999999999995, "end": 479.35999999999996, "text": " a lot of parameters here because there's a lot of words so most computation is"}, {"start": 479.35999999999996, "end": 485.23999999999995, "text": " here. This is the expensive layer. Now there are 17,000 logits here so on top of"}, {"start": 485.23999999999995, "end": 488.71999999999997, "text": " there we have the softmax layer which we've seen in our previous video as"}, {"start": 488.71999999999997, "end": 492.84, "text": " well. So every one of these logits is expedited and then everything is"}, {"start": 492.84, "end": 497.28, "text": " normalized to sum to one so that we have a nice probability distribution for"}, {"start": 497.28, "end": 502.03999999999996, "text": " the next word in the sequence. Now of course during training we actually have"}, {"start": 502.03999999999996, "end": 507.28, "text": " the label. We have the identity of the next word in the sequence. 
That word or"}, {"start": 507.28, "end": 513.4399999999999, "text": " its index is used to pluck out the probability of that word and then we are"}, {"start": 513.4399999999999, "end": 518.8399999999999, "text": " maximizing the probability of that word with respect to the parameters of this"}, {"start": 518.8399999999999, "end": 523.6, "text": " neural net. So the parameters are the weights and biases of this output layer,"}, {"start": 523.6, "end": 529.08, "text": " the weights and biases of this in the layer and the embedding lookup table C and"}, {"start": 529.08, "end": 534.28, "text": " all of that is optimized using backpropagation and these dashed arrows"}, {"start": 534.28, "end": 538.44, "text": " ignore those. That represents a variation of a neural net that we are not going"}, {"start": 538.44, "end": 543.0400000000001, "text": " to explore in this video. So that's the setup and now let's implement it. Okay so I"}, {"start": 543.0400000000001, "end": 547.8000000000001, "text": " started a brand new notebook for this lecture. We are importing by torch and we"}, {"start": 547.8000000000001, "end": 552.0, "text": " are importing matplotlibs so we can create figures. Then I am reading all the"}, {"start": 552.0, "end": 556.32, "text": " names into a list of words like I did before and I'm showing the first eight"}, {"start": 556.32, "end": 561.88, "text": " right here. Keep in mind that we have a 32,000 in total. These are just the first"}, {"start": 561.88, "end": 565.8, "text": " eight and then here I'm building out the vocabulary of characters and all the"}, {"start": 565.8, "end": 571.76, "text": " mappings from the characters as strings to integers and vice versa. Now the"}, {"start": 571.76, "end": 574.76, "text": " first thing we want to do is we want to compile the dataset for the neural"}, {"start": 574.76, "end": 579.12, "text": " network and I had to rewrite this code. I'll show you in a second what it looks"}, {"start": 579.12, "end": 584.8, "text": " like. So this is the code that I created for the dataset creation so let me first"}, {"start": 584.8, "end": 589.6, "text": " run it and then I'll briefly explain how this works. So first we're going to"}, {"start": 589.6, "end": 593.92, "text": " define something called block size and this is basically the context length of"}, {"start": 593.92, "end": 598.04, "text": " how many characters do we take to predict the next one. So here in this example"}, {"start": 598.04, "end": 601.96, "text": " we're taking three characters to predict the fourth one so we have a block size"}, {"start": 601.96, "end": 607.2, "text": " of three. That's the size of the block that supports the prediction. Then here"}, {"start": 607.2, "end": 613.08, "text": " I'm building out the x and y. The x are the input to the neural net and the y"}, {"start": 613.08, "end": 619.44, "text": " are the labels for each example inside x. Then I'm area over the first five"}, {"start": 619.44, "end": 623.32, "text": " words. I'm doing first five just four efficiency while we are developing all"}, {"start": 623.32, "end": 627.0, "text": " the code but then later we're going to come here and erase this so that we use"}, {"start": 627.0, "end": 632.76, "text": " the entire training set. So here I'm printing the word m up and here I'm"}, {"start": 632.76, "end": 636.8000000000001, "text": " basically showing the examples that we can generate the five examples that we"}, {"start": 636.8, "end": 643.04, "text": " can generate out of the single sort of word m up. 
So when we are given the"}, {"start": 643.04, "end": 648.12, "text": " context of just dot dot dot the first character in a sequence is E in this"}, {"start": 648.12, "end": 654.8399999999999, "text": " context the label SM when the context is this the label SM and so forth. And so"}, {"start": 654.8399999999999, "end": 658.0799999999999, "text": " the way I build this out is first I start with a padded context of just zero"}, {"start": 658.0799999999999, "end": 663.5999999999999, "text": " tokens. Then I iterate over all the characters I get the character in the"}, {"start": 663.6, "end": 668.88, "text": " sequence and I basically build out the array y of this current character and the"}, {"start": 668.88, "end": 673.16, "text": " array x which stores the current running context. And then here see I print"}, {"start": 673.16, "end": 678.08, "text": " everything and here I crop the context and enter the new character in a"}, {"start": 678.08, "end": 683.48, "text": " sequence. So this is kind of like a roll in the window of context. Now we can change"}, {"start": 683.48, "end": 687.36, "text": " the block size here to for example four. And in that case we would be predicting"}, {"start": 687.36, "end": 692.44, "text": " the fifth character given the previous four or it can be five and then it would"}, {"start": 692.44, "end": 698.0400000000001, "text": " look like this or it can be say 10 and then it would look something like this."}, {"start": 698.0400000000001, "end": 702.12, "text": " We're taking 10 characters to predict the 11th one and we're always padding"}, {"start": 702.12, "end": 707.9200000000001, "text": " with dots. So let me bring this back to three just so that we have what we have"}, {"start": 707.9200000000001, "end": 713.84, "text": " here in the paper. And finally the data set right now looks as follows. From"}, {"start": 713.84, "end": 719.2, "text": " these five words we have created a data set of 32 examples and each input"}, {"start": 719.2, "end": 723.0, "text": " is a neural net is three integers and we have a label that is also an integer"}, {"start": 723.0, "end": 730.32, "text": " y. So x looks like this. These are the individual examples and then y are the"}, {"start": 730.32, "end": 738.1600000000001, "text": " labels. So given this let's now write a neural network that takes these x's"}, {"start": 738.1600000000001, "end": 743.88, "text": " and predicts to y's. First let's build the embedding lookup table C. So we have"}, {"start": 743.88, "end": 747.48, "text": " 27 possible characters and we're going to embed them in a lower dimensional"}, {"start": 747.48, "end": 753.64, "text": " space. In the paper they have 17,000 words and they embed them in spaces as"}, {"start": 753.64, "end": 760.04, "text": " small dimensional as 30. So they cram 17,000 words into 30 dimensional space."}, {"start": 760.04, "end": 764.44, "text": " In our case we have only 27 possible characters. So let's cram them in"}, {"start": 764.44, "end": 769.04, "text": " something as small as to start with for example a two dimensional space. So this"}, {"start": 769.04, "end": 774.52, "text": " lookup table will be random numbers and we'll have 27 rows and we'll have two"}, {"start": 774.52, "end": 780.4, "text": " columns. Right so each 20 each one of 27 characters will have a two-dimensional"}, {"start": 780.4, "end": 786.0799999999999, "text": " embedding. So that's our matrix C of embeddings in the beginning"}, {"start": 786.0799999999999, "end": 791.0, "text": " initialized randomly. 
Now before we embed all of the integers inside the input"}, {"start": 791.0, "end": 796.4399999999999, "text": " x using this lookup table C let me actually just try to embed a single"}, {"start": 796.4399999999999, "end": 803.12, "text": " individual integer like say five. So we get a sense of how this works. Now one"}, {"start": 803.12, "end": 806.96, "text": " way this works of course is we can just take the C and we can index into row five"}, {"start": 806.96, "end": 815.08, "text": " and that gives us a vector the fifth row of C and this is one way to do it. The"}, {"start": 815.08, "end": 818.76, "text": " other way that I presented in the previous lecture is actually seemingly"}, {"start": 818.76, "end": 822.44, "text": " different but actually identical. So in the previous lecture what we did is we"}, {"start": 822.44, "end": 827.24, "text": " took these integers and we used the one-hot encoding to first encode them. So"}, {"start": 827.24, "end": 832.04, "text": " if that one hot we want to encode integer five and we want to tell it that"}, {"start": 832.04, "end": 836.4, "text": " their number of classes is 27. So that's the 26-dimensional vector of all"}, {"start": 836.4, "end": 843.48, "text": " zeros except the fifth bit is turned on. Now this actually doesn't work. The"}, {"start": 843.48, "end": 848.64, "text": " reason is that this input actually must be a two-shot tensor. And I'm making"}, {"start": 848.64, "end": 851.64, "text": " some of these errors intentionally just so you get to see some errors and how to"}, {"start": 851.64, "end": 856.52, "text": " fix them. So this must be a tensor not an int, fairly straightforward to fix. We"}, {"start": 856.52, "end": 861.12, "text": " get a one-hot vector. The fifth dimension is one and the shape of this is 27."}, {"start": 861.12, "end": 866.88, "text": " And now notice that just as I briefly alluded to in a previous video if we take"}, {"start": 866.88, "end": 876.64, "text": " this one-hot vector and we multiply it by C then what would you expect?"}, {"start": 876.64, "end": 884.72, "text": " Well number one first you'd expect an error because expected scalar type"}, {"start": 884.72, "end": 889.76, "text": " long but found float. So a little bit confusing but the problem here is that one"}, {"start": 889.76, "end": 897.04, "text": " hot the data type of it is long. It's a 64-bit integer but this is a float"}, {"start": 897.04, "end": 902.04, "text": " tensor. And so PyTorch doesn't know how to multiply an int with a float and that's"}, {"start": 902.04, "end": 907.2, "text": " why we had to explicitly cast this to a float so that we can multiply. Now the"}, {"start": 907.2, "end": 913.2, "text": " output actually here is identical and that it's identical because of the way the"}, {"start": 913.2, "end": 918.4399999999999, "text": " matrix multiplication here works. We have the one-hot vector multiplying columns"}, {"start": 918.44, "end": 923.8000000000001, "text": " of C and because of all the zeros they actually end up masking out everything in"}, {"start": 923.8000000000001, "end": 928.72, "text": " C except for the fifth row which is blocked out. And so we actually arrive at the"}, {"start": 928.72, "end": 934.12, "text": " same result and that tells you that here we can interpret this first piece here"}, {"start": 934.12, "end": 938.24, "text": " this embedding of the integer. 
We can either think of it as the integer indexing"}, {"start": 938.24, "end": 942.6800000000001, "text": " into a lookup table C but equivalently we can also think of this little piece"}, {"start": 942.68, "end": 948.68, "text": " here as a first layer of this bigger neural net. This layer here has neurons that"}, {"start": 948.68, "end": 952.7199999999999, "text": " have no nonlinearity there's no 10H there are just linear neurons and their"}, {"start": 952.7199999999999, "end": 958.9599999999999, "text": " wake matrix is C. And then we are encoding integers into one hot and feeding"}, {"start": 958.9599999999999, "end": 963.16, "text": " those into a neural net and this first layer basically embeds them. So those"}, {"start": 963.16, "end": 966.5999999999999, "text": " are two equivalent ways of doing the same thing. We're just going to index"}, {"start": 966.5999999999999, "end": 970.28, "text": " because it's much much faster and we're going to discard this interpretation of"}, {"start": 970.28, "end": 975.28, "text": " one-hot inputs into neural nets and we're just going to index integers and"}, {"start": 975.28, "end": 979.64, "text": " create and use embedding tables. Now embedding a single integer like five is"}, {"start": 979.64, "end": 985.16, "text": " easy enough. We can simply ask by torch to retrieve the fifth row of C or the"}, {"start": 985.16, "end": 991.28, "text": " row index five of C. But how do we simultaneously embed all of these 32 by"}, {"start": 991.28, "end": 997.04, "text": " three integers stored in array X? Wattly by torch indexing is fairly flexible and"}, {"start": 997.04, "end": 1003.8399999999999, "text": " quite powerful. So it doesn't just work to ask for a single element five like"}, {"start": 1003.8399999999999, "end": 1008.3199999999999, "text": " this. You can actually index using lists. So for example we can get the rows five"}, {"start": 1008.3199999999999, "end": 1014.0799999999999, "text": " six and seven and this will just work like this. We can index with a list. It"}, {"start": 1014.0799999999999, "end": 1017.9599999999999, "text": " doesn't just have to be a list it can also be a actually a tensor of integers."}, {"start": 1017.9599999999999, "end": 1023.8399999999999, "text": " And we can index with that. So this is a integer tensor five six seven and this"}, {"start": 1023.84, "end": 1029.4, "text": " will just work as well. In fact we can also for example repeat row seven and"}, {"start": 1029.4, "end": 1034.72, "text": " retrieve it multiple times and that same index will just get embedded multiple"}, {"start": 1034.72, "end": 1040.32, "text": " times here. So here we are indexing with a one-dimensional tensor of integers."}, {"start": 1040.32, "end": 1044.52, "text": " But it turns out that you can also index with multi-dimensional tensors of"}, {"start": 1044.52, "end": 1049.2, "text": " integers. Here we have a two-dimensional tensor of integers. So we can"}, {"start": 1049.2, "end": 1058.88, "text": " simply just do C at X and this just works. And the shape of this is 32 by 3 which"}, {"start": 1058.88, "end": 1061.8400000000001, "text": " is the original shape. And now for every one of those three two by three"}, {"start": 1061.8400000000001, "end": 1067.8, "text": " integers we've retrieved the embedding vector here. So basically we have that"}, {"start": 1067.8, "end": 1076.56, "text": " as an example the 13th or example index 13 the second dimension is the integer"}, {"start": 1076.56, "end": 1083.48, "text": " one as an example. 
And so here if we do C of X which gives us that array and"}, {"start": 1083.48, "end": 1090.9199999999998, "text": " then we index into 13 by 2 of that array then we get the embedding here. And you"}, {"start": 1090.9199999999998, "end": 1098.1599999999999, "text": " can verify that C at one which is the integer at that location is indeed equal"}, {"start": 1098.1599999999999, "end": 1103.6399999999999, "text": " to this. You see they're equal. So basically a long story short PyTorch"}, {"start": 1103.64, "end": 1109.8400000000001, "text": " indexing is awesome and to embed simultaneously all of the integers in X we"}, {"start": 1109.8400000000001, "end": 1115.5600000000002, "text": " can simply do C of X and that is our embedding and that just works. Now let's"}, {"start": 1115.5600000000002, "end": 1121.76, "text": " construct this layer here the hidden layer. So we have that W1 as I'll call it"}, {"start": 1121.76, "end": 1127.3600000000001, "text": " are these weights which we will initialize randomly. Now the number of inputs"}, {"start": 1127.3600000000001, "end": 1131.68, "text": " to this layer is going to be three times two right because we have two"}, {"start": 1131.68, "end": 1135.44, "text": " dimensional embeddings and we have three of them. So the number of inputs is six"}, {"start": 1135.44, "end": 1141.28, "text": " and the number of neurons in this layer is a variable up to us. Let's use 100"}, {"start": 1141.28, "end": 1146.72, "text": " neurons as an example and then biases will be also initialized randomly as an"}, {"start": 1146.72, "end": 1153.3600000000001, "text": " example and let's and we just need 100 of them. Now the problem with this is we"}, {"start": 1153.3600000000001, "end": 1157.44, "text": " can't simply normally we would take the input in this case that's embedding and"}, {"start": 1157.44, "end": 1161.96, "text": " we'd like to multiply it with these weights and then we would like to add the"}, {"start": 1161.96, "end": 1166.0, "text": " bias. This is roughly what we want to do but the problem here is that these"}, {"start": 1166.0, "end": 1170.64, "text": " embeddings are stacked up in the dimensions of this impotenture. So this will"}, {"start": 1170.64, "end": 1174.72, "text": " not work this matrix multiplication because this is a shape 32 by 3 by 2 and I"}, {"start": 1174.72, "end": 1179.6000000000001, "text": " can't multiply that by 6 by 100. So somehow we need to concatenate these"}, {"start": 1179.6000000000001, "end": 1183.28, "text": " inputs here together so that we can do something along these lines which"}, {"start": 1183.28, "end": 1189.08, "text": " currently does not work. So how do we transform this 32 by 3 by 2 into a 32 by"}, {"start": 1189.08, "end": 1194.76, "text": " 6 so that we can actually perform this multiplication over here. I'd like to"}, {"start": 1194.76, "end": 1199.36, "text": " show you that there are usually many ways of implementing what you'd like to"}, {"start": 1199.36, "end": 1204.28, "text": " do in Torch and some of them will be faster, better, shorter, etc. 
And that's"}, {"start": 1204.28, "end": 1208.48, "text": " because Torch is a very large library and it's got lots and lots of functions."}, {"start": 1208.48, "end": 1212.44, "text": " So if we just go to the documentation and click on Torch you'll see that my"}, {"start": 1212.44, "end": 1216.04, "text": " slider here is very tiny and that's because there are so many functions that"}, {"start": 1216.04, "end": 1220.4, "text": " you can call on these tensors to transform them, create them, multiply them,"}, {"start": 1220.4, "end": 1226.1200000000001, "text": " add them, perform all kinds of different operations on them. And so this is"}, {"start": 1226.1200000000001, "end": 1232.16, "text": " kind of like the space of possibility if you will. Now one of the things that you"}, {"start": 1232.16, "end": 1236.44, "text": " can do is we can control here, control off for concatenate and we see that"}, {"start": 1236.44, "end": 1241.6000000000001, "text": " there's a function torqued.cat, short for concatenate. And this concatenate is"}, {"start": 1241.6, "end": 1246.6799999999998, "text": " given sequence of tensors in a given dimension and these tensors must have the"}, {"start": 1246.6799999999998, "end": 1251.28, "text": " same shape, etc. So we can use the concatenate operation to in a naive way"}, {"start": 1251.28, "end": 1257.04, "text": " concatenate these three embeddings for each input. So in this case we have"}, {"start": 1257.04, "end": 1262.84, "text": " m of m of the shape. And really what we want to do is we want to retrieve these"}, {"start": 1262.84, "end": 1269.12, "text": " three parts and concatenate them. So we want to grab all the examples. We want to"}, {"start": 1269.12, "end": 1281.52, "text": " grab first the zero index and then all of this. So this plugs out the 32 by"}, {"start": 1281.52, "end": 1288.9199999999998, "text": " two embeddings of just the first word here. And so basically we want this guy. We"}, {"start": 1288.9199999999998, "end": 1293.3999999999999, "text": " want the first dimension and we want the second dimension. And these are the"}, {"start": 1293.3999999999999, "end": 1298.9599999999998, "text": " three pieces individually. And then we want to treat this as a sequence and we"}, {"start": 1298.96, "end": 1305.1200000000001, "text": " want to torqued.cat on that sequence. So this is the list torqued.cat takes a"}, {"start": 1305.1200000000001, "end": 1310.3600000000001, "text": " sequence of tensors. And then we have to tell it along which dimension to concatenate."}, {"start": 1310.3600000000001, "end": 1315.4, "text": " So in this case all these are 32 by two and we want to concatenate not across"}, {"start": 1315.4, "end": 1322.1200000000001, "text": " dimension zero but across dimension one. So passing in one gives us a result that"}, {"start": 1322.1200000000001, "end": 1326.88, "text": " the shape of this is 32 by six exactly as we'd like. So that basically took 32"}, {"start": 1326.88, "end": 1332.3600000000001, "text": " and squashed these back and concatenate them into 32 by six. Now this is kind"}, {"start": 1332.3600000000001, "end": 1336.48, "text": " of ugly because this code would not generalize if we want to later change the"}, {"start": 1336.48, "end": 1341.5600000000002, "text": " block size. Right now we have three inputs three words. But what if we had five"}, {"start": 1341.5600000000002, "end": 1346.2, "text": " then here we would have to change the code because I'm indexing directly. 
Well"}, {"start": 1346.2, "end": 1350.0800000000002, "text": " torqued comes to rescue again because that turns out to be a function called"}, {"start": 1350.08, "end": 1357.36, "text": " unbind and it removes a tensor dimension. So removes a tensor dimension returns a"}, {"start": 1357.36, "end": 1362.9199999999998, "text": " tuple of all slices along the given dimension without it. So this is exactly what"}, {"start": 1362.9199999999998, "end": 1372.48, "text": " we need. And basically when we call tors.unbind tors.unbind of m and passing"}, {"start": 1372.48, "end": 1381.28, "text": " dimension one index one. This gives us a list of a list of tensors exactly"}, {"start": 1381.28, "end": 1388.6, "text": " equivalent to this. So running this gives us a line three and it's exactly this"}, {"start": 1388.6, "end": 1394.2, "text": " list. And so we can call torched out cat on it and along the first dimension."}, {"start": 1394.2, "end": 1401.28, "text": " And this works and this shape is the same. But now this is it doesn't matter if"}, {"start": 1401.28, "end": 1405.84, "text": " we have block size three or five or ten this will just work. So this is one way"}, {"start": 1405.84, "end": 1409.76, "text": " to do it. But it turns out that in this case there's actually a significantly"}, {"start": 1409.76, "end": 1413.68, "text": " better and more efficient way. And this gives me an opportunity to hint at some"}, {"start": 1413.68, "end": 1420.92, "text": " of the internals of torched out tensor. So let's create an array here of elements"}, {"start": 1420.92, "end": 1426.6, "text": " from zero to 17. And the shape of this is just 18. It's a single picture of 18"}, {"start": 1426.6, "end": 1432.4399999999998, "text": " numbers. It turns out that we can very quickly we represent this as different"}, {"start": 1432.4399999999998, "end": 1438.3999999999999, "text": " sized and dimensional tensors. We do this by calling a view. And we can say that"}, {"start": 1438.3999999999999, "end": 1444.48, "text": " actually this is not a single vector of 18. This is a two by nine tensor. Or"}, {"start": 1444.48, "end": 1450.04, "text": " alternatively this is a nine by two tensor. Or this is actually a three by three"}, {"start": 1450.04, "end": 1455.52, "text": " by two tensor. As long as the total number of elements here multiply to be the"}, {"start": 1455.52, "end": 1462.36, "text": " same this will just work. And in PyTorch this operation calling that view is"}, {"start": 1462.36, "end": 1467.36, "text": " extremely efficient. And the reason for that is that in each tensor there's"}, {"start": 1467.36, "end": 1472.68, "text": " something called the underlying storage. And the storage is just the numbers"}, {"start": 1472.68, "end": 1477.28, "text": " always as a one dimensional vector. And this is how this tensor has represented"}, {"start": 1477.28, "end": 1482.8, "text": " in the computer memory. It's always a one dimensional vector. But when we call"}, {"start": 1482.8, "end": 1488.28, "text": " that view we are manipulating some of attributes of that tensor that dictate"}, {"start": 1488.28, "end": 1492.48, "text": " how this one dimensional sequence is interpreted to be an end-dimensional"}, {"start": 1492.48, "end": 1497.0, "text": " tensor. And so what's happening here is that no memory is being changed, copied,"}, {"start": 1497.0, "end": 1502.24, "text": " moved, or created when we call that view. The storage is identical. 
But when you"}, {"start": 1502.24, "end": 1508.04, "text": " call that view some of the internal attributes of the view of this tensor are"}, {"start": 1508.04, "end": 1511.0, "text": " being manipulated and changed. In particular that's something there's something"}, {"start": 1511.0, "end": 1515.52, "text": " called storage offset, strides, and shapes. And those are manipulated so that"}, {"start": 1515.52, "end": 1519.2, "text": " this one dimensional sequence of bytes is seen as different and dimensional"}, {"start": 1519.2, "end": 1525.28, "text": " arrays. There's a blog post here from Eric called PyTorch internals where he"}, {"start": 1525.28, "end": 1529.24, "text": " goes into some of this with respect to tensor and how the view of a tensor is"}, {"start": 1529.24, "end": 1534.08, "text": " represented. And this is really just like a logical construct of representing"}, {"start": 1534.08, "end": 1539.28, "text": " the physical memory. And so this is a pretty good blog post that you can go into."}, {"start": 1539.28, "end": 1542.92, "text": " I might also create an entire video on the internals of Torch tensor and how"}, {"start": 1542.92, "end": 1547.0, "text": " this works. For here we just note that this is an extremely efficient"}, {"start": 1547.0, "end": 1554.28, "text": " operation. And if I delete this and come back to our end we see that the shape of"}, {"start": 1554.28, "end": 1559.8, "text": " our end is 3 2 by 3 by 2. But we can simply ask for PyTorch to view this"}, {"start": 1559.8, "end": 1566.84, "text": " instead as a 3 2 by 6. And the way that gets flattened into a 3 2 by 6 array"}, {"start": 1566.84, "end": 1574.28, "text": " just happens that these two get stacked up in a single row. And so that's"}, {"start": 1574.28, "end": 1578.24, "text": " basically the concatenation operation that we're after. And you can verify that"}, {"start": 1578.24, "end": 1582.9199999999998, "text": " this actually gives the exact same result as what we had before. So this is an"}, {"start": 1582.9199999999998, "end": 1586.24, "text": " element y equals and you can see that all the elements of these two tensors are"}, {"start": 1586.24, "end": 1592.36, "text": " the same. And so we get the exact same result. So long story short we can"}, {"start": 1592.36, "end": 1600.0, "text": " actually just come here. And if we just view this as a 3 2 by 6 instead then"}, {"start": 1600.0, "end": 1604.6, "text": " this multiplication will work and give us the hidden states that were after. So"}, {"start": 1604.6, "end": 1611.1999999999998, "text": " if this is h then h dot shape is now the 100 dimensional activations for"}, {"start": 1611.1999999999998, "end": 1616.08, "text": " every one of our 32 examples. And this gives the desired result. Let me do two"}, {"start": 1616.08, "end": 1620.84, "text": " things here. Number one let's not use 32. We can for example do something like"}, {"start": 1620.84, "end": 1628.04, "text": " m dot shape at zero so that we don't hard code these numbers and this would"}, {"start": 1628.04, "end": 1632.9599999999998, "text": " work for any size of this m or alternatively we can also do negative one. When we"}, {"start": 1632.9599999999998, "end": 1637.52, "text": " do negative one, PytroTroll and Fur what this should be. Because the number of"}, {"start": 1637.52, "end": 1641.28, "text": " elements must be the same and we're saying that this is 6. 
PytroTroll derived"}, {"start": 1641.28, "end": 1647.28, "text": " that this must be 32 or whatever else it is if m is of different size. The other"}, {"start": 1647.28, "end": 1653.84, "text": " thing is here one more thing I'd like to point out is here when we do the"}, {"start": 1653.84, "end": 1659.76, "text": " concatenation this actually is much less efficient because this concatenation"}, {"start": 1659.76, "end": 1663.36, "text": " would create a whole new tensor with a whole new storage so new memory is being"}, {"start": 1663.36, "end": 1667.32, "text": " created because there's no way to concatenate tensors just by manipulating the"}, {"start": 1667.32, "end": 1672.8799999999999, "text": " view attributes. So this is inefficient and creates all kinds of new memory. So"}, {"start": 1672.88, "end": 1679.4, "text": " let me repeat this now. We don't need this and here to calculate H we want to"}, {"start": 1679.4, "end": 1688.0800000000002, "text": " also dot 10 H of this ticket our. Oops to get our H. So these are now numbers"}, {"start": 1688.0800000000002, "end": 1692.5600000000002, "text": " between negative one and one because of the 10 H and we have that the shape is"}, {"start": 1692.5600000000002, "end": 1697.96, "text": " 32 by 100 and that is basically this hidden layer of activations here for"}, {"start": 1697.96, "end": 1702.48, "text": " every one of our 32 examples. Now there's one more thing I've lost over that we"}, {"start": 1702.48, "end": 1706.52, "text": " have to be very careful with and that this and that's this plus here. In"}, {"start": 1706.52, "end": 1711.04, "text": " particular we want to make sure that the broadcasting will do what we like. The"}, {"start": 1711.04, "end": 1717.0, "text": " shape of this is 32 by 100 and the one's shape is 100. So we see that the"}, {"start": 1717.0, "end": 1721.3600000000001, "text": " addition here will broadcast these two and in particular we have 32 by 100"}, {"start": 1721.3600000000001, "end": 1727.96, "text": " broadcasting to 100. So broadcasting will align on the right create a fake"}, {"start": 1727.96, "end": 1732.52, "text": " dimension here. So this will become a one by 100 row vector and then it will"}, {"start": 1732.52, "end": 1737.96, "text": " copy vertically for every one of these rows of 32 and do an element wise"}, {"start": 1737.96, "end": 1741.76, "text": " addition. So in this case the correct thing will be happening because the"}, {"start": 1741.76, "end": 1748.56, "text": " same bias vector will be added to all the rows of this matrix. So that is"}, {"start": 1748.56, "end": 1752.32, "text": " correct. That's what we'd like and it's always good practice just make sure"}, {"start": 1752.32, "end": 1756.0, "text": " so that you don't treat yourself in the foot. And finally let's create the"}, {"start": 1756.0, "end": 1765.4, "text": " final layer here. So let's create W2 and V2. The input now is 100 and the"}, {"start": 1765.4, "end": 1769.76, "text": " output number of neurons will be for us 27 because we have 27 possible"}, {"start": 1769.76, "end": 1775.68, "text": " characters that come next. So the biases will be 27 as well. So therefore the"}, {"start": 1775.68, "end": 1782.08, "text": " low jits which are the outputs of this neural net are going to be H"}, {"start": 1782.08, "end": 1790.96, "text": " multiplied by W2 plus B2. Loads is that shape is 32 by 27 and the"}, {"start": 1790.96, "end": 1795.52, "text": " low jits look good. 
Now exactly as we saw in the previous video we want to"}, {"start": 1795.52, "end": 1799.12, "text": " take these low jits and we want to first experiment shape them to get our fake"}, {"start": 1799.12, "end": 1804.48, "text": " counts and then we want to normalize them into a probability. So prob is counts"}, {"start": 1804.48, "end": 1811.36, "text": " divide and now counts that sum along the first dimension and keep them"}, {"start": 1811.36, "end": 1818.08, "text": " as true exactly as in the previous video. And so prob that shape now is the"}, {"start": 1818.08, "end": 1825.6399999999999, "text": " R2 by 27 and you'll see that every row of prob sums to one so it's normalized."}, {"start": 1825.6399999999999, "end": 1830.4399999999998, "text": " So that gives us the probabilities. Now of course we have the actual letter that"}, {"start": 1830.4399999999998, "end": 1836.4799999999998, "text": " comes next and that comes from this array why which we created during the"}, {"start": 1836.4799999999998, "end": 1840.3999999999999, "text": " data separation. So why is this last piece here which is the"}, {"start": 1840.4, "end": 1844.16, "text": " unethically of the next character in a sequence that we'd like to now predict."}, {"start": 1844.16, "end": 1848.16, "text": " So what we'd like to do now is just as in the previous video we'd like to"}, {"start": 1848.16, "end": 1853.1200000000001, "text": " index into the rows of prob and each row we'd like to pluck out the probability"}, {"start": 1853.1200000000001, "end": 1858.76, "text": " assigned to the correct character as given here. So first we have torshtot"}, {"start": 1858.76, "end": 1865.92, "text": " range of 32 which is kind of like an iterator over numbers from 0 to 31 and"}, {"start": 1865.92, "end": 1870.8400000000001, "text": " then we can index into prob in the following way. Prob in torshtot"}, {"start": 1870.8400000000001, "end": 1876.0, "text": " range of 32 which it erased the roads and then each row we'd like to grab this"}, {"start": 1876.0, "end": 1882.0800000000002, "text": " column as given by why. So this gives the current probabilities as assigned by"}, {"start": 1882.0800000000002, "end": 1885.92, "text": " this neural network with this setting of its weights to the correct"}, {"start": 1885.92, "end": 1890.28, "text": " character in the sequence. And you can see here that this looks okay for some"}, {"start": 1890.28, "end": 1894.0800000000002, "text": " of these characters like this is basically point two but it doesn't look very"}, {"start": 1894.08, "end": 1900.4399999999998, "text": " good at all for many other characters. Like this is 0.0701 probability and so the"}, {"start": 1900.4399999999998, "end": 1903.6799999999998, "text": " network thinks that some of these are extremely unlikely but of course we"}, {"start": 1903.6799999999998, "end": 1909.1999999999998, "text": " haven't trained the neural network yet. So this will improve and ideally all of"}, {"start": 1909.1999999999998, "end": 1912.52, "text": " these numbers here of course are one because then we are correctly predicting"}, {"start": 1912.52, "end": 1916.24, "text": " the next character. Now just as in the previous video we want to take these"}, {"start": 1916.24, "end": 1920.28, "text": " probabilities. 
We want to look at the lock probability and then we want to look"}, {"start": 1920.28, "end": 1925.08, "text": " at the average rock probability and the negative of it to create the negative"}, {"start": 1925.08, "end": 1931.8, "text": " log likelihood loss. So the loss here is 17 and this is the loss that we'd like"}, {"start": 1931.8, "end": 1936.6, "text": " to minimize to get the network to predict the correct character in the sequence."}, {"start": 1936.6, "end": 1941.0, "text": " Okay so I rewrote everything here and made it a bit more respectable. So here's"}, {"start": 1941.0, "end": 1945.76, "text": " our data set. Here's all the parameters that we defined. I'm now using a"}, {"start": 1945.76, "end": 1949.8799999999999, "text": " generator to make it reproducible. I clustered all the primers into a single"}, {"start": 1949.88, "end": 1953.96, "text": " list of primers so that for example it's easy to count them and see that in"}, {"start": 1953.96, "end": 1958.5600000000002, "text": " total we currently have about 3,400 primers and this is the forward pass as we"}, {"start": 1958.5600000000002, "end": 1963.64, "text": " developed it and we arrive at a single number here the loss that is currently"}, {"start": 1963.64, "end": 1967.7600000000002, "text": " expressing how well this neural network works with the current setting of"}, {"start": 1967.7600000000002, "end": 1972.0400000000002, "text": " primers. Now I would like to make it even more respectable. So in particular"}, {"start": 1972.0400000000002, "end": 1977.8000000000002, "text": " see these lines here where we take the logits and we calculate a loss. We're"}, {"start": 1977.8, "end": 1983.1599999999999, "text": " not actually reinventing the wheel here. This is just classification and many"}, {"start": 1983.1599999999999, "end": 1987.1599999999999, "text": " people use classification and that's why there is a functional dot cross entropy"}, {"start": 1987.1599999999999, "end": 1991.48, "text": " function in PyTorch to calculate this much more efficiently. So we could just"}, {"start": 1991.48, "end": 1995.48, "text": " simply call f dot cross entropy and we can pass in the logits and we can pass in"}, {"start": 1995.48, "end": 2003.08, "text": " the array of targets. Why? And this calculates the exact same loss. So in fact we"}, {"start": 2003.08, "end": 2008.36, "text": " can simply put this here and erase these three lines and we're going to get the"}, {"start": 2008.36, "end": 2012.32, "text": " exact same result. Now there are actually many good reasons to prefer f dot"}, {"start": 2012.32, "end": 2016.6, "text": " cross entropy over rolling your own implementation like this. I did this for"}, {"start": 2016.6, "end": 2021.1999999999998, "text": " educational reasons but you'd never use this in practice. Why is that? Number one"}, {"start": 2021.1999999999998, "end": 2025.28, "text": " when you use f dot cross entropy PyTorch will not actually create all these"}, {"start": 2025.28, "end": 2029.6399999999999, "text": " intermediate tensors because these are all new tensors in memory and all this is"}, {"start": 2029.64, "end": 2034.2800000000002, "text": " fairly inefficient to run like this. 
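To see that the hand-rolled negative log likelihood and F.cross_entropy really agree, here is a side-by-side sketch with stand-in logits and targets.

```python
import torch
import torch.nn.functional as F

logits = torch.randn(32, 27)            # stand-in logits
Y = torch.randint(0, 27, (32,))         # stand-in targets

# the three hand-rolled lines from the lecture
counts = logits.exp()
prob = counts / counts.sum(1, keepdim=True)
loss_manual = -prob[torch.arange(32), Y].log().mean()

# the single built-in call that replaces them
loss = F.cross_entropy(logits, Y)
print(loss_manual.item(), loss.item())  # agree up to floating point error
```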
Instead PyTorch will cluster up all these"}, {"start": 2034.2800000000002, "end": 2040.0800000000002, "text": " operations and very often create fused kernels that very efficiently evaluate"}, {"start": 2040.0800000000002, "end": 2044.24, "text": " these expressions that are sort of like clustered mathematical operations."}, {"start": 2044.24, "end": 2048.48, "text": " Number two the backward pass can be made much more efficient and not just because"}, {"start": 2048.48, "end": 2053.4, "text": " it's a fused kernel but also analytically and mathematically it's much it's"}, {"start": 2053.4, "end": 2058.36, "text": " often a very much simpler backward pass to implement. We actually sell this"}, {"start": 2058.36, "end": 2063.08, "text": " with micrograd. You see here when we implemented 10h the forward pass of this"}, {"start": 2063.08, "end": 2066.8, "text": " operation to calculate the 10h was actually fairly complicated mathematical"}, {"start": 2066.8, "end": 2071.56, "text": " expression but because it's a clustered mathematical expression when we did"}, {"start": 2071.56, "end": 2075.6400000000003, "text": " the backward pass we didn't individually backward through the x and the two"}, {"start": 2075.6400000000003, "end": 2079.76, "text": " times and the minus one and division etc. We just said it's 1 minus t squared"}, {"start": 2079.76, "end": 2084.52, "text": " and that's a much simpler mathematical expression and we were able to do this"}, {"start": 2084.52, "end": 2087.56, "text": " because we're able to reuse calculations and because we are able to"}, {"start": 2087.56, "end": 2091.32, "text": " mathematically and analytically derive the derivative and often that"}, {"start": 2091.32, "end": 2095.48, "text": " expression simplifies mathematically and so there's much less to implement."}, {"start": 2095.48, "end": 2099.96, "text": " So not only can it be made more efficient because it runs in a fused kernel"}, {"start": 2099.96, "end": 2105.32, "text": " but also because the expressions can take a much simpler form mathematically."}, {"start": 2105.32, "end": 2110.7999999999997, "text": " So that's number one. Number two under the hood f dot cross entropy can also"}, {"start": 2110.7999999999997, "end": 2116.2, "text": " be significantly more numerically well behaved. Let me show you an example of"}, {"start": 2116.2, "end": 2122.16, "text": " how this works. Suppose we have a logit of negative two three negative three"}, {"start": 2122.16, "end": 2126.68, "text": " zero and five and then we are taking the exponent of it and normalizing it to"}, {"start": 2126.68, "end": 2131.3999999999996, "text": " sum to one. So when logits take on this values everything is well and good and"}, {"start": 2131.3999999999996, "end": 2135.3999999999996, "text": " we get a nice probability distribution. Now consider what happens when some of"}, {"start": 2135.3999999999996, "end": 2138.6, "text": " these logits take on more extreme values and that can happen during"}, {"start": 2138.6, "end": 2142.8799999999997, "text": " optimization of neural network. 
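As a small reminder of the point about simpler analytic backward passes, a sketch comparing PyTorch's autograd gradient of tanh with the 1 - t² expression used in micrograd:

```python
import torch

x = torch.randn(5, requires_grad=True)
t = torch.tanh(x)
t.sum().backward()                       # autograd gradient of tanh at x
manual = 1 - t**2                        # the simplified analytic derivative: d/dx tanh(x) = 1 - tanh(x)^2
print(torch.allclose(x.grad, manual))    # True
```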
Suppose that some of these numbers grow very"}, {"start": 2142.88, "end": 2147.6800000000003, "text": " negative like say negative 100 then actually everything will come out fine."}, {"start": 2147.6800000000003, "end": 2152.8, "text": " We still get a probabilities that you know are well behaved and they sum to one"}, {"start": 2152.8, "end": 2157.52, "text": " and everything is great but because of the way the exports if you have very"}, {"start": 2157.52, "end": 2161.92, "text": " positive logits like say positive 100 in here you actually start to run into"}, {"start": 2161.92, "end": 2166.32, "text": " trouble and we get not a number here and the reason for that is that these"}, {"start": 2166.32, "end": 2174.1200000000003, "text": " counts have an inf here. So if you pass in a very negative number two pecs you just"}, {"start": 2174.1200000000003, "end": 2178.48, "text": " get a very negative, sorry not negative but very small number very near zero"}, {"start": 2178.48, "end": 2182.8, "text": " and that's fine. But if you pass in a very positive number suddenly we run out"}, {"start": 2182.8, "end": 2188.76, "text": " of range in our floating point number that represents these counts. So basically"}, {"start": 2188.76, "end": 2192.44, "text": " we're taking E and we're raising it to the power of 100 and that gives us"}, {"start": 2192.44, "end": 2196.56, "text": " inf because we run out of dynamic range on this floating point number that is"}, {"start": 2196.56, "end": 2204.48, "text": " count. And so we cannot pass very large logits through this expression. Now let me"}, {"start": 2204.48, "end": 2209.12, "text": " reset these numbers to something reasonable. The way PyTorch solved this is"}, {"start": 2209.12, "end": 2214.36, "text": " that you see how we have a really well behaved result here. It turns out that"}, {"start": 2214.36, "end": 2219.32, "text": " because of the normalization here you can actually offset logits by any arbitrary"}, {"start": 2219.32, "end": 2223.84, "text": " constant value that you want. So if I add one here you actually get the exact"}, {"start": 2223.84, "end": 2230.2400000000002, "text": " same result or if I add two or if I subtract three any offset will produce the"}, {"start": 2230.2400000000002, "end": 2235.6400000000003, "text": " exact same probabilities. So because negative numbers are okay but positive"}, {"start": 2235.6400000000003, "end": 2240.2000000000003, "text": " numbers can actually overflow this exp. What PyTorch does is it internally"}, {"start": 2240.2000000000003, "end": 2244.88, "text": " calculates the maximum value that occurs in the logits and it subtracts it. So in"}, {"start": 2244.88, "end": 2249.1600000000003, "text": " this case it would subtract five. And so therefore the greatest number in logits"}, {"start": 2249.1600000000003, "end": 2252.56, "text": " will become zero and all the other numbers will become some negative numbers."}, {"start": 2252.56, "end": 2257.44, "text": " And then the result of this is always well behaved. So even if we have 100"}, {"start": 2257.44, "end": 2263.08, "text": " here previously not good but because PyTorch will subtract 100 this will work."}, {"start": 2263.08, "end": 2269.12, "text": " And so there's many good reasons to call cross entropy. Number one the"}, {"start": 2269.12, "end": 2272.2000000000003, "text": " forward pass can be much more efficient. 
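A short sketch of the overflow problem and the max-subtraction trick described here; the example logits mirror the values discussed, with 100 as the extreme one.

```python
import torch

logits = torch.tensor([-2.0, -3.0, 0.0, 100.0])    # one very positive logit

naive = logits.exp() / logits.exp().sum()          # exp(100) overflows float32 -> inf, and inf/inf -> nan
stable = (logits - logits.max()).exp()             # offsetting by the max changes nothing mathematically
stable = stable / stable.sum()
print(naive)    # contains nan
print(stable)   # well behaved probabilities
```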
The backward pass can be much more"}, {"start": 2272.2, "end": 2277.0, "text": " efficient and also thinks it can be much more numerically well behaved. Okay so"}, {"start": 2277.0, "end": 2283.08, "text": " let's now set up the training of this neural net. We have the forward pass. We"}, {"start": 2283.08, "end": 2287.52, "text": " don't need these because that we have that loss is equal to half that cross"}, {"start": 2287.52, "end": 2292.4399999999996, "text": " entropy. That's the forward pass. Then we need the backward pass. First we want"}, {"start": 2292.4399999999996, "end": 2297.16, "text": " to set the gradients to be zero. So for P in parameters we want to make sure"}, {"start": 2297.16, "end": 2300.6, "text": " that P dot grad is none which is the same as setting it to zero in PyTorch. And"}, {"start": 2300.6, "end": 2305.16, "text": " then lost a backward to populate those gradients. Once we have the"}, {"start": 2305.16, "end": 2309.12, "text": " gradients we can do the parameter update. So for P in parameters we want to take"}, {"start": 2309.12, "end": 2317.64, "text": " all the dear and we want to nudge it learning rate times P dot grad. And then we"}, {"start": 2317.64, "end": 2329.4, "text": " want to repeat this a few times. And let's print the loss here as well. Now this"}, {"start": 2329.4, "end": 2332.88, "text": " once the vice and it will create an error because we also have to go for P in"}, {"start": 2332.88, "end": 2337.56, "text": " parameters. And we have to make sure that P dot requires grad is set to"}, {"start": 2337.56, "end": 2345.48, "text": " true in PyTorch. And this should just work. Okay so we started off with loss of"}, {"start": 2345.48, "end": 2351.08, "text": " 17 and we're decreasing it. Lots run longer. And you see how the loss"}, {"start": 2351.08, "end": 2360.52, "text": " decreases a lot here. So if we just run for a thousand times we get a very very"}, {"start": 2360.52, "end": 2364.2799999999997, "text": " low loss. And that means that we're making very good predictions. Now the reason"}, {"start": 2364.2799999999997, "end": 2370.12, "text": " that this is so straightforward right now is because we're only overfitting"}, {"start": 2370.12, "end": 2377.2799999999997, "text": " 32 examples. So we only have 32 examples of the first five words. And therefore"}, {"start": 2377.28, "end": 2382.0400000000004, "text": " it's very easy to make this neural net fit only these 32 examples because we"}, {"start": 2382.0400000000004, "end": 2387.0800000000004, "text": " have 3,400 parameters and only 32 examples. So we're doing what's called"}, {"start": 2387.0800000000004, "end": 2392.52, "text": " overfitting a single batch of the data and getting a very low loss and good"}, {"start": 2392.52, "end": 2396.32, "text": " predictions. But that's just because we have so many parameters for so few"}, {"start": 2396.32, "end": 2401.32, "text": " examples. So it's easy to make this be very low. Now we're not able to achieve"}, {"start": 2401.32, "end": 2406.28, "text": " exactly zero. And the reason for that is we can for example look at low juts which"}, {"start": 2406.28, "end": 2413.44, "text": " are being predicted. And we can look at the max along the first dimension and"}, {"start": 2413.44, "end": 2419.48, "text": " in PyTorch max reports both the actual values that take on the maximum number"}, {"start": 2419.48, "end": 2424.2400000000002, "text": " but also the indices of ease. 
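Putting the pieces together, a sketch of the overfit-one-batch training loop being set up here, with stand-in data and an assumed seed; the learning rate of 0.1 matches the value used later in the walkthrough.

```python
import torch
import torch.nn.functional as F

g = torch.Generator().manual_seed(2147483647)   # assumed seed
X = torch.randint(0, 27, (32, 3))               # stand-in batch of 32 examples
Y = torch.randint(0, 27, (32,))                 # stand-in targets

C  = torch.randn((27, 2), generator=g)
W1 = torch.randn((6, 100), generator=g)
b1 = torch.randn(100, generator=g)
W2 = torch.randn((100, 27), generator=g)
b2 = torch.randn(27, generator=g)
parameters = [C, W1, b1, W2, b2]
for p in parameters:
    p.requires_grad = True                      # needed so loss.backward() populates p.grad

for _ in range(1000):
    # forward pass
    emb = C[X]                                  # (32, 3, 2)
    h = torch.tanh(emb.view(-1, 6) @ W1 + b1)   # (32, 100)
    logits = h @ W2 + b2                        # (32, 27)
    loss = F.cross_entropy(logits, Y)
    # backward pass
    for p in parameters:
        p.grad = None                           # reset the gradients
    loss.backward()
    # update
    for p in parameters:
        p.data += -0.1 * p.grad                 # nudge each parameter against the gradient

print(loss.item())                              # much lower than at initialization, since we overfit 32 examples
```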
And you'll see that the indices are very close to"}, {"start": 2424.2400000000002, "end": 2430.52, "text": " the labels. But in some cases they differ. For example in this very first example"}, {"start": 2430.52, "end": 2436.24, "text": " the predicted index is 19 but the label is 5. And we're not able to make"}, {"start": 2436.24, "end": 2441.92, "text": " loss be zero. And fundamentally that's because here the very first or the"}, {"start": 2441.92, "end": 2446.08, "text": " 0th index is the example where dot dot dot is supposed to predict E. But you"}, {"start": 2446.08, "end": 2450.2, "text": " see how dot dot dot is also supposed to predict and O. And dot dot dot is also"}, {"start": 2450.2, "end": 2456.32, "text": " supposed to predict in the eye and then S as well. And so basically E O A or S are"}, {"start": 2456.32, "end": 2460.76, "text": " all possible outcomes in a training set for the exact same input. So we're not"}, {"start": 2460.76, "end": 2466.84, "text": " able to completely overfit and and make the last big exactly zero. But we're"}, {"start": 2466.84, "end": 2471.8, "text": " getting very close in the cases where there's a unique input for a unique"}, {"start": 2471.8, "end": 2475.92, "text": " output. In those cases we do what's called overfit and we basically get the"}, {"start": 2475.92, "end": 2482.0800000000004, "text": " exact same and the exact correct result. So now all we have to do is we just need"}, {"start": 2482.0800000000004, "end": 2484.6400000000003, "text": " to make sure that we read in the full data set and optimize the neural"}, {"start": 2484.64, "end": 2489.92, "text": " line. Okay so let's swing back up where we created the data set and we see that"}, {"start": 2489.92, "end": 2494.56, "text": " here we only use the first five words. So let me now erase this and let me erase"}, {"start": 2494.56, "end": 2498.7599999999998, "text": " the print statements otherwise be be printing way too much. And so when we"}, {"start": 2498.7599999999998, "end": 2503.8399999999997, "text": " process the full data set of all the words we now had 228,000 examples instead"}, {"start": 2503.8399999999997, "end": 2509.6, "text": " of just 32. So let's now scroll back down to this as much larger. We"}, {"start": 2509.6, "end": 2514.08, "text": " initialize the weights the same number of parameters they all require gradients. And"}, {"start": 2514.08, "end": 2519.56, "text": " then let's push this print our lost item to be here and let's just see how the"}, {"start": 2519.56, "end": 2526.16, "text": " optimization goes if we run this. Okay so we started with a fairly high loss"}, {"start": 2526.16, "end": 2533.12, "text": " and then as we're optimizing the loss is coming down. But you'll notice that it"}, {"start": 2533.12, "end": 2536.8399999999997, "text": " takes quite a bit of time for every single iteration. So let's actually"}, {"start": 2536.84, "end": 2539.8, "text": " address that because we're doing way too much work forwarding and"}, {"start": 2539.8, "end": 2544.6800000000003, "text": " backwarding 220,000 examples. In practice what people usually do is they"}, {"start": 2544.6800000000003, "end": 2550.36, "text": " perform forward and backward pass an update on many batches of the data. 
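A tiny sketch of the max-along-a-dimension call mentioned here, which returns both the values and the indices; comparing those indices against the labels Y is how the near-perfect (but not perfect) predictions are inspected.

```python
import torch

logits = torch.randn(32, 27)         # stand-in logits
values, indices = logits.max(1)      # per row: the largest logit and the index (predicted character) it occurs at
print(values.shape, indices.shape)   # torch.Size([32]) torch.Size([32])
```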
So what"}, {"start": 2550.36, "end": 2554.2400000000002, "text": " we will want to do is we want to randomly select some portion of the data set and"}, {"start": 2554.2400000000002, "end": 2557.96, "text": " that's a mini batch and then only forward backward and update on that little"}, {"start": 2557.96, "end": 2563.36, "text": " mini batch. And then we erase on those mini batches. So in PyTorch we can for"}, {"start": 2563.36, "end": 2568.08, "text": " example use tors.randent. We can generate numbers between 0 and 5 and make"}, {"start": 2568.08, "end": 2578.6, "text": " 32 of them. I believe the size has to be a tuple in PyTorch. So we can have a"}, {"start": 2578.6, "end": 2584.96, "text": " tuple 32 of numbers between 0 and 5. But actually we want x. shape of 0 here. And"}, {"start": 2584.96, "end": 2591.48, "text": " so this creates integers that index into our data set and there's 32. So if"}, {"start": 2591.48, "end": 2597.8, "text": " our mini batch size is 32 then we can come here and we can first do mini batch"}, {"start": 2597.8, "end": 2606.52, "text": " construct. So integers that we want to optimize in this single iteration are in"}, {"start": 2606.52, "end": 2614.6, "text": " the Ix and then we want to index into x with Ix to only grab those rows. So we're"}, {"start": 2614.6, "end": 2619.92, "text": " only getting 32 rows of x and therefore embeddings will again be 32 by 3 by 2."}, {"start": 2619.92, "end": 2626.28, "text": " Not 200,000 by 3 by 2. And then this Ix has to be used not just to index into"}, {"start": 2626.28, "end": 2633.2000000000003, "text": " x but also to index into y. And now this should be mini batches and this should"}, {"start": 2633.2000000000003, "end": 2639.6800000000003, "text": " be much much faster. So okay so it's instant almost. So this way we can run many"}, {"start": 2639.6800000000003, "end": 2645.96, "text": " many examples, nearly instantly and decrease the loss much much faster. Now"}, {"start": 2645.96, "end": 2649.88, "text": " because we're only doing with many batches the quality of our gradient is lower."}, {"start": 2649.88, "end": 2655.04, "text": " So the direction is not as reliable. It's not the actual gradient direction. But"}, {"start": 2655.04, "end": 2659.28, "text": " the gradient direction is good enough even when it's estimating on only 32"}, {"start": 2659.28, "end": 2664.68, "text": " examples that it is useful. And so it's much better to have an approximate"}, {"start": 2664.68, "end": 2668.92, "text": " gradient and just make more steps than it is to evaluate the exact gradient and"}, {"start": 2668.92, "end": 2675.08, "text": " take fewer steps. So that's why in practice this works quite well. So let's now"}, {"start": 2675.08, "end": 2683.48, "text": " continue the optimization. Let me take out this lost item from here and place it"}, {"start": 2683.48, "end": 2690.92, "text": " over here at the end. Okay so we're hovering around 2.5 or so. However this is"}, {"start": 2690.92, "end": 2696.7999999999997, "text": " only the loss for that mini batch. So let's actually evaluate the loss here for"}, {"start": 2696.7999999999997, "end": 2702.7599999999998, "text": " all of x and for all of y. Just so we have a full sense of exactly how well the"}, {"start": 2702.76, "end": 2707.7200000000003, "text": " model is doing right now. So right now we're at about 2.7 on the entire"}, {"start": 2707.7200000000003, "end": 2716.0, "text": " training set. So let's run the optimization for a while. 
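A minimal sketch of the minibatch construction with torch.randint; the dataset here is a random stand-in of roughly the size mentioned (about 228,000 examples).

```python
import torch

X = torch.randint(0, 27, (228000, 3))      # stand-in "full" dataset
Y = torch.randint(0, 27, (228000,))

ix = torch.randint(0, X.shape[0], (32,))   # 32 random row indices; the size argument must be a tuple
Xb, Yb = X[ix], Y[ix]                      # the minibatch actually used in one forward/backward/update
print(Xb.shape, Yb.shape)                  # torch.Size([32, 3]) torch.Size([32])
```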
Okay we're at 2.6, 2.5,"}, {"start": 2716.0, "end": 2726.32, "text": " 7, 2.5, 3. Okay so one issue of course is we don't know if we're stepping too"}, {"start": 2726.32, "end": 2732.84, "text": " slow or too fast. So this point one I just guessed it. So one question is how do"}, {"start": 2732.84, "end": 2737.56, "text": " you determine this learning rate? And how do we gain confidence that we're"}, {"start": 2737.56, "end": 2742.0800000000004, "text": " stepping in the right sort of speed? So I'll show you one way to determine a"}, {"start": 2742.0800000000004, "end": 2748.0800000000004, "text": " reasonable learning rate. It works as follows. Let's reset our parameters to the"}, {"start": 2748.08, "end": 2757.08, "text": " initial settings. And now let's print an every step. But let's only do 10 steps"}, {"start": 2757.08, "end": 2763.08, "text": " or so or maybe maybe 100 steps. We want to find like a very reasonable set"}, {"start": 2763.08, "end": 2770.4, "text": " the search range if you will. So for example this is like very low. Then we see"}, {"start": 2770.4, "end": 2775.0, "text": " that the loss is barely decreasing. So that's not that's like too low basically."}, {"start": 2775.0, "end": 2780.92, "text": " So let's try this one. Okay so we're decreasing the loss but like not very"}, {"start": 2780.92, "end": 2786.52, "text": " quickly. So that's a pretty good low range. Now let's reset it again. And now let's"}, {"start": 2786.52, "end": 2790.76, "text": " try to find the place at which the loss kind of explodes. So maybe at negative"}, {"start": 2790.76, "end": 2797.0, "text": " one. Okay we see that we're minimizing the loss but you see how it's kind of"}, {"start": 2797.0, "end": 2801.24, "text": " unstable. It goes up and down quite a bit. So negative one is probably like a"}, {"start": 2801.24, "end": 2808.6, "text": " fast learning rate. Let's try negative 10. Okay so this isn't optimizing. This is"}, {"start": 2808.6, "end": 2812.0, "text": " not working very well. So negative 10 is way too big. Negative one was already"}, {"start": 2812.0, "end": 2819.7599999999998, "text": " kind of big. So therefore negative one was like somewhat reasonable if I reset."}, {"start": 2819.7599999999998, "end": 2824.16, "text": " So I'm thinking that the right learning rate is somewhere between negative"}, {"start": 2824.16, "end": 2831.04, "text": " 0.001 and negative one. So the way we can do this here is we can use torque"}, {"start": 2831.04, "end": 2836.08, "text": " shut line space. And we want to basically do something like this between 0 and"}, {"start": 2836.08, "end": 2842.6, "text": " one but a number of steps is one more parameter that's required. Let's do a"}, {"start": 2842.6, "end": 2850.68, "text": " thousand steps. This creates 1000 numbers between 0.001 and 1. But it doesn't"}, {"start": 2850.68, "end": 2854.04, "text": " really make sense to step between these linearly. So instead let me create"}, {"start": 2854.04, "end": 2860.16, "text": " learning rate exponent. And instead of 0.001 this will be a negative three and"}, {"start": 2860.16, "end": 2864.6, "text": " this will be a zero. And then the actual errors that we want to search over are"}, {"start": 2864.6, "end": 2869.8399999999997, "text": " going to be 10 to the power of LRE. So now what we're doing is we're stepping"}, {"start": 2869.8399999999997, "end": 2874.7999999999997, "text": " linearly between the exponents of these learning rates. 
This is 0.001 and this is"}, {"start": 2874.7999999999997, "end": 2880.04, "text": " 1 because 10 to the power of 0 is 1. And therefore we are spaced"}, {"start": 2880.04, "end": 2885.24, "text": " exponentially in this interval. So these are the candidate learning rates that we"}, {"start": 2885.24, "end": 2891.9199999999996, "text": " want to sort of like search over roughly. So now what we're going to do is here we"}, {"start": 2891.9199999999996, "end": 2895.9199999999996, "text": " are going to run the optimization for 1000 steps. And instead of using a fixed"}, {"start": 2895.9199999999996, "end": 2903.0, "text": " number we are going to use learning rate indexing into here lRs of i and make"}, {"start": 2903.0, "end": 2909.8799999999997, "text": " this i. So basically let me reset this to be again starting from random."}, {"start": 2909.88, "end": 2918.2000000000003, "text": " Creating these learning rates between negative 0.001 and 1 but exponentially"}, {"start": 2918.2000000000003, "end": 2923.6800000000003, "text": " stepped. And here what we're doing is we're iterating a thousand times. We're"}, {"start": 2923.6800000000003, "end": 2928.56, "text": " going to use the learning rate that's in the beginning very very low. In the"}, {"start": 2928.56, "end": 2934.0, "text": " beginning it's going to be 0.001 but by the end it's going to be 1. And then"}, {"start": 2934.0, "end": 2938.56, "text": " we're going to step with that learning rate. And now what we want to do is we"}, {"start": 2938.56, "end": 2946.92, "text": " want to keep track of the learning rates that we used. And we want to look at the"}, {"start": 2946.92, "end": 2957.96, "text": " losses that resulted. And so here let me track stats. So lRi.append.plr and"}, {"start": 2957.96, "end": 2970.92, "text": " loss.append.loss.item. Okay so again reset everything and then run. And so"}, {"start": 2970.92, "end": 2973.52, "text": " basically we started with a very low learning rate and we went all the way up"}, {"start": 2973.52, "end": 2978.0, "text": " to learning rate of negative 1. And now what we can do is we can pedal to that"}, {"start": 2978.0, "end": 2982.96, "text": " plot and we can plot the two. So we can plot the learning rates on the x-axis"}, {"start": 2982.96, "end": 2988.16, "text": " and the losses we saw on the y-axis. And often you're going to find that your"}, {"start": 2988.16, "end": 2992.92, "text": " plot looks something like this. Where in the beginning you have very low"}, {"start": 2992.92, "end": 2997.88, "text": " learning rates. We basically anything barely anything happened. Then we got"}, {"start": 2997.88, "end": 3003.28, "text": " to like a nice spot here. And then as we increased the learning rate enough we"}, {"start": 3003.28, "end": 3007.32, "text": " basically started to be kind of unstable here. So a good learning rate turns"}, {"start": 3007.32, "end": 3015.48, "text": " out to be somewhere around here. And because we have lRi here we actually may"}, {"start": 3015.48, "end": 3023.52, "text": " want to do not lR not the learning rate but the exponent. So that would be the"}, {"start": 3023.52, "end": 3028.6800000000003, "text": " lRi at i is maybe what we want to log. So let me reset this and redo that"}, {"start": 3028.6800000000003, "end": 3036.1600000000003, "text": " calculation. But now on the x-axis we have the exponent of the learning rate. And so"}, {"start": 3036.16, "end": 3039.04, "text": " we can see the exponent of the learning rate that is good to use. 
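A sketch of the learning-rate sweep being set up here. The forward/backward/update step is elided (see the training loop sketch above); the stand-in loss only keeps the snippet runnable, and in practice you would record the real loss.item() at each candidate rate.

```python
import torch
import matplotlib.pyplot as plt

lre = torch.linspace(-3, 0, 1000)   # exponents, stepped linearly from -3 to 0
lrs = 10 ** lre                     # candidate learning rates from 0.001 to 1, spaced exponentially

lri, lossi = [], []
for i in range(1000):
    lr = lrs[i]
    # ... one forward / backward / parameter update with learning rate `lr` would go here ...
    loss = torch.rand(())           # stand-in for the real minibatch loss
    lri.append(lre[i].item())       # track the exponent; this is what ends up on the x-axis
    lossi.append(loss.item())

plt.plot(lri, lossi)                # look for the "valley" just before the losses explode
plt.show()
```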
It would be"}, {"start": 3039.04, "end": 3042.7999999999997, "text": " sort of like roughly in the valley here. Because here the learning rates are just"}, {"start": 3042.7999999999997, "end": 3046.68, "text": " way too low. And then here we expect relatively good learning rate somewhere"}, {"start": 3046.68, "end": 3051.0, "text": " here. And then here things are starting to explode. So somewhere around negative"}, {"start": 3051.0, "end": 3055.24, "text": " 1 x the exponent of the learning rate is a pretty good setting. And 10 to the"}, {"start": 3055.24, "end": 3061.64, "text": " negative 1 is 0.1. So 0.1 is actually a fairly good learning rate around here."}, {"start": 3061.64, "end": 3066.52, "text": " And that's what we had in the initial setting. But that's roughly how you"}, {"start": 3066.52, "end": 3073.3199999999997, "text": " would determine it. And so here now we can take out the tracking of these. And we"}, {"start": 3073.3199999999997, "end": 3079.2799999999997, "text": " can just simply set a lR to be 10 to the negative 1 or basically otherwise 0.1"}, {"start": 3079.2799999999997, "end": 3082.8799999999997, "text": " as it was before. And now we have some confidence that this is actually a fairly"}, {"start": 3082.8799999999997, "end": 3087.0, "text": " good learning rate. And so now what we can do is we can crank up the iterations."}, {"start": 3087.0, "end": 3095.0, "text": " We can reset our optimization. And we can run for a pretty long time using this"}, {"start": 3095.0, "end": 3100.52, "text": " learning rate. Oops. And we don't want to print. It's way too much printing. So"}, {"start": 3100.52, "end": 3112.12, "text": " let me again reset and run 10,000 steps. Okay, so we're 0.2 2.48 roughly. Let's"}, {"start": 3112.12, "end": 3122.7999999999997, "text": " run another 10,000 steps. 2.46. And now let's do one learning rate decay. What"}, {"start": 3122.7999999999997, "end": 3125.88, "text": " this means is we're going to take our learning rate and we're going to 10x"}, {"start": 3125.88, "end": 3130.4, "text": " lower it. And so over at the late stages of training potentially. And we may"}, {"start": 3130.4, "end": 3136.7599999999998, "text": " want to go a bit slower. Let's do one more actually at point one just to see if"}, {"start": 3136.76, "end": 3142.1600000000003, "text": " we're making an indent here. Okay, we're still making dent. And by the way the"}, {"start": 3142.1600000000003, "end": 3147.36, "text": " bi-gram loss that we achieved last video was 2.45. So we've already surpassed the"}, {"start": 3147.36, "end": 3151.44, "text": " bi-gram level. And once I get a sense that this is actually kind of starting to"}, {"start": 3151.44, "end": 3156.2000000000003, "text": " plateau off, people like to do as I mentioned this learning rate decay. So let's"}, {"start": 3156.2000000000003, "end": 3165.8, "text": " try to decay the loss, the learning rate I mean. And we achieve it about 2.3 now."}, {"start": 3165.8, "end": 3170.36, "text": " Obviously this is janky and not exactly how you train it in production. But this"}, {"start": 3170.36, "end": 3174.0800000000004, "text": " is roughly what you're going through. You first find a decent learning rate using"}, {"start": 3174.0800000000004, "end": 3177.5600000000004, "text": " the approach that I showed you. Then you start with that learning rate and you"}, {"start": 3177.5600000000004, "end": 3181.2000000000003, "text": " train for a while. 
And then at the end people like to do a learning rate decay"}, {"start": 3181.2000000000003, "end": 3184.7200000000003, "text": " where you decay the learning rate by say a factor of 10 and you do a few more"}, {"start": 3184.7200000000003, "end": 3189.44, "text": " steps. And then you get a trained network roughly speaking. So we've achieved"}, {"start": 3189.44, "end": 3194.2000000000003, "text": " 2.3 and dramatically improved on the bi-gram language model using this"}, {"start": 3194.2, "end": 3200.56, "text": " simple neural net as described here using these 3,400 parameters. Now there's"}, {"start": 3200.56, "end": 3204.48, "text": " something we have to be careful with. I said that we have a better model because"}, {"start": 3204.48, "end": 3209.68, "text": " we are achieving a lower loss 2.3 much lower than 2.45 with the bi-gram model"}, {"start": 3209.68, "end": 3216.56, "text": " previously. Now that's not exactly true. And the reason that's not true is that"}, {"start": 3216.56, "end": 3220.96, "text": " this is actually fairly small model. But these models can get larger and larger"}, {"start": 3220.96, "end": 3224.88, "text": " if you keep adding neurons and parameters. So you can imagine that we don't"}, {"start": 3224.88, "end": 3228.84, "text": " potentially have a thousand parameters. We could have 10,000 or 100,000 or millions"}, {"start": 3228.84, "end": 3234.0, "text": " of parameters. And as the capacity of the neural network grows it becomes more"}, {"start": 3234.0, "end": 3238.56, "text": " and more capable of overfitting your training set. What that means is that the"}, {"start": 3238.56, "end": 3242.8, "text": " loss on the training set on the data that you're training on will become very"}, {"start": 3242.8, "end": 3247.36, "text": " very low as low as zero. But all that the model is doing is memorizing your"}, {"start": 3247.36, "end": 3251.2400000000002, "text": " training set for bigum. So if you take that model and it looks like it's working"}, {"start": 3251.2400000000002, "end": 3255.1600000000003, "text": " really well but you try to sample from it you will basically only get examples"}, {"start": 3255.1600000000003, "end": 3259.76, "text": " exactly as they are in the training set. You won't get any new data. In addition"}, {"start": 3259.76, "end": 3264.36, "text": " to that if you try to evaluate the loss on some withheld names or other words"}, {"start": 3264.36, "end": 3268.88, "text": " you will actually see that the loss on those can be very high. As a"}, {"start": 3268.88, "end": 3273.1600000000003, "text": " basically it's not a good model. So the standard in the field it is to split up"}, {"start": 3273.16, "end": 3277.8799999999997, "text": " your data set into three splits as we call them. We have the training split, the"}, {"start": 3277.8799999999997, "end": 3286.3599999999997, "text": " dev split or the validation split and the test split. So training split test or"}, {"start": 3286.3599999999997, "end": 3293.44, "text": " sorry dev or validation split and test split. And typically this would be"}, {"start": 3293.44, "end": 3298.92, "text": " say 80% of your data set. This could be 10% and this 10% roughly. So you have"}, {"start": 3298.92, "end": 3303.96, "text": " these three splits of the data. 
Now these 80% of your trainings of the"}, {"start": 3303.96, "end": 3307.56, "text": " data set, the training set is used to optimize the parameters of the model"}, {"start": 3307.56, "end": 3313.2000000000003, "text": " just like we're doing here using gradient descent. These 10% of the examples"}, {"start": 3313.2000000000003, "end": 3317.36, "text": " the dev or validation split they're used for development over all the hyper"}, {"start": 3317.36, "end": 3321.7200000000003, "text": " parameters of your model. So hyper primers are for example the size of this"}, {"start": 3321.7200000000003, "end": 3326.04, "text": " hidden layer, the size of the embedding. So this is a hundred or a two for us"}, {"start": 3326.04, "end": 3330.24, "text": " or we could try different things. The strength of the realization which we"}, {"start": 3330.24, "end": 3333.96, "text": " aren't using yet so far. So there's lots of different hyper primers and"}, {"start": 3333.96, "end": 3337.44, "text": " settings that go into defining in your lot. And you can try many different"}, {"start": 3337.44, "end": 3342.48, "text": " variations of them and see whichever one works best on your validation split."}, {"start": 3342.48, "end": 3347.6, "text": " So this is used to train the primers. This is used to train the hyper"}, {"start": 3347.6, "end": 3352.92, "text": " primers and test split is used to evaluate basically the performance of the"}, {"start": 3352.92, "end": 3356.76, "text": " model at the end. So we're only evaluating the loss on the test split very"}, {"start": 3356.76, "end": 3361.28, "text": " very sparingly and very few times because every single time you evaluate your"}, {"start": 3361.28, "end": 3366.04, "text": " test loss and you learn something from it. You are basically starting to also"}, {"start": 3366.04, "end": 3372.0, "text": " train on the test split. So you are only allowed to test the loss on the test set"}, {"start": 3372.0, "end": 3377.7200000000003, "text": " very very few times. Otherwise you risk overfitting to it as well as you"}, {"start": 3377.7200000000003, "end": 3382.44, "text": " experiment on your model. So let's also split up our training data into"}, {"start": 3382.44, "end": 3387.64, "text": " train, dev and test. And then we are going to train on train and only evaluate"}, {"start": 3387.64, "end": 3393.0, "text": " on test very very sparingly. Okay so here we go. Here is where we took all the"}, {"start": 3393.0, "end": 3398.08, "text": " words and put them into x and y tensors. So instead let me create a new cell"}, {"start": 3398.08, "end": 3402.08, "text": " here and let me just copy paste some code here because I don't think it's that"}, {"start": 3402.08, "end": 3408.56, "text": " complex but we're gonna try to save a little bit of time. I'm converting this"}, {"start": 3408.56, "end": 3413.2, "text": " to be a function now and this function takes some list of words and builds the"}, {"start": 3413.2, "end": 3419.4, "text": " erase x and y for those words only. And then here I am shuffling up all the"}, {"start": 3419.4, "end": 3423.72, "text": " words. So these are the input words that we get. We are randomly shuffling them"}, {"start": 3423.72, "end": 3431.24, "text": " all up. And then we're going to set n1 to be the number of examples that is"}, {"start": 3431.24, "end": 3437.12, "text": " 80% of the words and n2 to be 90% of the way of the words. 
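A sketch of the 80/10/10 split described here; the file name and the shuffle seed are assumptions.

```python
import random

words = open('names.txt', 'r').read().splitlines()   # assumes the same names file as the lecture

random.seed(42)                  # assumed seed, so the split is reproducible
random.shuffle(words)
n1 = int(0.8 * len(words))
n2 = int(0.9 * len(words))

train_words = words[:n1]     # 80%: used to fit the parameters with gradient descent
dev_words   = words[n1:n2]   # 10%: used to tune hyperparameters
test_words  = words[n2:]     # 10%: touched only a handful of times, at the very end
print(len(train_words), len(dev_words), len(test_words))
```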
So basically if"}, {"start": 3437.12, "end": 3445.92, "text": " length of words is 30,000 and one is also I should probably run this. n1 is 25,000"}, {"start": 3445.92, "end": 3452.0, "text": " and n2 is 28,000. And so here we see that I'm calling build data set to build"}, {"start": 3452.0, "end": 3457.24, "text": " the training set x and y by indexing into up to n1. So we're going to have"}, {"start": 3457.24, "end": 3465.56, "text": " only 25,000 training words. And then we're going to have roughly n2 minus n1"}, {"start": 3465.56, "end": 3474.12, "text": " 3,000 validation examples or dev examples. And we're going to have a length of"}, {"start": 3474.12, "end": 3485.2799999999997, "text": " words basically minus n2 or 3,200 and 4 examples here for the test set. So now we"}, {"start": 3485.2799999999997, "end": 3494.6, "text": " have x is and y's for all those three splits. Oh yeah I'm printing their size"}, {"start": 3494.6, "end": 3500.8399999999997, "text": " here inside it function as well. But here we don't have words but these are"}, {"start": 3500.8399999999997, "end": 3506.2799999999997, "text": " already the individual examples made from those words. So let's now scroll down"}, {"start": 3506.2799999999997, "end": 3514.12, "text": " here. And the data set now for training is more like this. And then when we"}, {"start": 3514.12, "end": 3520.68, "text": " reset the network, when we're training, we're only going to be training"}, {"start": 3520.68, "end": 3531.3199999999997, "text": " using x train x train and y train. So that's the only thing we're training on."}, {"start": 3537.44, "end": 3546.2, "text": " Let's see where we are on a single batch. Let's now train maybe a few more steps."}, {"start": 3546.2, "end": 3551.3599999999997, "text": " Training on neural hours can take a while. Usually you don't do it in line. You"}, {"start": 3551.3599999999997, "end": 3555.52, "text": " launch a bunch of jobs and you wait for them to finish. You can take multiple"}, {"start": 3555.52, "end": 3562.24, "text": " days and so on. Luckily this is a very small network. Okay so the loss is"}, {"start": 3562.24, "end": 3567.8799999999997, "text": " pretty good. Oh we accidentally used our learning rate. That is way too low. So"}, {"start": 3567.8799999999997, "end": 3575.7999999999997, "text": " let me actually come back. We used the the K learning rate of 0.01. So this will"}, {"start": 3575.8, "end": 3582.7200000000003, "text": " train faster. And then here when we evaluate, let's use the dev set here. X"}, {"start": 3582.7200000000003, "end": 3590.2400000000002, "text": " dev and Y dev to evaluate the loss. Okay. And let's not decay the learning"}, {"start": 3590.2400000000002, "end": 3598.2400000000002, "text": " rate and only do say 10,000 examples. And let's evaluate the dev loss once"}, {"start": 3598.2400000000002, "end": 3602.6800000000003, "text": " here. Okay so we're getting about 2.3 on dev. And so the neural network running"}, {"start": 3602.68, "end": 3607.2999999999997, "text": " was training did not see these dev examples. It hasn't optimized on them. And"}, {"start": 3607.2999999999997, "end": 3611.7999999999997, "text": " yet when we evaluate the loss on these dev, we actually get a pretty decent loss."}, {"start": 3611.7999999999997, "end": 3621.08, "text": " And so we can also look at what the loss is on all of training set. Oops. And so"}, {"start": 3621.08, "end": 3625.3999999999996, "text": " we see that the training and the dev loss are about equal. 
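A sketch of the build_dataset function being described, assuming a stoi character-to-index mapping with '.' at index 0 (as built earlier in the video) and a block size of 3 as before.

```python
import torch

block_size = 3   # context length: how many characters we take to predict the next one

def build_dataset(words, stoi):
    # stoi: assumed dict mapping characters to integers, with '.' at index 0
    X, Y = [], []
    for w in words:
        context = [0] * block_size
        for ch in w + '.':
            ix = stoi[ch]
            X.append(context)
            Y.append(ix)
            context = context[1:] + [ix]   # crop and append: slide the context window
    X = torch.tensor(X)
    Y = torch.tensor(Y)
    print(X.shape, Y.shape)
    return X, Y

# usage sketch, given the train/dev/test word lists from the split above:
# Xtr,  Ytr  = build_dataset(train_words, stoi)
# Xdev, Ydev = build_dataset(dev_words, stoi)
# Xte,  Yte  = build_dataset(test_words, stoi)
```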
So we're not overfitting."}, {"start": 3625.3999999999996, "end": 3631.48, "text": " This model is not powerful enough to just be purely memorizing the data. And so"}, {"start": 3631.48, "end": 3636.04, "text": " far we are what's called underfitting because the training loss and the dev or"}, {"start": 3636.04, "end": 3640.44, "text": " test losses are roughly equal. So what that typically means is that our network"}, {"start": 3640.44, "end": 3645.72, "text": " is very tiny, very small. And we expect to make performance improvements by"}, {"start": 3645.72, "end": 3649.52, "text": " scaling up the size of this neural net. So let's do that now. So let's come over"}, {"start": 3649.52, "end": 3654.04, "text": " here. And let's increase the size within your net. The easiest way to do this is"}, {"start": 3654.04, "end": 3657.36, "text": " we can come here to the hidden layer, which currently is 100 neurons. And let's"}, {"start": 3657.36, "end": 3663.4, "text": " just bump this up. So let's do 300 neurons. And then this is also 300 biases. And"}, {"start": 3663.4, "end": 3670.44, "text": " here we have 300 inputs into the final layer. So let's initialize our neural net."}, {"start": 3670.44, "end": 3676.4, "text": " We now have 10,000, 10,000 parameters instead of 3,000 parameters. And then"}, {"start": 3676.4, "end": 3681.92, "text": " we're not using this. And then here what I'd like to do is I'd like to actually keep track of"}, {"start": 3681.92, "end": 3692.2000000000003, "text": " that. Okay, let's just do this. Let's keep stats again. And here when we're keeping"}, {"start": 3692.2000000000003, "end": 3699.56, "text": " track of the loss, let's just also keep track of the steps. And let's just have"}, {"start": 3699.56, "end": 3708.96, "text": " eye here. And let's train on 30,000 or rather say, okay, let's try 30,000. And we are at"}, {"start": 3708.96, "end": 3719.36, "text": " 0.1. And we should alter on this, not as near a lot. And then here basically I want to"}, {"start": 3719.36, "end": 3732.64, "text": " plt dot plot the steps and things to the loss. So these are the x's and the y's. And this"}, {"start": 3732.64, "end": 3737.88, "text": " is the last function and how it's being optimized. Now you see that there's quite a"}, {"start": 3737.88, "end": 3742.1600000000003, "text": " bit of thickness to this. And that's because we are optimizing over these mini batches."}, {"start": 3742.1600000000003, "end": 3747.92, "text": " And the mini batches create a little bit of noise in this. Where are we in the deficit?"}, {"start": 3747.92, "end": 3752.52, "text": " We are at 2.5. So we're still having to optimize this neural net very well. And that's"}, {"start": 3752.52, "end": 3757.92, "text": " probably because we make it bigger. It might take longer for this neural net to converge."}, {"start": 3757.92, "end": 3767.48, "text": " And so let's continue training. Yeah, let's just continue training. One possibility is"}, {"start": 3767.48, "end": 3773.08, "text": " that the batch size is solo that we just have way too much noise in the training. And"}, {"start": 3773.08, "end": 3777.52, "text": " we may want to increase the batch size so that we have a bit more correct gradient. And"}, {"start": 3777.52, "end": 3789.2, "text": " we're not thrashing too much. And we can actually like optimize more properly. Okay. This"}, {"start": 3789.2, "end": 3795.48, "text": " will now become meaningless because we've re-initialized these. 
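For reference, a sketch of the scaled-up initialization with 300 hidden neurons, which lands at roughly the 10,000 parameters mentioned; the seed is an assumption.

```python
import torch

g = torch.Generator().manual_seed(2147483647)    # assumed seed

n_hidden = 300                                   # bumped up from 100
C  = torch.randn((27, 2), generator=g)
W1 = torch.randn((6, n_hidden), generator=g)
b1 = torch.randn(n_hidden, generator=g)
W2 = torch.randn((n_hidden, 27), generator=g)
b2 = torch.randn(27, generator=g)
parameters = [C, W1, b1, W2, b2]
print(sum(p.nelement() for p in parameters))     # about 10,000 parameters now
for p in parameters:
    p.requires_grad = True
```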
So yeah, this looks not pleasing"}, {"start": 3795.48, "end": 3800.84, "text": " right now. But the problem is look at tiny improvement, but it's so hard to tell. Let's"}, {"start": 3800.84, "end": 3830.8, "text": " go again. 2.5.2. Let's try to decrease the learning rate by factor of 2. Okay, we're"}, {"start": 3830.8, "end": 3848.2000000000003, "text": " 2.3.2. Let's continue training. We basically expect to see a lower loss than what we had"}, {"start": 3848.2000000000003, "end": 3852.5600000000004, "text": " before because now we have a much, much bigger model. And we were underfitting. So we'd"}, {"start": 3852.5600000000004, "end": 3857.5600000000004, "text": " expect that increasing the size of the model should help the neural net. 2.3.2. Okay,"}, {"start": 3857.56, "end": 3861.92, "text": " so that's not happening too well. Now, one other concern is that even though we've made"}, {"start": 3861.92, "end": 3866.72, "text": " the 10H layer here or the hidden layer much, much bigger, it could be that the bottleneck"}, {"start": 3866.72, "end": 3871.04, "text": " of the network right now are these embeddings that are too dimensional. It can be that"}, {"start": 3871.04, "end": 3874.7999999999997, "text": " we're just cramming way too many characters into just two dimensions. And the neural net"}, {"start": 3874.7999999999997, "end": 3879.88, "text": " is not able to really use that space effectively. And that that is sort of like the bottleneck"}, {"start": 3879.88, "end": 3886.12, "text": " to our networks performance. Okay, 2.23. So just by decreasing the learning rate, I was able"}, {"start": 3886.12, "end": 3892.64, "text": " to make quite a bit of progress. Let's run this one more time. And then evaluate the"}, {"start": 3892.64, "end": 3899.52, "text": " training and the dev loss. Now, one more thing after training that I'd like to do is I'd"}, {"start": 3899.52, "end": 3908.44, "text": " like to visualize the embedding vectors for these characters before we scale up the embedding"}, {"start": 3908.44, "end": 3914.24, "text": " size from 2. Because we'd like to make this bottleneck potentially go away. But once"}, {"start": 3914.24, "end": 3918.8799999999997, "text": " I make this greater than two, we won't be able to visualize them. So here, okay, we're"}, {"start": 3918.8799999999997, "end": 3925.72, "text": " at 2.23 and 2.24. So we're not improving much more. And maybe the bottleneck now is the"}, {"start": 3925.72, "end": 3930.2799999999997, "text": " character embedding size, which is two. So here I have a bunch of code that will create"}, {"start": 3930.2799999999997, "end": 3935.9199999999996, "text": " a figure. And then we're going to visualize the embeddings that were trained by the neural"}, {"start": 3935.9199999999996, "end": 3941.04, "text": " net on these characters. Because right now the embedding size is just two. So we can visualize"}, {"start": 3941.04, "end": 3945.56, "text": " all the characters with the x and the y coordinates as the two embedding locations for each of"}, {"start": 3945.56, "end": 3951.92, "text": " these characters. And so here are the x coordinates and the y coordinates, which are the columns"}, {"start": 3951.92, "end": 3958.92, "text": " of c. And then for each one, I also include the text of the little character. So here,"}, {"start": 3958.92, "end": 3965.08, "text": " what we see is actually kind of interesting. 
The network has basically learned to separate"}, {"start": 3965.08, "end": 3972.08, "text": " out the characters and cluster them a little bit. So for example, you see how the vowels,"}, {"start": 3972.08, "end": 3975.2, "text": " A, E, I, O, U are clustered up here. So what that's telling us is that the neural net treats"}, {"start": 3975.2, "end": 3980.2799999999997, "text": " these is very similar, right? Because when they feed into the neural net, the embedding"}, {"start": 3980.2799999999997, "end": 3983.92, "text": " for all these characters is very similar. And so the neural net thinks that they're very"}, {"start": 3983.92, "end": 3990.7599999999998, "text": " similar and kind of like interchangeable. And that makes sense. Then the points that"}, {"start": 3990.76, "end": 3995.1200000000003, "text": " are like really far away are, for example, Q. Q is kind of treated as an exception. And"}, {"start": 3995.1200000000003, "end": 4000.5600000000004, "text": " Q has a very special embedding vector, so to speak. Similarly, dot, which is a special"}, {"start": 4000.5600000000004, "end": 4005.0800000000004, "text": " character is all the way out here. And a lot of the other letters are sort of like clustered"}, {"start": 4005.0800000000004, "end": 4009.32, "text": " up here. And so it's kind of interesting that there's a little bit of structure here"}, {"start": 4009.32, "end": 4016.28, "text": " after the training. And it's not definitely not random. And these embeddings make sense."}, {"start": 4016.28, "end": 4020.6000000000004, "text": " So we're now going to scale up the embedding size and won't be able to visualize it directly."}, {"start": 4020.6, "end": 4026.44, "text": " And we expect that because we're underpinning and we made this layer much bigger and did"}, {"start": 4026.44, "end": 4032.52, "text": " not sufficiently improve the loss, we're thinking that the constraint to better performance"}, {"start": 4032.52, "end": 4036.88, "text": " right now could be these embedding vectors. So let's make them bigger. Okay, so let's"}, {"start": 4036.88, "end": 4042.04, "text": " crawl up here. And now we don't have two dimensional embeddings. We are going to have, say,"}, {"start": 4042.04, "end": 4050.0, "text": " 10 dimensional embeddings for each word. Then this layer will receive three times 10."}, {"start": 4050.0, "end": 4057.68, "text": " So 30 inputs will go into the hidden layer. Let's also make the hidden layer a bit smaller."}, {"start": 4057.68, "end": 4062.72, "text": " So instead of 300, let's just do 200 neurons in that hidden layer. So now the total number"}, {"start": 4062.72, "end": 4068.92, "text": " of elements will be slightly bigger at 11,000. And then we here, we have to be a bit careful"}, {"start": 4068.92, "end": 4075.52, "text": " because, okay, the learning rate we set to point one. Here we are a hard code in six."}, {"start": 4075.52, "end": 4079.72, "text": " And obviously if you're working in production, you don't want to be hard coding magic numbers."}, {"start": 4079.72, "end": 4087.2799999999997, "text": " But instead of six, this should now be 30. And let's run for 50,000 iterations and let"}, {"start": 4087.2799999999997, "end": 4093.04, "text": " me split out the initialization here outside so that when we run this a multiple times"}, {"start": 4093.04, "end": 4101.8, "text": " is not going to wipe out our loss. 
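A sketch of the embedding visualization described here; the trained C and the itos mapping are assumed, with random stand-ins so the snippet runs on its own.

```python
import torch
import matplotlib.pyplot as plt

C = torch.randn(27, 2)                                        # stand-in for the trained (27, 2) embedding table
itos = {i: s for i, s in enumerate('.abcdefghijklmnopqrstuvwxyz')}   # index -> character, '.' at 0

plt.figure(figsize=(8, 8))
plt.scatter(C[:, 0].data, C[:, 1].data, s=200)
for i in range(C.shape[0]):
    # write each character on top of its 2D embedding location
    plt.text(C[i, 0].item(), C[i, 1].item(), itos[i], ha="center", va="center", color='white')
plt.grid(True)
plt.show()
```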
In addition to that here, let's instead of logging"}, {"start": 4101.8, "end": 4110.52, "text": " in lost items, let's actually log the, let's do log 10, I believe that's a function of"}, {"start": 4110.52, "end": 4117.88, "text": " the loss. And I'll show you why in a second, let's optimize this. Basically, I'd like"}, {"start": 4117.88, "end": 4122.2, "text": " to plot the log loss instead of the loss because when you plot the loss, many times it can"}, {"start": 4122.2, "end": 4129.360000000001, "text": " have this hockey stick appearance and log squashes it in. So it just kind of looks nicer."}, {"start": 4129.36, "end": 4143.88, "text": " So the x-axis is step i and the y-axis will be the loss i. And then here this is 30. Ideally,"}, {"start": 4143.88, "end": 4152.92, "text": " we wouldn't be hard coding these. Because let's look at the loss. Okay, it's again very"}, {"start": 4152.92, "end": 4157.36, "text": " thick because the mini batch size is very small. But the total loss over the training set"}, {"start": 4157.36, "end": 4163.96, "text": " is 2.3 and the the test or the dev set is 2.3 as well. So so far so good. Let's try to"}, {"start": 4163.96, "end": 4175.639999999999, "text": " now decrease the learning rate by a factor of 10 and train for another 50,000 iterations."}, {"start": 4175.639999999999, "end": 4184.92, "text": " We'd hope that we would be able to beat 2.3. But again, we're just kind of like doing"}, {"start": 4184.92, "end": 4189.4400000000005, "text": " this very haphazardly. So I don't actually have confidence that our learning rate is set"}, {"start": 4189.4400000000005, "end": 4195.68, "text": " very well. That our learning rate decay, which we just do at random is set very well. And"}, {"start": 4195.68, "end": 4199.84, "text": " so the optimization here is kind of suspects to be honest. And this is not how you would"}, {"start": 4199.84, "end": 4204.32, "text": " do a typically production. In production, you would create parameters or hyper parameters"}, {"start": 4204.32, "end": 4207.92, "text": " out of all these settings. And then you would run lots of experiments and see whichever"}, {"start": 4207.92, "end": 4217.32, "text": " ones are working well for you. Okay, so we have 2.17 now and 2.2. Okay, so you see how"}, {"start": 4217.32, "end": 4223.72, "text": " the training and the validation performance are starting to slightly slowly depart. So"}, {"start": 4223.72, "end": 4229.28, "text": " maybe we're getting the sense that the neural net is getting good enough or that number"}, {"start": 4229.28, "end": 4235.76, "text": " parameters are large enough that we are slowly starting to overfit. Let's maybe run"}, {"start": 4235.76, "end": 4243.400000000001, "text": " one more iteration of this and see where we get. But yeah, basically you would be running"}, {"start": 4243.400000000001, "end": 4247.16, "text": " lots of experiments and then you are slowly scrutinizing whichever ones give you the best"}, {"start": 4247.16, "end": 4251.88, "text": " death performance. And then once you find all the hyper parameters that make your death"}, {"start": 4251.88, "end": 4256.280000000001, "text": " performance good, you take that model and you evaluate the test set performance a single"}, {"start": 4256.280000000001, "end": 4260.4800000000005, "text": " time. And that's the number that you report in your paper or wherever else you want to"}, {"start": 4260.48, "end": 4268.679999999999, "text": " talk about and brag about your model. 
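A sketch of the larger configuration being set up here: 10-dimensional embeddings, 200 hidden neurons, and 3 x 10 = 30 inputs into the hidden layer, which comes out to roughly the 11,000 parameters mentioned; the log10 trick for plotting is shown as a comment.

```python
import torch

g = torch.Generator().manual_seed(2147483647)    # assumed seed

emb_dim, n_hidden, block_size = 10, 200, 3
C  = torch.randn((27, emb_dim), generator=g)
W1 = torch.randn((emb_dim * block_size, n_hidden), generator=g)   # 30 inputs, replacing the hard-coded 6
b1 = torch.randn(n_hidden, generator=g)
W2 = torch.randn((n_hidden, 27), generator=g)
b2 = torch.randn(27, generator=g)
parameters = [C, W1, b1, W2, b2]
print(sum(p.nelement() for p in parameters))     # roughly 11,000 parameters

# when tracking the loss for plotting, squash the hockey stick:
# lossi.append(loss.log10().item())
```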
So let's then rerun the plot and rerun the train"}, {"start": 4268.679999999999, "end": 4274.959999999999, "text": " and death. And because we're getting lower loss now, it is the case that the embedding"}, {"start": 4274.959999999999, "end": 4283.04, "text": " size of these was holding us back very likely. Okay, so 2.16 to 0.19 is what we're roughly"}, {"start": 4283.04, "end": 4288.44, "text": " getting. So there's many ways to go from many ways to go from here. We can continue"}, {"start": 4288.44, "end": 4293.04, "text": " tuning the optimization. We can continue for example playing with the size of the neural"}, {"start": 4293.04, "end": 4298.799999999999, "text": " net or we can increase the number of words or characters in our case that we are taking"}, {"start": 4298.799999999999, "end": 4302.44, "text": " as an input. So instead of just three characters, we could be taking more characters than as"}, {"start": 4302.44, "end": 4308.32, "text": " an input. And that could further improve the loss. Okay, so I changed the code slightly."}, {"start": 4308.32, "end": 4313.759999999999, "text": " So we have here 200,000 steps of the optimization. And in the first 100,000, we're using a learning"}, {"start": 4313.76, "end": 4319.12, "text": " rate of 0.1. And then in the next 100,000, we're using a learning rate of 0.01. This is the"}, {"start": 4319.12, "end": 4324.4800000000005, "text": " loss that I achieve. And these are the performance on the training and validation loss. And in"}, {"start": 4324.4800000000005, "end": 4328.04, "text": " particular, the best validation loss I've been able to obtain in the last 30 minutes or"}, {"start": 4328.04, "end": 4334.6, "text": " so is 2.17. So now I invite you to beat this number. And you have quite a few knobs available"}, {"start": 4334.6, "end": 4339.280000000001, "text": " to you to I think surpass this number. So number one, you can of course change the number"}, {"start": 4339.280000000001, "end": 4343.72, "text": " of neurons in the hidden layer of this model. You can change the dimensionality of the embedding"}, {"start": 4343.72, "end": 4350.08, "text": " lookup table. You can change the number of characters that are feeding in as an input, as"}, {"start": 4350.08, "end": 4355.360000000001, "text": " the context into this model. And then of course, you can change the details of the optimization."}, {"start": 4355.360000000001, "end": 4359.84, "text": " How long are we running? What is the learning rate? How does it change over time? How does"}, {"start": 4359.84, "end": 4364.400000000001, "text": " it decay? You can change the batch size and you may be able to actually achieve a much"}, {"start": 4364.400000000001, "end": 4369.52, "text": " better convergence speed in terms of how many seconds or minutes it takes to train the"}, {"start": 4369.52, "end": 4376.96, "text": " model and get your result in terms of really good loss. And then of course, I actually"}, {"start": 4376.96, "end": 4380.96, "text": " invite you to read this paper. It is 19 pages, but at this point you should actually be"}, {"start": 4380.96, "end": 4387.120000000001, "text": " able to read a good chunk of this paper and understand pretty good chunks of it. And"}, {"start": 4387.120000000001, "end": 4391.6, "text": " this paper also has quite a few ideas for improvements that you can play with. So all"}, {"start": 4391.6, "end": 4395.68, "text": " of those are not available to you and you should be able to beat this number. 
I'm leaving that as an exercise to the reader, and that's it for now; I'll see you next time. Before we wrap up, I also wanted to show how you would sample from the model. So we're going to generate 20 samples. At first we begin with all dots, so that's the context. And then, until we generate the index-zero character (the end token) again, we're going to embed the current context using the embedding table C. Now usually here the first dimension was the size of the training set, but here we're only working with a single example that we're generating, so this is just dimension one, just for simplicity. And so this embedding then gets projected into the hidden state; you get the logits. Now we calculate the probabilities. For that, you can use F.softmax of the logits, and that just basically exponentiates the logits and makes them sum to one. And, similar to cross-entropy, it is careful that there are no overflows. Once we have the probabilities, we sample from them using torch.multinomial to get our next index, and then we shift the context window to append the index, and record it. And then we can just decode all the integers to strings and print them out. And so these are some example samples, and you can see that the model now works much better. So the words here are much more word-like or name-like. We have things like "ham", "joes", "lele"; it started to sound a little bit more name-like. So we're definitely making progress, but we can still improve on this model quite a lot. Okay, sorry, there's some bonus content. I wanted to mention that I want to make these notebooks more accessible, and so I don't want you to have to install Jupyter notebooks and torch and everything else. So I will be sharing a link to a Google Colab, and the Google Colab will look like a notebook in your browser. And you can just go to a URL and you'll be able to execute all of the code that you saw in the Google Colab. And so this is me executing the code in this lecture. I shortened it a little bit, but basically you're able to train the exact same network and then plot and sample from the model, and everything is ready for you to tinker with the numbers right there in your browser.
No installation necessary. So I just wanted to point that out, and the link to this will be in the video description.
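For reference, here is a rough sketch of the sampling loop described a moment ago. It assumes the trained parameters C, W1, b1, W2, b2, the context length block_size, and the itos integer-to-character mapping from the lecture notebook are already in scope; the seed value is arbitrary.

import torch
import torch.nn.functional as F

g = torch.Generator().manual_seed(2147483647 + 10)
for _ in range(20):                                   # generate 20 samples
    out = []
    context = [0] * block_size                        # start with all '.' tokens
    while True:
        emb = C[torch.tensor([context])]              # shape (1, block_size, n_embd)
        h = torch.tanh(emb.view(1, -1) @ W1 + b1)     # hidden state
        logits = h @ W2 + b2
        probs = F.softmax(logits, dim=1)              # exponentiate and normalize to sum to 1
        ix = torch.multinomial(probs, num_samples=1, generator=g).item()
        context = context[1:] + [ix]                  # shift the context window
        out.append(ix)
        if ix == 0:                                   # sampled the '.' end token
            break
    print(''.join(itos[i] for i in out))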
Neural Networks: Zero to Hero
https://www.youtube.com/watch?v=PaCmpygFfXo
The spelled-out intro to language modeling: building makemore
We implement a bigram character-level language model, which we will further complexify in followup videos into a modern Transformer language model, like GPT. In this video, the focus is on (1) introducing torch.Tensor and its subtleties and use in efficiently evaluating neural networks and (2) the overall framework of language modeling that includes model training, sampling, and the evaluation of a loss (e.g. the negative log likelihood for classification). Links: - makemore on github: https://github.com/karpathy/makemore - jupyter notebook I built in this video: https://github.com/karpathy/nn-zero-to-hero/blob/master/lectures/makemore/makemore_part1_bigrams.ipynb - my website: https://karpathy.ai - my twitter: https://twitter.com/karpathy - (new) Neural Networks: Zero to Hero series Discord channel: https://discord.gg/Hp2m3kheJn , for people who'd like to chat more and go beyond youtube comments Useful links for practice: - Python + Numpy tutorial from CS231n https://cs231n.github.io/python-numpy-tutorial/ . We use torch.tensor instead of numpy.array in this video. Their design (e.g. broadcasting, data types, etc.) is so similar that practicing one is basically practicing the other, just be careful with some of the APIs - how various functions are named, what arguments they take, etc. - these details can vary. - PyTorch tutorial on Tensor https://pytorch.org/tutorials/beginner/basics/tensorqs_tutorial.html - Another PyTorch intro to Tensor https://pytorch.org/tutorials/beginner/nlp/pytorch_tutorial.html Exercises: E01: train a trigram language model, i.e. take two characters as an input to predict the 3rd one. Feel free to use either counting or a neural net. Evaluate the loss; Did it improve over a bigram model? E02: split up the dataset randomly into 80% train set, 10% dev set, 10% test set. Train the bigram and trigram models only on the training set. Evaluate them on dev and test splits. What can you see? E03: use the dev set to tune the strength of smoothing (or regularization) for the trigram model - i.e. try many possibilities and see which one works best based on the dev set loss. What patterns can you see in the train and dev set loss as you tune this strength? Take the best setting of the smoothing and evaluate on the test set once and at the end. How good of a loss do you achieve? E04: we saw that our 1-hot vectors merely select a row of W, so producing these vectors explicitly feels wasteful. Can you delete our use of F.one_hot in favor of simply indexing into rows of W? E05: look up and use F.cross_entropy instead. You should achieve the same result. Can you think of why we'd prefer to use F.cross_entropy instead? E06: meta-exercise! Think of a fun/interesting exercise and complete it. Chapters: 00:00:00 intro 00:03:03 reading and exploring the dataset 00:06:24 exploring the bigrams in the dataset 00:09:24 counting bigrams in a python dictionary 00:12:45 counting bigrams in a 2D torch tensor ("training the model") 00:18:19 visualizing the bigram tensor 00:20:54 deleting spurious (S) and (E) tokens in favor of a single . token 00:24:02 sampling from the model 00:36:17 efficiency! vectorized normalization of the rows, tensor broadcasting 00:50:14 loss function (the negative log likelihood of the data under our model) 01:00:50 model smoothing with fake counts 01:02:57 PART 2: the neural network approach: intro 01:05:26 creating the bigram dataset for the neural net 01:10:01 feeding integers into neural nets? 
one-hot encodings 01:13:53 the "neural net": one linear layer of neurons implemented with matrix multiplication 01:18:46 transforming neural net outputs into probabilities: the softmax 01:26:17 summary, preview to next steps, reference to micrograd 01:35:49 vectorized loss 01:38:36 backward and update, in PyTorch 01:42:55 putting everything together 01:47:49 note 1: one-hot encoding really just selects a row of the next Linear layer's weight matrix 01:50:18 note 2: model smoothing as regularization loss 01:54:31 sampling from the neural net 01:56:16 conclusion
Hi everyone, hope you're well. And next up what I'd like to do is I'd like to build out Makemore. Like micrograd before it, Makemore is a repository that I have on my GitHub web page. You can look at it. But just like with micrograd, I'm going to build it out step by step and I'm going to spell everything out. So we're going to build it out slowly and together. Now, what is Makemore? Makemore, as the name suggests, makes more of things that you give it. So here's an example. Names.TXT is an example dataset to make more. And when you look at names.TXT, you'll find that it's a very large dataset of names. So here's lots of different types of names. In fact, I believe there are 32,000 names that I've sort of found randomly on the government website. And if you trade Makemore on this dataset, it will learn to make more of things like this. And in particular, in this case, that will mean more things that sound name-like, but are actually unique names. And maybe if you have a baby and you're trying to assign a name, maybe you're looking for a cool new sounding unique name, Makemore might help you. So here are some examples of generations from the neural network. Once we train it on our dataset. So here's some examples of unique names that it will generate. Don't tell, I rot, Zendi, and so on. And so all these sort of sound name-like, but they're not, of course, names. So under the hood, Makemore is a character-level language model. So what that means is that it is treating every single line here as an example. And within each example, it's treating them all as sequences of individual characters. So R-E-E-S-E is this example. And that's the sequence of characters. And that's the level on which we are building out Makemore. And what it means to be a character-level language model then is that it's just sort of modeling those sequences of characters and it knows how to predict the next character in the sequence. Now, we're actually going to implement a large number of character-level language models in terms of the neural networks that are involved in predicting the next character in a sequence. So very simple, bi-gram and bag of word models, multilaylor perceptrons, recurring neural networks, all the way to modern transformers. In fact, a transformer that we will build will be basically the equivalent transformer to GPT2 if you have heard of GPT. So that's kind of a big deal. It's a modern network and by the end of the series, you will actually understand how that works on the level of characters. Now, to give you a sense of the extensions here, after characters, we will probably spend some time on the word level so that we can generate documents of words, not just little segments of characters, but we can generate entire much larger documents. And then we're probably going to go into images and image text networks, such as Dali, stable diffusion, and so on. But for now, we have to start here, careful level language modeling. Let's go. So like before, we are starting with a completely blank GPNodebit page. The first thing is I would like to basically load up the dataset, names.txt. So we're going to open up names.txt for reading. And we're going to read in everything into a massive string. And then because it's a massive string, we'd only like the individual words and put them in the list. So let's call split lines on that string to get all of our words as a Python list of strings. So basically we can look at, for example, the first 10 words. 
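A minimal sketch of the loading step just described, assuming names.txt sits in the working directory:

words = open('names.txt', 'r').read().splitlines()   # one name per line
print(words[:10])                                     # the first 10 words
print(len(words))                                     # roughly 32,000 names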
And we have that it's a list of Emma, Olivia, Eva, and so on. And if we look at the top of the page here, that is indeed what we see. So that's good. This list actually makes me feel that this is probably sorted by frequency. But okay, so these are the words. Now we'd like to actually learn a little bit more about this dataset. Let's look at the total number of words. We expect this to be roughly 32,000. And then what is the, for example, shortest word. So min of length of each word for W in words. So the shortest word will be length 2. And max of length W for W in words. So the longest word will be 15 characters. So let's now think through our very first language model. As I mentioned, a character level and good model is predicting the next character in a sequence given already some concrete sequence of characters before it. Now what we have to realize here is that every single word here, like Isabella, is actually quite a few examples packed in to that single word. Because what is an existence of a word like Isabella and the dataset telling us really? It's saying that the character I is a very likely character to come first in a sequence of a name. The character S is likely to come after I. The character A is likely to come after I.S. The character B is very likely to come after I.S.A. And someone all the way to A following Isabella. And then there's one more example actually packed in here. And that is that after there's Isabella, the word is very likely to end. So that's one more sort of explicit piece of information that we have here, that we have to be careful with. And so there's a lot packed into a single individual word in terms of the statistical structure of what's likely to follow in these character sequences. And then of course we don't have just an individual word. We actually have 32,000 of these. And so there's a lot of structure here to model. Now in the beginning, what I'd like to start with is I'd like to start with building a program language model. Now in a by-gram language model, we're always working with just two characters at a time. So we're only looking at one character that we are given and we're trying to predict the next character in the sequence. So what characters are likely to follow are what characters are likely to follow. Hey, and so on. And we're just modeling that kind of a little local structure. And we're forgetting the fact that we may have a lot more information. We're always just looking at the previous character to predict the next one. So it's a very simple and weak language model, but I think it's a great place to start. So now let's begin by looking at these by-grams in our data set and what they look like. And these by-grams again are just two characters in a row. So for WNWords, HW here is an individual word string. We want to iterate for, we want to iterate this word with consecutive characters. So two characters at a time, sliding it through the word. Now, a interesting nice way, cute way to this in Python, by the way, is doing something like this. For character on character two, in zip of W and W at one. One call. Print, character on character two. And let's not do all the words. Let's just do the first three words. And I'm going to show you in a second how this works. But from now, basically as an example, let's just do the very first word alone. You see how we have a M up. And this will just print EM, M-M, M-A. And the reason this works is because W is the string M up. W at one column is the string M-A. And zip takes two iterators. 
And it pairs them up and then creates an iterator over the tuples of their consecutive entries. And if any one of these lists is shorter than the other, then it will just halt and return. So basically that's why we return EM, M-M, M-M, M-A. But then because this iterator's second one here runs out of elements, zip just ends. And that's why we only get these tuples. So pretty cute. So these are the consecutive elements in the first word. Now we have to be careful because we actually have more information here than just these three examples. As I mentioned, we know that E is very likely to come first. And we know that A in this case is coming last. So one way to do this is basically we're going to create a special array here, our characters. And we're going to hallucinate a special start token here. I'm going to call it like special start. So this is a list of one element. And plus W. And then plus a special end character. And the reason I'm wrapping a list of W here is because W is a string, M-A, list of W will just have the individual characters in the list. And then doing this again now, but not iterating over W's, but over the characters. We'll give us something like this. So E is likely, so this is a bygram of the start character and E. And this is a bygram of the A in the special end character. And now we can look at, for example, what this looks like for Olivia or Eva. And indeed, we can actually, especially this for the entire dataset. But we won't print that. That's going to be too much. But these are the individual character bygrams and we can print them. Now, in order to learn the statistics about which characters are likely to follow other characters, the simplest way in the bygram language models is to simply do it by counting. So we're basically just going to count how often any one of these combinations occurs in the training set in these words. So we're going to need some kind of a dictionary that's going to maintain some counts for every one of these bygrams. So let's use a dictionary B. And this will map these bygrams. So bygram is a tuple of character on character two. And then B at bygram will be B dot get of bygram, which is basically the same as B at bygram. But in the case that bygram is not in the dictionary B, we would like to buy default or term zero plus one. So this will basically add up all the bygrams and count how often they occur. Let's get rid of printing or rather let's keep the printing and let's just inspect what B is in this case. And we see that many bygrams occur just a single time. This one allegedly occluder three times. So A was an ending character three times. And that's true for all of these words. All of Emma, Olivia and Eva and with A. So that's why this occurred three times. Now let's do it for all the words. Oops, I should not have printed it. I'm going to erase that. Let's kill this. Let's just run and now B will have the statistics of the entire data set. So these are the counts across all the words of the individual bygrams. And we could, for example, look at some of the most common ones and least common ones. This kind of grows in Python, but the way to do this, the simplest way I like is we just use B dot items. B dot items returns the tuples of key value. In this case, the keys are the character bygrams and the values are the counts. And so then what we want to do is we want to do sort it of this. But by default, sort is on the first on the first item of a tuple. 
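Putting the last few steps together, here is a sketch of the bigram iteration and dictionary counting described above, assuming the words list from before; the <S> and <E> token strings follow the bracket convention used in the walkthrough.

print(min(len(w) for w in words))          # shortest name: 2 characters
print(max(len(w) for w in words))          # longest name: 15 characters

b = {}
for w in words:
    chs = ['<S>'] + list(w) + ['<E>']      # hallucinate special start and end tokens
    for ch1, ch2 in zip(chs, chs[1:]):     # slide over consecutive character pairs
        bigram = (ch1, ch2)
        b[bigram] = b.get(bigram, 0) + 1   # default to 0 if unseen, then count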
But we want to sort by the values, which are the second element of a tuple that is the key value. So we want to use the key equals lambda that takes the key value and returns the key value at one, not at zero, but at one, which is the count. So we want to sort by the count of these elements. And actually, we wanted to go backwards. So here what we have is the bygram q and r occurs only a single time, d z occurred only a single time. And when we sort this the other way around, we're going to see the most likely bygrams. So we see that n was very often an ending character many, many times. And apparently n always always follows an a and that's a very likely combination as well. So this is kind of the individual counts that we achieve over the entire data set. Now it's actually going to be significantly more convenient for us to keep this information in a two dimensional array instead of a high-thond dictionary. So we're going to store this information in a 2D array and the rows are going to be the first character of the bygram and the columns are going to be the second character. And each entry in the two dimensional array will tell us how often that first character follows the second character in the data set. So in particular, the array representation that we're going to use or the library is that of PyTorch. And PyTorch is a deep learning neural framework, but part of it is also this torch.tensor which allows us to create multi-dimensional arrays and manipulate them very efficiently. So let's import PyTorch, which you can do by import Torch. And then we can create a race. So let's create a array of zeros. And we give it a size of this array. Let's create a 3 by 5 array as an example. And this is a 3 by 5 array of zeros. And by default, you'll notice a dot d type, which is short for data type, is flow 32. So these are single precision floating point numbers. Because we are going to represent counts, let's actually use d type as Torch.t in 32. So these are 32 bit integers. So now you see that we have integer data inside this tensor. Now, tensors allow us to really manipulate all the individual entries and do it very efficiently. So for example, if we want to change this bit, we have to index into the tensor. And in particular, here, this is the first row. And the, because it's zero indexed. So this is row index one and column index zero, one, two, three. So a at one comma three, we can set that to one. And then a will have a one over there. We can of course also do things like this. So now a will be to over there. And also we can, for example, say a zero zero is five. And then a will have a five over here. So that's how we can index into the arrays. Now, of course, the array that we are interested in is much much bigger. So for our purposes, we have 26 letters of the alphabet. And then we have two special characters as and e. So we want 26 plus two or 28 by 28 array. And let's call it the capital N because it's going to represent the sort of the counts. Let me raise this stuff. So that's the array that starts at zeroes, 28 by 28. And now let's copy paste this here. But instead of having an dictionary B, which we're going to erase, we now have an N. Now the problem here is that we have these characters, which are strings, but we have to now basically index into a array. And we have to index using integers. So we need some kind of a lockup table from characters to integers. So let's construct such a character array. 
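A sketch of the sorting and of the move to a 2D torch tensor of counts, assuming the dictionary b from above; the small 3 by 5 array mirrors the indexing demo being described.

import torch

# most common bigrams first: sort descending by the count, the second element of each item
print(sorted(b.items(), key=lambda kv: -kv[1])[:5])

a = torch.zeros((3, 5), dtype=torch.int32)   # 3 by 5 array of 32-bit integer zeros
a[1, 3] = 1                                  # row index 1, column index 3
a[0, 0] = 5

# counts kept in a 2D tensor instead of a dict: 26 letters plus <S> and <E>
N = torch.zeros((28, 28), dtype=torch.int32)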
And the way we're going to do this is we're going to take all the words, which is a list of strings. We're going to concatenate all of it into a massive string. So this is just simply the entire data set as a single string. We're going to pass this to the set constructor, which takes this massive string and throws out duplicates because sets do not allow duplicates. So set of this will just be the set of all the lowercase characters. And there should be a total of 26 of them. And now we actually don't want a set. We want a list. But we don't want a list sorted in some weird arbitrary way. We wanted to be sorted from a to Z. So sorted list. So those are our characters. Now we want is this lookup table, as I mentioned. So let's create a special S to I. I will call it. S is string or character. And this will be an S to I mapping for I S in a numerate of these characters. So enumerate basically gives us this iterator over the integer index and the actual element of the list. And then we are mapping the character to the integer. So S to I is a mapping from a to zero, B to one, etc. All the way from Z to 25. And that's going to be useful here. But we actually also have to specifically set that S will be 26. And S to I at E will be 27. Right? Because Z was 25. So those are the lookups. And now we can come here and we can map both character one and character to to their integers. So this will be S to I at character one. And I X to will be S to I of character two. And now we should be able to do this line, but using our array. So in it, I X one, I X to this is the two dimensional array indexing. I showed you before. And honestly, just plus equals one because everything starts at zero. So this should work and give us a large 28 by 20 array of all these counts. So if we print in this is the array, but of course it looks ugly. So let's erase this ugly mess. And let's try to visualize it a bit more nicer. So for that, we're going to use a library called matplotlib. So matplotlib allows us to create figures. So we can do things like PILT I'm show off the counter ray. So this is the 20 by 28 array. And this is structure, but even this, I would say is still pretty ugly. So we're going to try to create a much nicer visualization of it. And I wrote a bunch of code for that. The first thing we're going to need is we're going to need to invert this array here, this dictionary. So S to I is mapping from S to I. And in I to S, we're going to reverse this dictionary. So it rate of all the items and just reverse that array. So I to S maps inversely from zero to a want to be, etc. So we'll need that. And then here's the code that I came up with to try to make this a little bit nicer. We create a figure we plot and then we do and then we visualize a bunch of things here. Let me just run it so you get a sense of what it is. So you see here that we have the array spaced out and every one of these is basically like B follows G zero times B follows H 41 times. So a follows J 175 times. And so what you can see that I'm doing here is first I show that entire array. And then I iterate over all the individual little cells here. And I create a character string here, which is the inverse mapping I to S of the integer I and the integer J. So that's the diagrams in a character representation. And then I plot just the diagram text and then I plot the number of times that this diagram occurs. Now the reason that there's a dot item here is because when you index into these arrays, these are torch tensors. 
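A sketch of the lookup table, the count-array fill, and the nicer matplotlib visualization being described, including the .item() call on indexed entries; it assumes words and the 28 by 28 tensor N from above, and the plotting details are approximate.

import matplotlib.pyplot as plt

chars = sorted(list(set(''.join(words))))      # the 26 lowercase letters, a..z
stoi = {s: i for i, s in enumerate(chars)}     # 'a'->0, 'b'->1, ..., 'z'->25
stoi['<S>'] = 26
stoi['<E>'] = 27
itos = {i: s for s, i in stoi.items()}         # inverse mapping, integer -> character

for w in words:
    chs = ['<S>'] + list(w) + ['<E>']
    for ch1, ch2 in zip(chs, chs[1:]):
        N[stoi[ch1], stoi[ch2]] += 1           # count this bigram in the 2D array

plt.figure(figsize=(16, 16))
plt.imshow(N, cmap='Blues')
for i in range(28):
    for j in range(28):
        chstr = itos[i] + itos[j]                                       # bigram as text
        plt.text(j, i, chstr, ha='center', va='bottom', color='gray')
        plt.text(j, i, N[i, j].item(), ha='center', va='top', color='gray')  # its count
plt.axis('off')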
You see that we still get a tensor back. So the type of this thing you think it would be just an integer long 49 but it's actually a torched up tensor. And so if you do dot item, then it will pop out that individual integer. So it will just be 149. So that's what's happening there. And these are just some options to make it look nice. So what is this structure of this array? We have all these counts and we see that some of them occur often and some of them do not occur often. Now if you scrutinize this carefully, you will notice that we're not actually being very clever. That's because when you come over here, you'll notice that for example, we have an entire row of completely zeroes. And that's because the end character is never possibly going to be the first character of a diagram because we're always placing these end tokens all at the end of the diagram. Similarly, we have entire column zeros here because the S character will never possibly be the second element of a diagram because we always start with S and we end with E and we only have the words in between. So we have an entire column of zeros and entire row of zeros. And in this little two by two matrix here as well, the only one that can possibly happen is if S directly follows E. That can be non-zero if we have a word that has no letters. So in that case, there's no letters in a word. It's an empty word and we just have S follows E. But the other ones are just not possible. And so we're basically wasting space and not only that, but the S and the E are getting very crowded here. I was using these brackets because there's convention and natural language processing to use these kinds of brackets to denote special tokens. But we're going to use something else. So let's fix all this and make it prettier. We're not actually going to have two special tokens. We're only going to have one special token. So we're going to have n by n array of 27 by set 27 instead. Instead of having two, we will just have one and I will call it a dot. Okay. Let me swing this over here. Now, one more thing that I would like to do is I would actually like to make this special character half position zero. And I would like to offset all the other letters off. I find that a little bit more pleasing. So we need a plus one here so that the first character, which is A, will start at one. So S to I will now be a starts at one and dot is zero. And I to us, of course, we're not changing this because I to us just creates reverse mapping and this will work fine. So one is a to us B zero is dot. So we reverse that here. We have a dot and a dot. This should work fine. Make sure I started zeros count. And then here we don't go up to 28. We go up to 27. And this should just work. Okay. So we see that dot dot never happened. It's at zero because we don't have empty words. Then this row here now is just very simply the counts for all the first letters. So G J starts a word H starts word I starts a word, etc. And then these are all the ending characters. And in between we have the structure of what characters follow each other. So this is the counts array of our entire data set. So this array actually has all the information necessary for us to actually sample from this by gram character level language model. And roughly speaking we're going to do is we're just going to start following these probabilities and these counts. And we're going to start sampling from the model. So in the beginning, of course, we start with the dot, the start token dot. 
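A sketch of the reworked setup with a single '.' token at index 0 and a 27 by 27 count array, assuming words is in scope:

import torch

chars = sorted(list(set(''.join(words))))
stoi = {s: i + 1 for i, s in enumerate(chars)}   # offset the letters so that...
stoi['.'] = 0                                     # ...the single '.' token sits at index 0
itos = {i: s for s, i in stoi.items()}

N = torch.zeros((27, 27), dtype=torch.int32)
for w in words:
    chs = ['.'] + list(w) + ['.']                 # one special token at both ends
    for ch1, ch2 in zip(chs, chs[1:]):
        N[stoi[ch1], stoi[ch2]] += 1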
So to sample the first character of a name, we're looking at this row here. So we see that we have the counts and those counts externally are telling us how often any one of these characters is to start a word. So if we take this n and we grab the first row, we can do that by using just indexing at zero and then using this notation colon for the rest of that row. So n zero colon is indexing into the zero row and then grabbing all the columns. And so this will give us a one dimensional array of the first row. So zero for four ten, you know, zero for four ten, one three oh six one five four two, etc. It's just the first row. The shape of this is 27, just the row of 27. And the other way that you can do this also is you just you don't actually give this you just grab the zero row like this. This is equivalent. Now these are the counts and now what we'd like to do is we'd like to basically sample from this. Since these are the raw counts, we actually have to convert this to probabilities. So we create a probability vector. So we'll take n of zero and we'll actually convert this to float first. Okay, so these integers are converted to float, float, equine numbers. And the reason we're creating floats is because we're about to normalize these counts. So to create a probability distribution here, we want to divide, we basically want to the peak, p divide p dot sum. And now we get a vector of smaller numbers and these are now probabilities. So of course, because we divided by the sum, the sum of p now is one. So this is a nice proper probability distribution. It sums to one and this is giving us the probability for any single character to be the first character of a word. So now we can try to sample from this distribution to sample from these distributions. We're going to use torshtut multinomial, which I've pulled up here. So torshtut multinomial returns samples from the multinomial probability distribution, which is a complicated way of saying, you give me probabilities and I will give you integers, which are sampled according to the probability distribution. So this is the signature of the method. And to make everything deterministic, we're going to use a generator object in PyTorch. So this makes everything deterministic. So when you run this on your computer, you're going to get the exact same results that I'm getting here on my computer. So let me show you how this works. Here's the deterministic way of creating a torch generator object, seeding it with some number that we can agree on. So that seeds a generator, gets us an object g. And then we can pass that g to a function that creates here random numbers, torshtut random, creates random numbers, three of them. And it's using this generator object to, as a source of randomness. So without normalizing it, I can just print. This is sort of like numbers between zero and one that are random according to this thing. And whenever I run it again, I'm always going to get the same result because I keep using the same generator object, which I'm seeding here. And then if I divide to normalize, I'm going to get a nice probability distribution of just three elements. And then we can use torshtut multinomial to draw samples from it. So this is what that looks like. So torshtut multinomial will take the torshtensor of probability distributions. Then we can ask for a number of samples like C20. Replacement equals true means that when we draw an element, we can draw it, and then we can put it back into the list of eligible indices to draw again. 
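A sketch of the row normalization and the deterministic torch.multinomial experiment described here, assuming the 27 by 27 count tensor N from above; the seed value is just an example.

import torch

p = N[0].float()            # counts for the first character of a name (row 0)
p = p / p.sum()             # normalize into a probability distribution (sums to 1)

g = torch.Generator().manual_seed(2147483647)        # deterministic source of randomness
p3 = torch.rand(3, generator=g)
p3 = p3 / p3.sum()                                    # a small 3-element distribution
print(torch.multinomial(p3, num_samples=20, replacement=True, generator=g))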
And we have to specify replacement as true because by default, for some reason, it's false. So I think it's just something to be careful with. And the generator is passed in here. So we are going to always get deterministic results, the same results. So if I run these two, we're going to get a bunch of samples from this distribution. Now you'll notice here that the probability for the first element in this tensor is 60%. So in these 20 samples, we'd expect 60% of them to be zero. We'd expect 30% of them to be one. And because the element index 2 has only 10% probability, very few of these samples should be two. And indeed, we only have a small number of twos. And we can sample as many as we would like. And the more we sample, the more these numbers should, roughly, have the distribution here. So we should have lots of zeros, half as many ones. And we should have three times s few, sorry, s few ones, and three times s few twos. So you see that we have very few twos. We have some ones and most of them are zero. So that's what torsion multilimals doing. For us here, we are interested in this row. We've created this p here. And now we can sample from it. So if we use the same seed, and then we sample from this distribution, let's just get one sample. Then we see that the sample is, say, 13. So this will be the index. And let's, you see how it's a tensor that wraps 13. We again have to use dot item to pop out that integer. And now index would be just number 13. And of course, we can map the I2S of IX to figure out exactly which character we're sampling here. We're sampling M. So we're saying that the first character is in our generation. And just look at the row here. M was drawn, and we can see that M actually starts a large number of words. M started 2,500 words out of 32,000 words. So almost a bit less than 10% of the words start with M. So this was actually fairly likely character to draw. So that would be the first character of our word. And now we can continue to sample more characters. Because now we know that M started. Now M is already sampled. So now to draw the next character, we will come back here. And we will work for the row that starts with M. So you see M and we have a row here. So we see that M dot is 516, M A is this many, M B is this many, etc. So these are the counts for the next row. And that's the next character that we are going to now generate. So I think we are ready to actually just write out a loop. So we are going to start to get a sense of how this is going to go. We always begin at index 0, because that's the start token. And then while true, we are going to grab the row corresponding to index that we are currently on. So that's n array at ix. Converted to float is rp. Then we normalize this p to sum to 1. I accidentally ran the infinite loop. We normalize p to sum to 1. Then we need this generator object. And we are going to initialize up here. And we are going to draw a single sample from this distribution. And then this is going to tell us what index is going to be next. If the index sampled is 0, then that's now the end token. So we will break. Otherwise, we are going to print s2i of ix. i2s of ix. And that's pretty much it. We're just... this should work. Okay, more. So that's the name that we've sampled. We started with m. The next step was o, then r, and then dot. And this dot, we printed here as well. So let's not do this a few times. So let's actually create an out list here. And instead of printing, we're going to append. 
So out that append this character. And then here, let's just print it at the end. So let's just join up all the outs. And we're just going to print more. Now we're always getting the same result because of the generator. So who want to do this a few times? We can go for i and range 10. We can sample 10 names. And we can just do that 10 times. And these are the names that we're getting out. 10, 20. I'll be honest with you, this doesn't look right. So I've started a few minutes to convince myself that it actually is right. The reason these samples are so terrible is that by-gram language model is actually just like really terrible. We can generate a few more here. And you can see that they're kind of like their name, like a little bit, like yanu, irailly, etc. But they're just like totally messed up. And I mean the reason that this is so bad, like we're generating h as a name. But you have to think through it from the model's eyes. It doesn't know that this h is the very first h. All it knows is that h was previously. And now how likely is h the last character? Well, it's somewhat likely. And so it just makes it last character. It doesn't know that there were other things before it or there were not other things before it. And so that's why I'm generating all these like nonsense names. And the other way to do this is to convince yourself that it is actually doing something reasonable, even though it's so terrible, is these little piece here are 27, right? Like 27. So how about if we did something like this? Instead of having any structure whatsoever. How about if p was just a torch dot ones of 27? By default, this is a float 32. So this is fine. Divide 27. So what I'm doing here is this is the uniform distribution, which will make everything equally likely. And we can sample from that. So let's see if that doesn't need better. Okay. So it's this is what you have from a model that is completely untrained where everything is equally likely. So it's obviously garbage. And then if we have a trained model, which is trained on just by grams, this is what we get. So you can see that it is more name like it is actually working. It's just by gram is so terrible and we have to do better. Now next I would like to fix an inefficiency that we have going on here. Because what we're doing here is we're always fetching a row of n from the counts matrix up ahead. And we're always doing the same things. We're converting to float and we're dividing. And we're doing this every single iteration of the slope. And we just keep normalizing these rows over and over again. And it's extremely inefficient and wasteful. So what I'd like to do is I'd like to actually prepare a matrix capital p that will just have the probabilities in it. So in other words, it's going to be the same as the capital n matrix here of counts. But every single row will have the row of probabilities that is normalized to one, indicating the probability distribution for the next character given the character before it. As defined by which row we're in. So basically what we'd like to do is we'd like to just do it up front here. And then we would like to just use that row here. So here we would like to just do p equals p of ix instead. Okay. The other reason I want to do this is not just for efficiency, but also I would like us to practice these and dimensional tensors. And I'd like us to practice their manipulation. And especially something that's called broadcasting that we'll go into in a second. 
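A sketch of the sampling loop with the out list, assuming N, itos, and a generator g from the sketches above; the commented-out line swaps in the uniform, untrained baseline mentioned here.

import torch

g = torch.Generator().manual_seed(2147483647)
for i in range(10):                           # sample 10 names
    out = []
    ix = 0                                    # always begin at the '.' start token
    while True:
        p = N[ix].float()
        p = p / p.sum()                       # probabilities for the next character
        # p = torch.ones(27) / 27.0           # uniform baseline: a completely untrained model
        ix = torch.multinomial(p, num_samples=1, replacement=True, generator=g).item()
        out.append(itos[ix])
        if ix == 0:                           # sampled the end token
            break
    print(''.join(out))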
We're actually going to have to become very good at these tensor manipulations because we're going to build out all the way to transformers. We're going to be doing some pretty complicated array operations for efficiency. And we need to really understand that and be very good at it. So in doing what we want to do is we first want to grab the floating point copy of n. And I'm mimicking the line here basically. And then we want to divide all the rows so that they sum to one. So we'd like to do something like this. p divide p dot sum. But now we have to be careful because p dot sum actually produces a sum. Sorry. P equals n dot float copy. p dot sum produces a sums up all of the counts of this entire matrix n. And gives us a single number of just the summation of everything. So that's not the way we want to divide. We want to simultaneously and in parallel divide all the rows by their respective sums. So what we have to do now is we have to go into documentation for tors.sum. And we can scroll down here to a definition that is relevant to us, which is where we don't only provide an input array that we want to sum. But we also provide the dimension along which we want to sum. And in particular, we want to sum up over rows. Now one more argument that I want you to pay attention to here is the keep them as false. If keep them is true and the output tensor is of the same size as input, except of course the dimension along which you summed, which will become just one. But if you pass in, keep them as false, then this dimension is squeezed out. And so tors.sum not only does the sum and collapses dimension to be of size one, but in addition, it does what's called a squeeze where it squeezes out, it squeezes out that dimension. So basically what we want here is we instead want to do p dot sum of sum axis. And in particular, notice that p dot shape is 27 by 27. So when we sum up across axis zero, then we would be taking the zero dimension and we would be summing across it. So when keep them as true, then this thing will not only give us the counts across along the columns, but notice that basically the shape of this is one by 27. We just get a row vector and the reason we get a row vector here again is because we pass in zero dimension. So this zero dimension becomes one and we've done a sum and we get a row. And so basically we've done the sum this way vertically and arrived at just a single one by 27 vector of counts. What happens when you take out keep them is that we just get 27. So it squeezes out that dimension and we just get one dimensional vector of size 27. Now we don't actually want one by 27 row vector because that gives us the counts or the sums across the columns. We actually want to sum the other way along dimension one and you'll see that the shape of this is 27 by 1. So it's a column vector. It's a 27 by 1 vector of counts. And that's because what's happened here is that we're going horizontally and this 27 by 27 matrix becomes a 27 by 1 array. Now you'll notice by the way that the actual numbers of these counts are identical. And that's because this special array of counts here comes from by grams to the sticks and actually it just so happens by chance. Or because of the way this array is constructed that the sums along the columns or along the rows horizontally or vertically is identical. But actually what we want to do in this case is we want to sum across the rows horizontally. So what we want here is be that some of one with keep them true 27 by 1 column vector. 
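A short sketch of the shapes involved, assuming the count tensor N from above:

P = N.float()
print(P.shape)                          # torch.Size([27, 27])
print(P.sum(0, keepdim=True).shape)     # torch.Size([1, 27]): column sums, kept as a row vector
print(P.sum(0).shape)                   # torch.Size([27]):    keepdim=False squeezes that dim out
print(P.sum(1, keepdim=True).shape)     # torch.Size([27, 1]): row sums, kept as a column vector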
And now what we want to do is we want to divide by that. Now we have to be careful here again. Is it possible to take what's a p. shape you see here is 27 by 27. Is it possible to take a 27 by 27 array and divide it by what is a 27 by 1 array? Is that an operation that you can do? And whether or not you can perform this operation is determined by what's called broadcasting rules. So if you just search broadcasting semantics in torch, you'll notice that there's a special definition for what's called broadcasting that for whether or not these two arrays can be combined in a binary operation like division. So the first condition is each tensor has at least one dimension, which is the case for us. And then when iterating over the dimension sizes starting at the trailing dimension, the dimension sizes must either be equal, one of them is one or one of them does not exist. So let's do that. We need to align the two arrays and their shapes, which is very easy because both of these shapes have two elements. So they're aligned. Then we iterate over from the right and going to the left. Each dimension must be either equal, one of them is a one or one of them does not exist. So in this case, they're not equal, but one of them is a one. So this is fine. And then this dimension, they're both equal. So this is fine. So all the dimensions are fine. And therefore the this operation is broadcastable. So that means that this operation is allowed. And what is it that these arrays do when you divide 27 by 27 by 27 by one? What it does is that it takes this dimension one and it stretches it out. It copies it to match 27 here. In this case. So in our case, it takes this column vector, which is 27 by one. And it copies it 27 times to make these both be 27 by 27 internally. You can think of it that way. And so it copies those counts. And then it does an element wise division, which is what we want. Because these counts we want to divide by them on every single one of these columns in this matrix. So this actually we expect will normalize every single row. And we can check that this is true by taking the first row, for example, and taking it some. We expect this to be one because it's not normalized. And then we expect this now, because if we actually correctly normalize all the rows, we expect to get the exact same result here. So let's run this. It's the exact same result. So this is correct. So now I would like to scare you a little bit. You actually have to like basically encourage you very strongly to read through broadcasting semantics. And I encourage you to treat this with respect. And it's not something to play fast and smooth. It's something to really respect, really understand and look up maybe some tutorials for broadcasting and practice it and be careful with it because you can very quickly run into bugs. Let me show you what I mean. You see how here we have p.1. Keep them this true. The shape of this is 27 by 1. Let me take out this line just so we have the n and then we can see the counts. We can see that this is all the counts across all the rows. And it's 27 by 1 column vector. Now suppose that I tried to do the following, but I erased keep them this true here. What does that do? If keep them is not true, it's false. Then remember according to documentation, it gets rid of this dimension 1. It squeezes it out. So basically we just get all the same counts, the same result except the shape of it is not 27 by 1. It is just 27 by 1 disappears. But all the counts are the same. 
So you'd think that this divide that would work. First of all, can we even write this? Is it even expected to run? Is it broadcastable? Let's determine if this result is broadcastable. p.1 is shape. It's 27. This is 27 by 27. So 27 by 27 broadcasting into 27. So now suppose broadcasting number 1 align all the dimensions on the right done. Now iteration over all the dimensions started from the right going to the left. All the dimensions must either be equal. One of them must be 1 or one then does not exist. So here they are or equal. Here the dimension does not exist. So internally what broadcasting will do is it will create a 1 here and then we see that one of them is a 1 and this will get copied and this will run. This will broadcast. Okay, so you'd expect this to work because we are this broadcast and this we can divide this. Now if I run this, you'd expect it to work but it doesn't. Now you actually get garbage. You got a wrong result because this is actually a bug. This keeps them equal to true. Makes it work. This is a bug. In both cases we are doing the correct counts. We are summing up across the rows but keep them saving us and making it work. So in this case, I'd like you to encourage you to potentially like pause this video at this point and try to think about why this is buggy and why the keep them was necessary here. Okay, so the reason to do for this is I'm trying to hint it here when I was giving you a bit of a hint on how this works. This 27 vector internally inside the broadcasting, this becomes a 1 by 27 and 1 by 27 is a row vector. Right? And now we are dividing 27 by 27 by 1 by 27 and torch will replicate this dimension. So basically it will take this row vector and it will copy it vertically now 27 times so the 27 by 27 lies exactly and element wise divides. And so basically what's happening here is we're actually normalizing the columns instead of normalizing the rows. So you can check what's happening here is that p at 0 which is the first row of p dot sum is not 1 it's 7. It is the first column as an example that sums to 1. So to summarize where does the issue come from? The issue comes from the silent adding of the dimension here because in broadcasting rules you align on the right and go from right to left and if the dimension doesn't exist you create it. So that's where the problem happens. We still did the counts correctly. We did the counts across the rows and we got the counts on the right here as a column vector. But because the key things was true this dimension was discarded and now we just have a vector 27. And because of broadcasting the way it works this vector of 27 suddenly becomes a row vector. And then this row vector gets replicated vertically and that every single point we are dividing by the count in the opposite direction. So this thing just doesn't work. This needs to be keep them as true in this case. So then then we have that p at 0 is normalized. And conversely the first column you'd expect to potentially not be normalized. And this is what makes it work. So pretty subtle and hopefully this helps to scare you that you should have respect for broadcasting. Be careful. Check your work and understand how it works under the hood and make sure that it's broadcasting in the direction that you like. Otherwise you're going to introduce very subtle bugs, very hard to find bugs. And just be careful. One more note on efficiency. We don't want to be doing this here because this creates a completely new tensor that we store into p. 
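To make the keepdim pitfall concrete, here is a small sketch comparing the correct and the buggy division, assuming the count tensor N from above; the comment about columns relies on the row and column sums of this particular count matrix happening to be equal, as noted earlier.

P_good = N.float()
P_good = P_good / P_good.sum(1, keepdim=True)   # (27,27) / (27,1): rows normalized, correct
P_bad = N.float()
P_bad = P_bad / P_bad.sum(1)                    # (27,27) / (27,): broadcast as a ROW vector

print(P_good[0].sum())      # tensor(1.): each row is a proper distribution
print(P_bad[0].sum())       # not 1: the rows are no longer normalized
print(P_bad[:, 0].sum())    # roughly 1 for this dataset: the columns got normalized instead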
We prefer to use in place operations if possible. So this would be an in place operation has the potential to be faster. It doesn't create new memory under the hood. And then let's erase this. We don't need it. And let's also just do fewer. Just so I'm not wasting space. Okay. So we're actually in the pre-good spot now. We trained a bi-gram language model and we trained it really just by counting how frequently any pairing occurs and then normalizing so that we get a nice property distribution. So really these elements of this array p are really the parameters of our bi-gram language model given us and summarizing the statistics of these diagrams. So we trained a model and then we know how to sample from a model. We just iteratively sampled the next character and feed it in each time and get a next character. Now what I'd like to do is I'd like to somehow evaluate the quality of this model. We'd like to somehow summarize the quality of this model into a single number. How good is it at predicting the training set? And as an example, so in the training set, we can evaluate now the training loss and this training loss is telling us about sort of the quality of this model in a single number just like we saw in micrograd. So let's try to think through the quality of the model and how we would evaluate it. Basically what we're going to do is we're going to copy paste this code that we previously used for counting. Okay. And let me just print these bi-grams first. We're going to use f strings and I'm going to print character one followed by character two. These are the bi-grams. And then I didn't want to do it for all the words. Let's just do first three words. So here we have Emma, Olivia and Eva bi-grams. Now what we'd like to do is we'd like to basically look at the probability that the model assigns to every one of these bi-grams. So in other words, we can look at the probability, which is summarized in the matrix B of Ix1, Ix2. And then we can print it here as probability. And because these probabilities are way too large, let me percent or call on.4F to like truncated a bit. So what do we have here? We're looking at the probabilities that the model assigns to every one of these bi-grams in the dataset. And so we can see some of them are 4% 3% etc. Just to have a measuring stick in our mind, by the way, we have 27 possible characters or tokens. And if everything was equally likely, then you'd expect all these probabilities to be 4% roughly. So anything above 4% means that we've learned something useful from these bi-grams statistics. And you see that roughly some of these are 4%, but some of them are as high as 40%, 35%. And so on. So you see that the model actually assigned a pretty high probability to whatever's in the training set. And so that's a good thing. Basically, if you have a very good model, you'd expect that these probabilities should be near one, because that means that your model is correctly predicting what's going to come next, especially in the training set where you trained your model. So now we'd like to think about how can we summarize these probabilities into a single number that measures the quality of this model. Now when you look at the literature into maximum likelihood estimation and statistical modeling and so on, you'll see that what's typically used here is something called the likelihood. And the likelihood is the product of all of these probabilities. And so the product of all of these probabilities is the likelihood. 
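A sketch of the in-place normalization and the per-bigram probability printout described above, assuming N, words, and stoi from earlier:

P = N.float()
P /= P.sum(1, keepdim=True)        # in-place division: no new tensor is allocated

for w in words[:3]:
    chs = ['.'] + list(w) + ['.']
    for ch1, ch2 in zip(chs, chs[1:]):
        ix1 = stoi[ch1]
        ix2 = stoi[ch2]
        prob = P[ix1, ix2]
        print(f'{ch1}{ch2}: {prob:.4f}')   # probability the model assigns to this bigram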
And it's really telling us that the probability of the entire dataset assigned by the model that we've trained. And that is a measure of quality. So the product of these should be as high as possible when you are training the model and when you have a good model, your product of these probabilities should be very high. Now because the product of these probabilities and is an unwieldy thing to work with, you can see that all of them are between 0 and 1. So your product of these probabilities will be a very tiny number. So for convenience, what people work with usually is not the likelihood, but they work with what's called the log likelihood. So the product of these is the likelihood to get the log likelihood. We just have to take the log of the probability. And so the log of the probability here, I have the log of x from 0 to 1. The log is a, you see here monotonic transformation of the probability. Where if you pass in 1, you get 0. So probability 1 gets your log probability of 0. And then as you go lower and lower probability, the log will grow more and more negative until all the way to negative infinity at 0. So here we have a log prob, which is religious, the torshtod log of probability. Let's print it out to get a sense of what that looks like. Log prob also point for f. So as you can see, when we plug in numbers that are very close, some of our higher numbers, we get closer and closer to 0. And then if we plug in very bad probabilities, we get more and more negative number. That's bad. And the reason we work with this is for large extent convenience, right, because we have mathematically that if you have some product, a times b times c, of all these probabilities, right, the likelihood is the product of all these probabilities. Then the log of these is just log of a plus log of b plus log of c. If you remember your logs from your high school or undergrad and so on. So we have that basically the likelihood of the product probabilities, the what likelihood is just the sum of the logs of the individual probabilities. So log likelihood starts at 0 and then log likelihood here, we can just accumulate simply. And then the end we can print this print the log likelihood, f strings, maybe you're familiar with this. So what likelihood is negative 38. Now we actually want. So how high can log likelihood get? It can go to zero. So when all the probabilities are one log likelihood to zero. And then when all the probabilities are lower, this will grow more and more negative. Now we don't actually like this because what we'd like is a loss function and a loss function has the semantics that low is good because we're trying to minimize the loss. So we actually need to invert this and that's what gives us something called the negative log likelihood. Negative log likelihood is just negative of the log likelihood. These are f strings, by the way, if you'd like to look this up, negative log likelihood equals. So the negative log likelihood now is just negative of it. And so the negative log likelihood is a very nice loss function because the lowest it can get is zero. And the higher it is, the worse off the predictions are that you're making. And then one more modification to this that sometimes people do is that for convenience, they actually like to normalize by they like to make it an average instead of a sum. And so here, let's just keep some counts as well. So n plus equals one starts at zero. And then here we can have sort of like a normalized log likelihood. 
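A sketch of accumulating the negative log likelihood as described, assuming P, words, and stoi from above:

import torch

log_likelihood = 0.0
n = 0
for w in words:
    chs = ['.'] + list(w) + ['.']
    for ch1, ch2 in zip(chs, chs[1:]):
        ix1, ix2 = stoi[ch1], stoi[ch2]
        prob = P[ix1, ix2]
        logprob = torch.log(prob)
        log_likelihood += logprob      # log(a*b*c) = log(a) + log(b) + log(c)
        n += 1

nll = -log_likelihood                  # negative log likelihood: lower is better, 0 is perfect
print(f'{nll/n}')                      # average NLL; roughly 2.45 on the full set per the lecture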
If we just normalize it by the count, then we get the average log likelihood. So this is usually what we would use as our loss function. So our loss on the training set, as assigned by the model, is 2.4 — that's the quality of this model. The lower it is, the better off we are, and the higher it is, the worse off we are. And the job of our training is to find the parameters that minimize the negative log likelihood loss; that would be a high-quality model. Okay, so to summarize — I actually wrote it out here — our goal is to maximize the likelihood, which is the product of all the probabilities assigned by the model, and we want to maximize this likelihood with respect to the model parameters. In our case, the model parameters are the entries of this table: these probabilities are the parameters of our bigram language model so far. But keep in mind that here we are storing the probabilities explicitly in a table. What's coming up, as a brief preview, is that these numbers will not be kept explicitly — they will be calculated by a neural network. So that's coming up, and we'll want to change and tune the parameters of that neural network to maximize the likelihood, the product of the probabilities. Now, maximizing the likelihood is equivalent to maximizing the log likelihood, because log is a monotonic function — here's the graph of log — so it's just a monotonic transformation, and the two optimization problems here are equivalent. And maximizing the log likelihood is equivalent to minimizing the negative log likelihood, and in practice people actually minimize the average negative log likelihood, to get numbers like 2.4. That summarizes the quality of your model; we'd like to minimize it and make it as small as possible, the lowest it can get being zero, and the lower it is, the better your model is, because it's assigning high probabilities to your data. Now let's evaluate this over the entire training set, just to make sure we get something around 2.4. Let's run this over the entire set — oops, let's take out the print statement as well — okay, 2.45 for the entire training set. Now what I'd like to show you is that you can actually evaluate this for any word you want. For example, if we just test a single word, "andrej", and bring back the print statement, you see that "andrej" is actually kind of an unlikely word: on average we pay about 3 in negative log probability per character, and roughly that's because "ej" apparently is quite uncommon, as an example. Now think through this: I'm going to take "andrej", append a "q", and test the probability of "andrejq". We actually get infinity, and that's because "jq" has a 0% probability according to our model, so the log of 0 is negative infinity, and we get infinite loss. This is kind of undesirable, right? Because we plugged in a string that could be a somewhat reasonable name, but basically what this is saying is that the model thinks this name is exactly 0% likely, and our loss on this example is infinity. And really, the reason for that is that in the counts, j is followed by q zero times — the entry for jq is 0.
And so jq is 0% likely. So this is actually kind of gross, and people don't like it too much. To fix this, there's a very simple trick that people like to apply to smooth out your model a little bit, and it's called model smoothing. Roughly what happens is that we add some fake counts: imagine adding a count of one to everything. So we add a count of one like this, and then we recalculate the probabilities — that's model smoothing. And you can add as much as you like: you can add five, and that will give you a smoother model. The more you add here, the more uniform the model you're going to have, and the less you add, the more peaked the model will be. One is a pretty decent count to add, and it ensures there will be no zeros in our probability matrix P. This will of course change the generations a little bit — in this case it didn't, but in principle it could. But what it does now is that nothing will be infinitely unlikely. So now our model predicts some other probability, and we see that jq now has a very small probability — the model still finds it very surprising that this was ever a word, or a bigram — but we don't get negative infinity. So it's a nice fix that people like to apply sometimes, and it's called model smoothing. Okay, so we've now trained a respectable bigram character-level language model. We saw that we trained the model by looking at the counts of all the bigrams and normalizing the rows to get probability distributions. We saw that we can then use the parameters of this model to perform sampling of new words: we sampled new names according to those distributions. And we also saw that we can evaluate the quality of this model, and that the quality is summarized in a single number, the negative log likelihood; the lower this number is, the better the model, because it is giving high probabilities to the actual next characters in all the bigrams in our training set. So that's all well and good, but we've arrived at this model explicitly by doing something that felt sensible: we were just performing counts and then normalizing those counts. Now what I would like to do is take an alternative approach. We will end up in a very, very similar position, but the approach will look very different, because I would like to cast the problem of bigram character-level language modeling into the neural network framework. In the neural network framework we're going to approach things slightly differently, but again end up in a very similar spot — I'll go into that later. Now, our neural network is still going to be a bigram character-level language model: it receives a single character as input, then there's a neural network with some weights, some parameters W, and it outputs the probability distribution over the next character in the sequence. It makes guesses as to what is likely to follow the character that was input to the model. And then in addition to that, we're going to be able to evaluate any setting of the parameters of the neural net, because we have a loss function: the negative log likelihood. We're going to look at its probability distributions, and we're going to use the labels, which are basically just the identity of the next character in the bigram — the second character.
So knowing what second character actually comes next in the bigram allows us to look at how high a probability the model assigns to that character, and we of course want that probability to be very high — which is another way of saying that the loss is low. So we're going to use gradient-based optimization to tune the parameters of this network: we have the loss function, and we're going to minimize it, tuning the weights so that the neural net correctly predicts the probabilities for the next character. So let's get started. The first thing I want to do is compile the training set of this neural network — create the training set of all the bigrams. Here I'm going to copy-paste this code, because it iterates over all the bigrams: we start with the words, we iterate over all the bigrams, and previously, as you recall, we did the counts, but now we're not going to do counts — we're just creating a training set. This training set will be made up of two lists: the inputs and the targets, the labels. And these bigrams will denote x, y — those are the characters, right? We're given the first character of the bigram, and we're trying to predict the next one. Both of these are going to be integers. So here we'll do xs.append(ix1) and ys.append(ix2). And then here we actually don't want lists of integers; we'll create tensors out of these, so xs is torch.tensor of xs, and ys is torch.tensor of ys. And we don't want to take all the words just yet, because I want everything to be manageable, so let's just do the first word, which is "emma", and then it's clear what these xs and ys would be. Here, let me print character 1 and character 2, just so you see what's going on. So the bigrams of this word are .e, em, mm, ma, a. — so this single word, as I mentioned, has one, two, three, four, five examples for our neural network. There are five separate examples in "emma", and those examples are shown here: when the input to the neural network is integer 0, the desired label is integer 5, which corresponds to e. When the input to the neural network is 5, we want its weights to be arranged so that 13 gets a very high probability. When 13 is put in, we also want 13 to have a high probability. When 13 is put in again, we want 1 to have a high probability. And when 1 is input, we want 0 to have a very high probability. So there are five separate input examples to the neural net in this dataset. I wanted to add a tangent, a note of caution, to be careful with a lot of the APIs of some of these frameworks. You saw me silently use torch.tensor with a lowercase t, and the output looked right, but you should be aware that there are actually two ways of constructing a tensor: there's torch.tensor with a lowercase t, and there's also the torch.Tensor class with a capital T, which you can also construct. So you can actually call both — you can also do torch.Tensor and get your xs and ys that way as well — so that's not confusing at all. There are threads on what the difference between these two is, and unfortunately the docs are just not clear on the difference. When you look at the docs of the lowercase tensor, it says "constructs a tensor with no autograd history by copying data" — it just doesn't really clear it up.
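Setting that API tangent aside for a moment, here is a minimal sketch of the training-set construction just described, assuming the `stoi` character-to-index mapping from earlier (the comments also note the lowercase-`tensor` choice being discussed):

```python
import torch

xs, ys = [], []
for w in words[:1]:                      # just the first word, "emma", for now
    chs = ['.'] + list(w) + ['.']
    for ch1, ch2 in zip(chs, chs[1:]):
        xs.append(stoi[ch1])             # input: index of the current character
        ys.append(stoi[ch2])             # label: index of the character that follows

xs = torch.tensor(xs)                    # lowercase torch.tensor keeps these as integers
ys = torch.tensor(ys)                    # (torch.Tensor, capital T, would give float32 instead)
```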
So the actual difference, as far as I can tell, is explained eventually in this random thread that you can Google, and really it comes down to, I believe, the fact that torch.tensor infers the dtype, the data type, automatically, while torch.Tensor just returns a float tensor. I would recommend sticking to torch.tensor with a lowercase t. So indeed, we see that when I construct this with a capital T, the dtype of x is float32, but with torch.tensor with a lowercase t, you see how x.dtype is now integer. So it's advised that you use the lowercase t, and you can read more about it if you like in some of these threads. But basically I'm pointing out some of these things because I want to caution you, and I want you to get used to reading a lot of documentation and reading through a lot of Q&As and threads like this. Some of this stuff is unfortunately not easy and not very well documented, and you have to be careful out there. What we want here is integers, because that's what makes sense, and so lowercase tensor is what we are using. Okay, now we want to think through how we're going to feed these examples into a neural network. It's not quite as straightforward as plugging them in, because these examples right now are integers — there's a 0, 5, or 13 — which gives us the index of the character, and you can't just plug an integer index into a neural net. These neural nets are made up of neurons, and these neurons have weights, and as you saw in micrograd, these weights act multiplicatively on the inputs: w·x + b, then a tanh, and so on. So it doesn't really make sense to make an input neuron take on integer values that you feed in and then multiply with weights. Instead, a common way of encoding integers is what's called one-hot encoding. In one-hot encoding, we take an integer like 13 and we create a vector that is all zeros, except for the 13th dimension, which we turn into a 1, and then that vector can feed into a neural net. Now, conveniently, PyTorch actually has a one_hot function inside torch.nn.functional. It takes a tensor made up of integers — long is an integer type — and it also takes num_classes, which is how large you want your vector to be. So here, let's import torch.nn.functional as F — this is a common way of importing it — and then let's do F.one_hot, and we feed in the integers that we want to encode. We can actually feed in the entire tensor of xs, and we can tell it that num_classes is 27, so it doesn't have to try to guess it — it might have guessed that it's only 13 and given us an incorrect result. So this is the one-hot; let's call this xenc, for x encoded. And then we see that xenc.shape is 5 by 27, and we can also visualize it with plt.imshow(xenc) to make it a little more clear, because this printout is a little messy. So we see that we've encoded all five examples into vectors: we have five examples, so we have five rows, and each row here is now an example into a neural net. And we see that the appropriate bit is turned on as a 1 and everything else is zero. So here, for example, the zeroth bit is turned on, here the fifth bit is turned on, the 13th bits are turned on for both of these examples, and then the first bit here is turned on. So that's how we can encode integers into vectors, and then these vectors can feed into neural nets. One more issue to be careful with here, by the way: let's look at the data type of this encoding.
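A short sketch of that one-hot encoding step (matplotlib is only needed for the visualization):

```python
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt

xenc = F.one_hot(xs, num_classes=27)   # shape (5, 27): one row per example
print(xenc.shape)                      # torch.Size([5, 27])
plt.imshow(xenc)                       # each row has exactly one bit turned on
```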
We always want to be careful with data types. What would you expect xenc's dtype to be? When we're plugging numbers into neural nets, we don't want them to be integers; we want them to be floating-point numbers that can take on various values. But the dtype here is actually a 64-bit integer, and the reason for that, I suspect, is that one_hot received a 64-bit integer and returned the same data type. And when you look at the signature of one_hot, it doesn't even take a desired dtype for the output tensor. In a lot of functions in PyTorch you'd be able to pass something like dtype=torch.float32, which is what we want, but one_hot does not support that. So instead we're going to cast this to float, like this, so that everything looks the same but the dtype is float32 — and floats can feed into neural nets. So now let's construct our first neuron. This neuron will look at these input vectors, and as you remember from micrograd, a neuron basically performs a very simple function, w·x + b, where w·x is a dot product. So we can achieve the same thing here. Let's first define the weights of this neuron, the initial weights at initialization, and let's initialize them with torch.randn. torch.randn fills a tensor with random numbers drawn from a normal distribution, and a normal distribution has a probability density function like this: most of the numbers drawn from it will be around zero, some of them will be as high as almost three or so, and very few will be above three in magnitude. It takes a size as input, and I'm going to use a size of 27 by 1. So, 27 by 1, and then let's visualize W: W is a column vector of 27 numbers, and these weights are then multiplied by the inputs. To perform this multiplication, we can take xenc and multiply it with W using @, the matrix multiplication operator in PyTorch. And the output of this operation is 5 by 1. The reason it's 5 by 1 is the following: we took xenc, which is 5 by 27, and multiplied it by a 27 by 1, and in matrix multiplication the output becomes 5 by 1, because the 27s multiply and add away. So what we're seeing out of this operation is the five activations of this one neuron on these five inputs, all evaluated in parallel. We didn't feed in just a single input to the single neuron; we fed all five inputs simultaneously into the same neuron, and in parallel PyTorch evaluated w·x + b — well, here it's just w·x, there's no bias — for all of them independently. Now, instead of a single neuron, though, I would like to have 27 neurons, and I'll show you in a second why I want 27 of them. So instead of having just a 1 here, which indicates the presence of one single neuron, we can use 27, and then, when W is 27 by 27, this will in parallel evaluate all 27 neurons on all five inputs, giving us a much, much bigger result. So now what we've done is a (5, 27) multiplied by a (27, 27), and the output of this is now (5, 27) — we can see that the shape of this is 5 by 27. So what is every element here telling us? It's telling us, for every one of the 27 neurons that we created, what the firing rate of that neuron is on every one of the five examples.
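A minimal sketch of this first linear layer, including the element-by-element check that comes next:

```python
import torch

W = torch.randn((27, 27))                # 27 inputs -> 27 neurons, standard normal initialization
out = xenc.float() @ W                   # (5, 27) @ (27, 27) -> (5, 27)

# One element of the result is just a plain dot product:
print(out[3, 13])                                # firing rate of neuron 13 on example 3
print((xenc[3].float() * W[:, 13]).sum())        # same number, computed by hand
```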
So the element at, for example, (3, 13) is giving us the firing rate of the 13th neuron looking at the third input, and the way this was achieved is by a dot product between the third input row and the 13th column of this W matrix. So using a matrix multiplication, we can very efficiently evaluate the dot products between lots of input examples in a batch and lots of neurons, where all of those neurons have their weights in the columns of W, and the matrix multiplication just does those dot products in parallel. Just to show you that this is the case: we can take xenc and take its third row, take W and take its 13th column, and then do xenc[3] element-wise multiplied with W[:, 13] and sum that up — that's w·x (well, there's no +b; it's just the dot product) — and that's this same number. So you see that this is just being done efficiently by the matrix multiplication operation for all the input examples and for all the output neurons of this first layer. Okay, so we fed our 27-dimensional inputs into a first layer of a neural net that has 27 neurons: we have 27 inputs and now we have 27 neurons. These neurons perform W times x; they don't have a bias and they don't have a nonlinearity like tanh — we're going to leave this as a linear layer. In addition to that, we're not going to have any other layers. This is going to be it: the dumbest, smallest, simplest neural net, which is just a single linear layer. And now I'd like to explain what I want those 27 outputs to be. Intuitively, what we're trying to produce here, for every single input example, is some kind of a probability distribution over the next character in the sequence, and there are 27 of them. But we have to come up with precise semantics for exactly how we're going to interpret the 27 numbers that come out of these neurons. Intuitively, you see here that these numbers are negative and some of them are positive, etc., and that's because they're coming out of a neural net layer initialized with these normal-distribution parameters. But what we want is something like we had here: each row here told us the counts, and then we normalized the counts to get probabilities, and we want something similar to come out of the neural net. What we have right now, though, is just some negative and positive numbers. Now, we want those numbers to somehow represent the probabilities for the next character. But you see that probabilities have a special structure: they're positive numbers and they sum to 1, and that doesn't just come out of a neural net. They also can't be counts, because counts are positive and counts are integers, so counts are also not really a good thing to output from a neural net. So instead, what the neural net is going to output, and how we're going to interpret the 27 numbers, is that these 27 numbers are giving us log counts, basically. So instead of giving us counts directly, like in this table, they're giving us log counts, and to get the counts we take the log counts and exponentiate them. Now, exponentiation takes the following form: it takes numbers that are negative or positive — the entire real line — and if you plug in negative numbers, you get e to the x, which is always below 1, so you're getting numbers less than 1.
And if you plug in numbers greater than 0, you get numbers greater than 1, growing all the way to infinity, while down here it decays toward 0. So basically we're going to take these numbers, and instead of them being positive and negative all over the place, we're going to interpret them as log counts, and then we're going to element-wise exponentiate them. Exponentiating them gives us something like this, and you see that, because they went through an exponent, all the negative numbers turned into numbers below 1, like 0.338, and all the originally positive numbers turned into even more positive numbers, greater than 1 — for example, this 7 over here comes from some positive number. So the exponentiated outputs basically give us something we can use and interpret as the equivalent of the counts we had originally. You see these counts here — 1, 12, 7, 51, 1, etc. — the neural net is now, in a sense, predicting counts, and these counts are positive numbers, they can never be below 0, so that makes sense, and they can take on various values depending on the setting of W. So let me break this down. We're going to interpret these as the log counts — in other words, the word that's often used for this is logits. These are the logits, the log counts; and these will be the counts: the logits exponentiated. And this is equivalent to the N matrix, the N array that we used previously — remember, this was N, the array of counts, and each row here gave the counts for the next character, sort of. So those are the counts, and now the probabilities are just the counts normalized. I'm not going to scroll all over the place — we've already done this: we take the counts, sum along the first dimension with keepdim set to True — we've gone over this — and this is how we normalize the rows of our counts matrix to get our probabilities. So now these are the probabilities, and these up here are the counts, and when I show the probabilities, you see that every row, of course, sums to 1, because they're normalized, and the shape of this is 5 by 27. So really what we've achieved is that for every one of our five examples we now have a row that came out of the neural net, and because of the transformations here, we've made sure that the outputs of this neural net are numbers we can interpret as probabilities. So our W·x gave us logits, we interpret those as log counts, we exponentiate to get something that looks like counts, and then we normalize those counts to get a probability distribution — and all of these are differentiable operations. So what we've done is: we take inputs, we have differentiable operations that we can backpropagate through, and we get out probability distributions. For example, the zeroth example that we fed in was a one-hot vector for index zero, which corresponded to feeding in this example here: we're feeding a dot into the neural net. And the way we fed the dot into the neural net is that we first got its index, then we one-hot encoded it, then it went into the neural net, and out came this distribution of probabilities, whose shape is 27 — there are 27 numbers — and we're going to interpret this as the neural net's assignment for how likely every one of the 27 characters is to come next.
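To recap that transformation in code — a sketch, reusing `xenc` and `W` from above (this exp-then-normalize pair is exactly the softmax mentioned next):

```python
logits = xenc.float() @ W                     # log counts, shape (5, 27)
counts = logits.exp()                         # equivalent of the N count array; always positive
probs = counts / counts.sum(1, keepdim=True)  # normalize each row; rows now sum to 1
print(probs.shape)                            # torch.Size([5, 27])
```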
And as we tune the weights W, we're of course going to get different probabilities out for any character that you input. So now the question is just: can we optimize and find a good W such that the probabilities coming out are pretty good? And the way we measure "pretty good" is by the loss function. Okay, so I organized everything into a single summary so that hopefully it's a bit more clear. It starts here with the input dataset: we have some inputs to the neural net, and we have the labels for the correct next character in the sequence — these are integers. Here I'm using torch generators now, so that you see the same numbers that I see, and I'm generating the weights of 27 neurons, where each neuron receives 27 inputs. Then here we plug all the input examples, xs, into the neural net — this is the forward pass. First we encode all of the inputs into one-hot representations: we have 27 classes, we pass in these integers, and xenc becomes an array that is 5 by 27, zeros except for a few ones. We then multiply this by the first layer of the neural net to get logits, exponentiate the logits to get these fake counts, sort of, and normalize the counts to get probabilities. These last two lines, by the way, are called the softmax, which I pulled up here. Softmax is a very often used layer in a neural net that takes these z's, which are logits, exponentiates them, and divides to normalize. It's a way of taking the outputs of a neural net layer — and these outputs can be positive or negative — and producing probability distributions out of them: something that always sums to 1 and consists of positive numbers, just like probabilities. So it's a kind of normalization function, if you want to think of it that way, and you can put it on top of any other linear layer inside a neural net, and it basically makes the neural net output probabilities. That's very often used, and we used it as well here. So this is the forward pass, and that's how we made a neural net output probabilities. Now, you'll notice that this entire forward pass is made up of differentiable layers. Everything here we can backpropagate through — and we saw some of the backpropagation in micrograd. This is just multiplication and addition; all that's happening here is multiply and add, and we know how to backpropagate through those. Exponentiation — we know how to backpropagate through it. Then here we're summing, and the sum is easily backpropagated as well, and so is the division. So everything here is a differentiable operation, and we can backpropagate through it. Now, we have these probabilities, which are 5 by 27: for every single example, we have a vector of probabilities that sums to 1. And then here I wrote a bunch of stuff to break down the examples. We have the five examples making up Emma, right? There are five bigrams inside "emma". Bigram example 1 is that e is the character that comes right after dot, and the indices for these are 0 and 5. So we feed 0 in as the input to the neural net, we get 27 probabilities out of the neural net, and then the label is 5, because e actually comes after dot — that's the label. And then we use this label, 5, to index into the probability distribution here: index 5 is zero, one, two, three, four, five — it's this number here.
So that's basically the probability assigned by the neural net to the actual correct character. You see that the network currently thinks that this next character, e following dot, is only 1% likely, which is of course not very good, right? Because this actually is a training example, and the network thinks it's currently very, very unlikely. But that's just because we didn't get very lucky in generating a good setting of W. So right now the network thinks it's unlikely, and 0.01 is not a good outcome. So the log likelihood is then very negative, and the negative log likelihood is very positive — around 4 is a very high negative log likelihood — and that means we're going to have a high loss, because what is the loss? The loss is just the average negative log likelihood. The second character is m, and you see here that the network also thought that m following e is very unlikely, 1%. For m following m it thought it was 2%, and for a following m it actually thought it was 7% likely — so just by chance, this one actually has a pretty decent probability and therefore a pretty low negative log likelihood. And finally here it thought this one was 1% likely. So overall, our average negative log likelihood, which is the loss — the total loss that summarizes how well this network currently works, at least on this one word, not on the full dataset, just this one word — is 3.76, which is a fairly high loss. This is not a very good setting of W. Now here's what we can do. We're currently getting 3.76; we can actually come here and change our W — we can resample it. So let me just add one to the seed to get a different seed, and then we get a different W, and we can rerun this. And with this different seed, this different setting of W, we now get 3.37. So this is a much better W, and it's better because the probabilities just happen to come out higher for the characters that actually come next. And so you could imagine just resampling this — we can try 2... okay, this was not very good; let's try one more, we can try 3... okay, this was a terrible setting, because we get a very high loss. So anyway, I'm going to erase this. What I was doing here, which is just guess-and-check — randomly assigning parameters and seeing if the network is good — that is amateur hour. That's not how you optimize a neural net. The way you optimize a neural net is you start with some random guess, and we're going to commit to this one, even though it's not very good. But it's no big deal, because we have a loss function, and this loss is made up only of differentiable operations, so we can minimize the loss by tuning W — by computing the gradients of the loss with respect to the W matrix — and then we can tune W to minimize the loss and find a good setting of W using gradient-based optimization. So let's see how that works. Now, things are actually going to look almost identical to what we had with micrograd. Here I pulled up the lecture notebook from micrograd — it's from this repository — and when I scroll all the way to the end, where we left off with micrograd, we had something very, very similar: we had a number of input examples, in that case four input examples inside xs, and we had their targets. Just like here we have our xs now, but we have five of them, and they're integers instead of vectors.
But we're going to convert our integers to vectors, except our vectors will be 27-dimensional instead of 3-dimensional. And then here, what we did is: first we did a forward pass, where we ran the neural net on all the inputs to get predictions. Our neural net at the time, this n(x), was a multilayer perceptron; our neural net here is going to look different, because it's just a single linear layer followed by a softmax — so that's our neural net. And the loss there was the mean squared error: we simply subtracted the prediction from the ground truth, squared it, and summed it all up, and that was the loss — the single number that summarized the quality of the neural net. When the loss is low, like almost zero, that means the neural net is predicting correctly. So we had a single number that summarized the performance of the neural net, and everything there was differentiable and stored in a massive compute graph. Then we iterated over all the parameters, we made sure the gradients were set to zero, and we called loss.backward; loss.backward initiated backpropagation at the final output node of the loss. Right — remember these expressions: we had the loss all the way at the end, we started backpropagation and went all the way back, and we made sure that we populated all of the parameters' .grad. The grads started at zero, but backpropagation filled them in. And then in the update, we iterated over the parameters and simply did a parameter update, where every single element of our parameters was nudged in the opposite direction of the gradient. And we're going to do the exact same thing here. So I'm going to pull this up on the side, so that we have it available, and we're actually going to do the exact same thing. So this was the forward pass, where we did this, and probs is our y_pred, the predictions. Now we have to evaluate the loss, but we're not using the mean squared error — we're using the negative log likelihood, because we are doing classification, not regression, as it's called. So here we want to calculate the loss. The way we calculate it is just this average negative log likelihood. Now, this probs here has a shape of 5 by 27, and so basically we want to pluck out the probabilities at the correct indices. In particular, because the labels are stored in the array ys, what we're after is: for the first example, at row 0, the probability at index 5; for the second example, at row index 1, the probability assigned to index 13; at the third row, we also want 13; at the fourth row, we want 1; and at the last row, which is row 4, we want 0. So these are the probabilities we're interested in, and you can see that they're not amazing, as we saw above. These are the probabilities we want, but we'd like a more efficient way to access them than just listing them out in a tuple like this. It turns out that the way to do this in PyTorch, one of the ways at least, is to index with all of these row indices and with ys. The row indices, you see, are just 0, 1, 2, 3, 4, and we can create those using torch.arange(5). So we can index with torch.arange(5) in the first dimension, and here we index with ys, and you see that that gives us exactly these numbers.
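In code, that indexing looks roughly like this (a sketch, reusing `probs` and `ys` from above):

```python
import torch

rows = torch.arange(5)        # 0, 1, 2, 3, 4: one row index per example
picked = probs[rows, ys]      # probs[0, ys[0]], probs[1, ys[1]], ..., probs[4, ys[4]]
print(picked)                 # the probabilities the net assigns to the correct next characters
```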
So that plucks out the probabilities that the neural network assigns to the correct next character. Now we take those probabilities, and we actually look at the log probability — so we call .log() — and then we just average that up, taking the mean of all of it, and then the negative of that average log likelihood is the loss. So the loss here is 3.7-something, and you see that this loss, 3.76, is exactly what we obtained before — but this is a vectorized form of that expression. So we get the same loss, and this loss we can consider part of the forward pass; we've now arrived at the loss. Okay, so we made our way all the way to the loss: we defined the forward pass, we forwarded the network and got the loss. Now we're ready to do the backward pass. For the backward pass, we first want to make sure that all the gradients are reset, so that they're at zero. Now, in PyTorch you can set the gradients to be zero, but you can also just set them to None, and setting them to None is more efficient; PyTorch will interpret None as a lack of a gradient, which is the same as zeros. So this is a way to zero out the gradient. And now we do loss.backward. Before we do loss.backward, we need one more thing: if you remember from micrograd, PyTorch actually requires that we pass in requires_grad=True, so that we tell PyTorch that we're interested in calculating gradients for this leaf tensor — by default this is False. So let me recalculate with that, then set the grad to None, and call loss.backward. Now something magical happened when loss.backward was run, because PyTorch, just like micrograd, keeps track of all the operations under the hood when we do the forward pass. It builds a full computational graph — just like the graphs we produced in micrograd, those graphs exist inside PyTorch — and so it knows all the dependencies and all the mathematical operations of everything. And when you then have the loss, you can call .backward() on it, and backward fills in the gradients of all the intermediate values, all the way back to W, which holds the parameters of our neural net. So now we can look at W.grad, and we see that it has structure — there's stuff inside it. W.shape is 27 by 27, and W.grad's shape is the same, 27 by 27, and every element of W.grad is telling us the influence of that weight on the loss function. So, for example, this number all the way here, the (0, 0) element of W: because the gradient is positive, it's telling us that this weight has a positive influence on the loss — slightly nudging W[0, 0], adding a small h to it, would mildly increase the loss, because this gradient is positive. Some of these gradients are also negative. So that's the gradient information, and we can use it to update the weights of this neural network. So now let's do the update. It's going to be very similar to what we had in micrograd: we need no loop over all the parameters, because we only have one parameter tensor, and that is W. So we simply do W.data += — we can actually copy this almost exactly — negative 0.1 times W.grad, and that's the update to the tensor. And because the tensor is updated, we would expect that the loss should now decrease. So here, if I print the loss — it was 3.76, right? So we've updated W here.
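Putting one full iteration together — a sketch, under the assumption that `W` was created with `requires_grad=True` so that `loss.backward()` can fill in `W.grad`:

```python
# W = torch.randn((27, 27), requires_grad=True)   # hypothetical re-initialization

# forward pass
logits = xenc.float() @ W
counts = logits.exp()
probs = counts / counts.sum(1, keepdim=True)
loss = -probs[torch.arange(5), ys].log().mean()

# backward pass
W.grad = None        # more efficient than zeroing; PyTorch treats None as "no gradient yet"
loss.backward()      # fills in W.grad via the graph built during the forward pass

# update
W.data += -0.1 * W.grad   # nudge each weight against its gradient
```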
So if I recalculate the forward pass, the loss should now be slightly lower: 3.76 goes to 3.74. And then we can again set the grad to None, do the backward pass, and update, and now the parameters have changed again. So if we recalculate the forward pass, we expect a lower loss again, 3.72 — and this is us now doing gradient descent. And when we achieve a low loss, that will mean the network is assigning high probabilities to the correct next characters. Okay, so I rearranged everything and put it all together from scratch. Here is where we construct our dataset of bigrams — you see that we are still iterating only over the first word, Emma; I'm going to change that in a second. I added a number that counts the number of elements in xs, so that we explicitly see that the number of examples is 5, because currently we're just working with Emma and there are five bigrams there. And here I added a loop of exactly what we had before: 10 iterations of gradient descent — forward pass, backward pass, and an update. Running these two cells, the initialization and the gradient descent, gives us some improvement on the loss function. But now I want to use all the words — and now there are not five but 228,000 bigrams. However, this should require no modification whatsoever: everything should just run, because none of the code we wrote cares whether there are 5 bigrams or 228,000 bigrams; it should all just work. So you see that this will just run, but now we are optimizing over the entire training set of all the bigrams, and you see that the loss is now decreasing only very slightly. So we can probably afford a larger learning rate — probably an even larger learning rate; even 50 seems to work on this very, very simple example. So let me re-initialize, and let's run 100 iterations and see what happens. Okay, we seem to be coming up to some pretty good losses here, 2.47; let me run 100 more. What is the number that we expect for this loss, by the way? We expect to get something around what we had originally, actually. All the way back, if you remember, in the beginning of this video, when we optimized just by counting, our loss was roughly 2.47 after we added smoothing — and before smoothing, we had roughly 2.45 likelihood... sorry, loss. So that's roughly the vicinity of what we expect to achieve; before, we achieved it by counting, and here we are achieving roughly the same result, but with gradient-based optimization. So we come to about 2.46, 2.45, etc. And that makes sense, because fundamentally we're not taking in any additional information: we're still just taking the previous character and trying to predict the next one, but instead of doing it explicitly by counting and normalizing, we are doing it with gradient-based learning. It just so happens that the explicit approach optimizes the loss function very well without any need for gradient-based optimization, because the setup for bigram language models is so straightforward and so simple that we can just estimate those probabilities directly and maintain them in a table. But the gradient-based approach is significantly more flexible. So we've actually gained a lot, because what we can do now is expand this approach and complexify the neural net. Currently we're just taking a single character and feeding it into a neural net, and the neural net is extremely simple — but we're about to iterate on this substantially.
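For reference, a minimal sketch of the rearranged training loop just described, assuming `xs` and `ys` have been rebuilt from all the words (so roughly 228,000 bigrams); the seed is an assumption, used only for reproducibility:

```python
import torch
import torch.nn.functional as F

g = torch.Generator().manual_seed(2147483647)
W = torch.randn((27, 27), generator=g, requires_grad=True)
num = xs.nelement()                                   # number of bigram examples

for k in range(100):
    # forward pass
    xenc = F.one_hot(xs, num_classes=27).float()
    logits = xenc @ W
    counts = logits.exp()
    probs = counts / counts.sum(1, keepdim=True)
    loss = -probs[torch.arange(num), ys].log().mean()
    print(loss.item())

    # backward pass
    W.grad = None
    loss.backward()

    # update (a learning rate of 50 works for this very simple model)
    W.data += -50 * W.grad
```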
We're going to be taking multiple previous characters, and we're going to be feeding them into increasingly more complex neural nets. But fundamentally, the output of the neural net will always just be logits, and those logits will go through the exact same transformation: we take them through a softmax, calculate the loss function — the negative log likelihood — and do gradient-based optimization. And so actually, as we complexify the neural nets and work all the way up to transformers, none of this will fundamentally change. The only thing that will change is the way we do the forward pass, where we take some previous characters and calculate the logits for the next character in the sequence — that part will become more complex, but we'll use the same machinery to optimize it. And it's not obvious how we would have extended the bigram counting approach to the case where there are many more characters at the input, because eventually those tables would get way too large — there are way too many combinations of what the previous characters could be. If you only have one previous character, we can just keep everything in a table of counts, but if you have the last 10 characters as input, we can't actually keep everything in a table anymore. So this is fundamentally an unscalable approach, and the neural network approach is significantly more scalable, and it's something that we can actually improve on over time. So that's where we will be digging next. I wanted to point out two more things. Number one: I want you to notice that this xenc here is made up of one-hot vectors, and those one-hot vectors are then multiplied by this W matrix. We think of this as multiple neurons being forwarded in a fully connected manner, but actually what's happening here is that, for example, if you have a one-hot vector with a 1 at, say, the fifth dimension, then because of the way matrix multiplication works, multiplying that one-hot vector with W ends up plucking out the fifth row of W: the logits become just the fifth row of W. So that's what actually ends up happening. But that's exactly what happened before: remember, all the way up here, we had a bigram, we took the first character, and that first character indexed into a row of this array here, and that row gave us the probability distribution for the next character. The first character was used as a lookup into a matrix to get the probability distribution. Well, that's exactly what's happening here: we're taking the index, encoding it as one-hot, and multiplying it by W, so the logits literally become the appropriate row of W, and that, just as before, gets exponentiated to create the counts and then normalized to become probabilities. So this W here is essentially the same as this array here — but remember, W holds the log counts, not the counts, so it's more precise to say that W exponentiated, W.exp(), is this array. This array, though, was filled in by counting — by basically populating the counts of the bigrams — whereas in the gradient-based framework we initialize it randomly and then let the loss guide us to arrive at the exact same array. So this array here is basically the array W at the end of optimization, except we arrive at it piece by piece, by following the loss — and that's why we also obtain the same loss function at the end.
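A tiny sketch of that "one-hot times W is just a row lookup" observation (ix = 5 is a hypothetical example index; W is the weight matrix from above):

```python
import torch
import torch.nn.functional as F

ix = 5
onehot = F.one_hot(torch.tensor(ix), num_classes=27).float()
row_via_matmul = onehot @ W                             # (27,) @ (27, 27) -> (27,)
row_via_index  = W[ix]                                  # direct row lookup

print(torch.allclose(row_via_matmul, row_via_index))    # True: the matmul just plucks out row ix
```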
The second note is: if I come here, remember the smoothing, where we added fake counts to our counts in order to smooth out and make the distributions of these probabilities more uniform, and that prevented us from assigning zero probability to any one bigram. Now, if I increase the added count here, what happens to the probabilities? As I increase the count, the probabilities become more and more uniform, right? Because these counts only go up to 900 or so, so if I'm adding plus a million to every single number here, you can see how each row, when we divide, is just going to become closer and closer to an exactly uniform distribution. It turns out that the gradient-based framework has an equivalent to this smoothing. In particular, think about these W's, which we initialize randomly. We could also think about initializing W to be all zeros. If all the entries of W are zero, then you'll see that the logits all become zero, the exp of those logits becomes all ones, and then the probabilities turn out to be exactly uniform. So basically, when the entries of W are all equal to each other — or especially when they're all zero — the probabilities come out completely uniform. So trying to incentivize W to be near zero is basically equivalent to the count smoothing we did earlier, and the more you incentivize that in the loss function, the smoother the distribution you're going to achieve. This brings us to something called regularization, where we can augment the loss function to have a small component that we call a regularization loss. In particular, what we're going to do is take W and, for example, square all of its entries — and then, oops, sorry about that — take all the entries of W and sum them up. And because we're squaring, there are no signs anymore: negatives and positives all get squashed into positive numbers. The way this works is that you achieve zero loss if W is exactly zero, but if W has non-zero entries, you accumulate loss. So we can actually take this and add it on here: we can do something like loss plus (W**2).sum() — or actually, instead of sum, let's take the mean, because otherwise the sum gets too large and the mean is a bit more manageable — and then we have a regularization loss here, say 0.01 times that, or something like that; you can choose the regularization strength. And then we can just optimize this, and now this optimization actually has two components: not only is it trying to make all the probabilities work out, but in addition there's a component that simultaneously tries to make all the W's zero, because if the W's are non-zero, you accumulate loss, and the only way to drive that extra term down is for W to be zero. You can think of it as adding a spring force, or a gravity force, that pushes W toward zero: W wants to be zero, and the probabilities want to be uniform, but they also simultaneously want to match up with the probabilities indicated by the data. And so the strength of this regularization is exactly controlling the number of fake counts you add over there: adding a lot more counts there corresponds to increasing this number here, because the more you increase it, the more this part of the loss function dominates that part, and the less these weights are able to grow — because as they grow, they accumulate way too much loss.
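As a sketch, the augmented loss inside the training loop above would look something like this (reusing `probs`, `ys`, `num`, and `W` from that loop, with 0.01 as an example regularization strength):

```python
# Data term (average negative log likelihood) plus a regularization term that
# pushes the entries of W toward zero, analogous to adding fake counts.
loss = -probs[torch.arange(num), ys].log().mean() + 0.01 * (W**2).mean()
```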
And so if this term is strong enough, we're not able to overcome its force, and basically everything will become uniform predictions. So I thought that was kind of cool. Okay, and lastly, before we wrap up, I wanted to show you how you would sample from this neural net model. I copy-pasted the sampling code from before, where, remember, we sampled five times, and all we did was start at zero, grab the current row ix of P — that was our probability row — sample the next index from it, accumulate it, and break when we hit zero. Running this gave us these results. I still have P in memory, so this is fine. Now, the probability row doesn't come from a row of P anymore; instead it comes from this neural net. First we take ix and encode it into a one-hot row, xenc. This xenc multiplies with W, which really just plucks out the row of W corresponding to ix — really, that's what's happening. That gives us the logits; we then exponentiate those logits to get counts, normalize to get the distribution, and then we can sample from that distribution. So if I run this — kind of anticlimactic, or climactic, depending on how you look at it — we get the exact same results. And that's because this is the identical model: not only does it achieve the same loss, but, as I mentioned, these are identical models; we just came to this answer in a very different way, and it has a very different interpretation. But fundamentally this is the same model, and it gives the same samples here. And so, that's kind of cool. Okay, so we've actually covered a lot of ground. We introduced the bigram character-level language model. We saw how we can train the model, how we can sample from the model, and how we can evaluate the quality of the model, using the negative log likelihood loss. And then we actually trained the model in two completely different ways that give the same result and the same model. In the first way, we just counted up the frequencies of all the bigrams and normalized. In the second way, we used the negative log likelihood loss as a guide to optimizing the counts matrix, the counts array, so that the loss is minimized in the gradient-based framework. And we saw that both of them give the same result, and that's it. Now, the second one of these, the gradient-based framework, is much more flexible. Right now our neural network is super simple: we're taking a single previous character and passing it through a single linear layer to calculate the logits. This is about to complexify. So in the follow-up videos, we're going to be taking more and more of these characters and feeding them into a neural net, but this neural net will still output the exact same thing: the neural net will output logits, and these logits will still be normalized in the exact same way, and the loss and everything else in the gradient-based framework stays identical. It's just that this neural net will now complexify, all the way to transformers. So that's going to be pretty awesome, and I'm looking forward to it. For now, bye.
[{"start": 0.0, "end": 2.0, "text": " Hi everyone, hope you're well."}, {"start": 2.0, "end": 6.0, "text": " And next up what I'd like to do is I'd like to build out Makemore."}, {"start": 6.0, "end": 11.0, "text": " Like micrograd before it, Makemore is a repository that I have on my GitHub web page."}, {"start": 11.0, "end": 12.0, "text": " You can look at it."}, {"start": 12.0, "end": 16.0, "text": " But just like with micrograd, I'm going to build it out step by step"}, {"start": 16.0, "end": 18.0, "text": " and I'm going to spell everything out."}, {"start": 18.0, "end": 20.0, "text": " So we're going to build it out slowly and together."}, {"start": 20.0, "end": 22.0, "text": " Now, what is Makemore?"}, {"start": 22.0, "end": 27.0, "text": " Makemore, as the name suggests, makes more of things that you give it."}, {"start": 27.0, "end": 29.0, "text": " So here's an example."}, {"start": 29.0, "end": 32.0, "text": " Names.TXT is an example dataset to make more."}, {"start": 32.0, "end": 38.0, "text": " And when you look at names.TXT, you'll find that it's a very large dataset of names."}, {"start": 38.0, "end": 41.0, "text": " So here's lots of different types of names."}, {"start": 41.0, "end": 47.0, "text": " In fact, I believe there are 32,000 names that I've sort of found randomly on the government website."}, {"start": 47.0, "end": 54.0, "text": " And if you trade Makemore on this dataset, it will learn to make more of things like this."}, {"start": 54.0, "end": 60.0, "text": " And in particular, in this case, that will mean more things that sound name-like,"}, {"start": 60.0, "end": 62.0, "text": " but are actually unique names."}, {"start": 62.0, "end": 65.0, "text": " And maybe if you have a baby and you're trying to assign a name,"}, {"start": 65.0, "end": 69.0, "text": " maybe you're looking for a cool new sounding unique name, Makemore might help you."}, {"start": 69.0, "end": 73.0, "text": " So here are some examples of generations from the neural network."}, {"start": 73.0, "end": 76.0, "text": " Once we train it on our dataset."}, {"start": 76.0, "end": 79.0, "text": " So here's some examples of unique names that it will generate."}, {"start": 79.0, "end": 85.0, "text": " Don't tell, I rot, Zendi, and so on."}, {"start": 85.0, "end": 90.0, "text": " And so all these sort of sound name-like, but they're not, of course, names."}, {"start": 90.0, "end": 94.0, "text": " So under the hood, Makemore is a character-level language model."}, {"start": 94.0, "end": 99.0, "text": " So what that means is that it is treating every single line here as an example."}, {"start": 99.0, "end": 105.0, "text": " And within each example, it's treating them all as sequences of individual characters."}, {"start": 105.0, "end": 109.0, "text": " So R-E-E-S-E is this example."}, {"start": 109.0, "end": 111.0, "text": " And that's the sequence of characters."}, {"start": 111.0, "end": 114.0, "text": " And that's the level on which we are building out Makemore."}, {"start": 114.0, "end": 117.0, "text": " And what it means to be a character-level language model then"}, {"start": 117.0, "end": 120.0, "text": " is that it's just sort of modeling those sequences of characters"}, {"start": 120.0, "end": 123.0, "text": " and it knows how to predict the next character in the sequence."}, {"start": 123.0, "end": 128.0, "text": " Now, we're actually going to implement a large number of character-level language models"}, {"start": 128.0, "end": 132.0, "text": " in terms of the neural networks that are 
involved in predicting the next character in a sequence."}, {"start": 132.0, "end": 136.0, "text": " So very simple, bi-gram and bag of word models, multilaylor perceptrons,"}, {"start": 136.0, "end": 140.0, "text": " recurring neural networks, all the way to modern transformers."}, {"start": 140.0, "end": 145.0, "text": " In fact, a transformer that we will build will be basically the equivalent transformer to GPT2"}, {"start": 145.0, "end": 147.0, "text": " if you have heard of GPT."}, {"start": 147.0, "end": 149.0, "text": " So that's kind of a big deal."}, {"start": 149.0, "end": 152.0, "text": " It's a modern network and by the end of the series,"}, {"start": 152.0, "end": 155.0, "text": " you will actually understand how that works on the level of characters."}, {"start": 155.0, "end": 159.0, "text": " Now, to give you a sense of the extensions here,"}, {"start": 159.0, "end": 162.0, "text": " after characters, we will probably spend some time on the word level"}, {"start": 162.0, "end": 167.0, "text": " so that we can generate documents of words, not just little segments of characters,"}, {"start": 167.0, "end": 170.0, "text": " but we can generate entire much larger documents."}, {"start": 170.0, "end": 175.0, "text": " And then we're probably going to go into images and image text networks,"}, {"start": 175.0, "end": 178.0, "text": " such as Dali, stable diffusion, and so on."}, {"start": 178.0, "end": 182.0, "text": " But for now, we have to start here, careful level language modeling."}, {"start": 182.0, "end": 183.0, "text": " Let's go."}, {"start": 183.0, "end": 186.0, "text": " So like before, we are starting with a completely blank GPNodebit page."}, {"start": 186.0, "end": 191.0, "text": " The first thing is I would like to basically load up the dataset, names.txt."}, {"start": 191.0, "end": 195.0, "text": " So we're going to open up names.txt for reading."}, {"start": 195.0, "end": 199.0, "text": " And we're going to read in everything into a massive string."}, {"start": 199.0, "end": 201.0, "text": " And then because it's a massive string,"}, {"start": 201.0, "end": 204.0, "text": " we'd only like the individual words and put them in the list."}, {"start": 204.0, "end": 211.0, "text": " So let's call split lines on that string to get all of our words as a Python list of strings."}, {"start": 211.0, "end": 216.0, "text": " So basically we can look at, for example, the first 10 words."}, {"start": 216.0, "end": 221.0, "text": " And we have that it's a list of Emma, Olivia, Eva, and so on."}, {"start": 221.0, "end": 227.0, "text": " And if we look at the top of the page here, that is indeed what we see."}, {"start": 227.0, "end": 229.0, "text": " So that's good."}, {"start": 229.0, "end": 235.0, "text": " This list actually makes me feel that this is probably sorted by frequency."}, {"start": 235.0, "end": 238.0, "text": " But okay, so these are the words."}, {"start": 238.0, "end": 241.0, "text": " Now we'd like to actually learn a little bit more about this dataset."}, {"start": 241.0, "end": 243.0, "text": " Let's look at the total number of words."}, {"start": 243.0, "end": 246.0, "text": " We expect this to be roughly 32,000."}, {"start": 246.0, "end": 249.0, "text": " And then what is the, for example, shortest word."}, {"start": 249.0, "end": 253.0, "text": " So min of length of each word for W in words."}, {"start": 253.0, "end": 258.0, "text": " So the shortest word will be length 2."}, {"start": 258.0, "end": 261.0, "text": " And max of length W for W in words."}, 
{"start": 261.0, "end": 264.0, "text": " So the longest word will be 15 characters."}, {"start": 264.0, "end": 267.0, "text": " So let's now think through our very first language model."}, {"start": 267.0, "end": 272.0, "text": " As I mentioned, a character level and good model is predicting the next character in a sequence"}, {"start": 272.0, "end": 276.0, "text": " given already some concrete sequence of characters before it."}, {"start": 276.0, "end": 280.0, "text": " Now what we have to realize here is that every single word here, like Isabella,"}, {"start": 280.0, "end": 285.0, "text": " is actually quite a few examples packed in to that single word."}, {"start": 285.0, "end": 289.0, "text": " Because what is an existence of a word like Isabella and the dataset telling us really?"}, {"start": 289.0, "end": 296.0, "text": " It's saying that the character I is a very likely character to come first in a sequence"}, {"start": 296.0, "end": 298.0, "text": " of a name."}, {"start": 298.0, "end": 303.0, "text": " The character S is likely to come after I."}, {"start": 303.0, "end": 307.0, "text": " The character A is likely to come after I.S."}, {"start": 307.0, "end": 310.0, "text": " The character B is very likely to come after I.S.A."}, {"start": 310.0, "end": 314.0, "text": " And someone all the way to A following Isabella."}, {"start": 314.0, "end": 317.0, "text": " And then there's one more example actually packed in here."}, {"start": 317.0, "end": 321.0, "text": " And that is that after there's Isabella,"}, {"start": 321.0, "end": 323.0, "text": " the word is very likely to end."}, {"start": 323.0, "end": 327.0, "text": " So that's one more sort of explicit piece of information that we have here,"}, {"start": 327.0, "end": 329.0, "text": " that we have to be careful with."}, {"start": 329.0, "end": 334.0, "text": " And so there's a lot packed into a single individual word in terms of the statistical structure"}, {"start": 334.0, "end": 337.0, "text": " of what's likely to follow in these character sequences."}, {"start": 337.0, "end": 340.0, "text": " And then of course we don't have just an individual word."}, {"start": 340.0, "end": 342.0, "text": " We actually have 32,000 of these."}, {"start": 342.0, "end": 344.0, "text": " And so there's a lot of structure here to model."}, {"start": 344.0, "end": 349.0, "text": " Now in the beginning, what I'd like to start with is I'd like to start with building a"}, {"start": 349.0, "end": 353.0, "text": " program language model. 
Now in a by-gram language model,"}, {"start": 353.0, "end": 357.0, "text": " we're always working with just two characters at a time."}, {"start": 357.0, "end": 361.0, "text": " So we're only looking at one character that we are given"}, {"start": 361.0, "end": 364.0, "text": " and we're trying to predict the next character in the sequence."}, {"start": 364.0, "end": 369.0, "text": " So what characters are likely to follow are what characters are likely to follow."}, {"start": 369.0, "end": 370.0, "text": " Hey, and so on."}, {"start": 370.0, "end": 373.0, "text": " And we're just modeling that kind of a little local structure."}, {"start": 373.0, "end": 377.0, "text": " And we're forgetting the fact that we may have a lot more information."}, {"start": 377.0, "end": 380.0, "text": " We're always just looking at the previous character to predict the next one."}, {"start": 380.0, "end": 384.0, "text": " So it's a very simple and weak language model, but I think it's a great place to start."}, {"start": 384.0, "end": 388.0, "text": " So now let's begin by looking at these by-grams in our data set and what they look like."}, {"start": 388.0, "end": 391.0, "text": " And these by-grams again are just two characters in a row."}, {"start": 391.0, "end": 393.0, "text": " So for WNWords,"}, {"start": 393.0, "end": 396.0, "text": " HW here is an individual word string."}, {"start": 396.0, "end": 399.0, "text": " We want to iterate for,"}, {"start": 399.0, "end": 403.0, "text": " we want to iterate this word with consecutive characters."}, {"start": 403.0, "end": 406.0, "text": " So two characters at a time, sliding it through the word."}, {"start": 406.0, "end": 410.0, "text": " Now, a interesting nice way, cute way to this in Python, by the way,"}, {"start": 410.0, "end": 412.0, "text": " is doing something like this."}, {"start": 412.0, "end": 419.0, "text": " For character on character two, in zip of W and W at one."}, {"start": 419.0, "end": 421.0, "text": " One call."}, {"start": 421.0, "end": 424.0, "text": " Print, character on character two."}, {"start": 424.0, "end": 426.0, "text": " And let's not do all the words."}, {"start": 426.0, "end": 427.0, "text": " Let's just do the first three words."}, {"start": 427.0, "end": 429.0, "text": " And I'm going to show you in a second how this works."}, {"start": 429.0, "end": 431.0, "text": " But from now, basically as an example,"}, {"start": 431.0, "end": 433.0, "text": " let's just do the very first word alone."}, {"start": 433.0, "end": 436.0, "text": " You see how we have a M up."}, {"start": 436.0, "end": 439.0, "text": " And this will just print EM, M-M, M-A."}, {"start": 439.0, "end": 443.0, "text": " And the reason this works is because W is the string M up."}, {"start": 443.0, "end": 447.0, "text": " W at one column is the string M-A."}, {"start": 447.0, "end": 450.0, "text": " And zip takes two iterators."}, {"start": 450.0, "end": 456.0, "text": " And it pairs them up and then creates an iterator over the tuples of their consecutive entries."}, {"start": 456.0, "end": 459.0, "text": " And if any one of these lists is shorter than the other,"}, {"start": 459.0, "end": 463.0, "text": " then it will just halt and return."}, {"start": 463.0, "end": 469.0, "text": " So basically that's why we return EM, M-M, M-M, M-A."}, {"start": 469.0, "end": 473.0, "text": " But then because this iterator's second one here runs out of elements,"}, {"start": 473.0, "end": 475.0, "text": " zip just ends."}, {"start": 475.0, "end": 477.0, "text": " And that's 
why we only get these tuples."}, {"start": 477.0, "end": 479.0, "text": " So pretty cute."}, {"start": 479.0, "end": 482.0, "text": " So these are the consecutive elements in the first word."}, {"start": 482.0, "end": 487.0, "text": " Now we have to be careful because we actually have more information here than just these three examples."}, {"start": 487.0, "end": 492.0, "text": " As I mentioned, we know that E is very likely to come first."}, {"start": 492.0, "end": 495.0, "text": " And we know that A in this case is coming last."}, {"start": 495.0, "end": 502.0, "text": " So one way to do this is basically we're going to create a special array here, our characters."}, {"start": 502.0, "end": 508.0, "text": " And we're going to hallucinate a special start token here."}, {"start": 508.0, "end": 512.0, "text": " I'm going to call it like special start."}, {"start": 512.0, "end": 514.0, "text": " So this is a list of one element."}, {"start": 514.0, "end": 518.0, "text": " And plus W."}, {"start": 518.0, "end": 521.0, "text": " And then plus a special end character."}, {"start": 521.0, "end": 526.0, "text": " And the reason I'm wrapping a list of W here is because W is a string, M-A,"}, {"start": 526.0, "end": 531.0, "text": " list of W will just have the individual characters in the list."}, {"start": 531.0, "end": 538.0, "text": " And then doing this again now, but not iterating over W's, but over the characters."}, {"start": 538.0, "end": 540.0, "text": " We'll give us something like this."}, {"start": 540.0, "end": 545.0, "text": " So E is likely, so this is a bygram of the start character and E."}, {"start": 545.0, "end": 549.0, "text": " And this is a bygram of the A in the special end character."}, {"start": 549.0, "end": 554.0, "text": " And now we can look at, for example, what this looks like for Olivia or Eva."}, {"start": 554.0, "end": 558.0, "text": " And indeed, we can actually, especially this for the entire dataset."}, {"start": 558.0, "end": 561.0, "text": " But we won't print that. That's going to be too much."}, {"start": 561.0, "end": 565.0, "text": " But these are the individual character bygrams and we can print them."}, {"start": 565.0, "end": 570.0, "text": " Now, in order to learn the statistics about which characters are likely to follow other characters,"}, {"start": 570.0, "end": 574.0, "text": " the simplest way in the bygram language models is to simply do it by counting."}, {"start": 574.0, "end": 582.0, "text": " So we're basically just going to count how often any one of these combinations occurs in the training set in these words."}, {"start": 582.0, "end": 587.0, "text": " So we're going to need some kind of a dictionary that's going to maintain some counts for every one of these bygrams."}, {"start": 587.0, "end": 590.0, "text": " So let's use a dictionary B."}, {"start": 590.0, "end": 596.0, "text": " And this will map these bygrams. 
So bygram is a tuple of character on character two."}, {"start": 596.0, "end": 604.0, "text": " And then B at bygram will be B dot get of bygram, which is basically the same as B at bygram."}, {"start": 604.0, "end": 613.0, "text": " But in the case that bygram is not in the dictionary B, we would like to buy default or term zero plus one."}, {"start": 613.0, "end": 618.0, "text": " So this will basically add up all the bygrams and count how often they occur."}, {"start": 618.0, "end": 627.0, "text": " Let's get rid of printing or rather let's keep the printing and let's just inspect what B is in this case."}, {"start": 627.0, "end": 630.0, "text": " And we see that many bygrams occur just a single time."}, {"start": 630.0, "end": 633.0, "text": " This one allegedly occluder three times."}, {"start": 633.0, "end": 635.0, "text": " So A was an ending character three times."}, {"start": 635.0, "end": 638.0, "text": " And that's true for all of these words."}, {"start": 638.0, "end": 641.0, "text": " All of Emma, Olivia and Eva and with A."}, {"start": 641.0, "end": 646.0, "text": " So that's why this occurred three times."}, {"start": 646.0, "end": 651.0, "text": " Now let's do it for all the words."}, {"start": 651.0, "end": 655.0, "text": " Oops, I should not have printed it."}, {"start": 655.0, "end": 657.0, "text": " I'm going to erase that."}, {"start": 657.0, "end": 659.0, "text": " Let's kill this."}, {"start": 659.0, "end": 664.0, "text": " Let's just run and now B will have the statistics of the entire data set."}, {"start": 664.0, "end": 668.0, "text": " So these are the counts across all the words of the individual bygrams."}, {"start": 668.0, "end": 673.0, "text": " And we could, for example, look at some of the most common ones and least common ones."}, {"start": 673.0, "end": 679.0, "text": " This kind of grows in Python, but the way to do this, the simplest way I like is we just use B dot items."}, {"start": 679.0, "end": 685.0, "text": " B dot items returns the tuples of key value."}, {"start": 685.0, "end": 691.0, "text": " In this case, the keys are the character bygrams and the values are the counts."}, {"start": 691.0, "end": 698.0, "text": " And so then what we want to do is we want to do sort it of this."}, {"start": 698.0, "end": 705.0, "text": " But by default, sort is on the first on the first item of a tuple."}, {"start": 705.0, "end": 710.0, "text": " But we want to sort by the values, which are the second element of a tuple that is the key value."}, {"start": 710.0, "end": 724.0, "text": " So we want to use the key equals lambda that takes the key value and returns the key value at one, not at zero, but at one, which is the count."}, {"start": 724.0, "end": 730.0, "text": " So we want to sort by the count of these elements."}, {"start": 730.0, "end": 733.0, "text": " And actually, we wanted to go backwards."}, {"start": 733.0, "end": 741.0, "text": " So here what we have is the bygram q and r occurs only a single time, d z occurred only a single time."}, {"start": 741.0, "end": 746.0, "text": " And when we sort this the other way around, we're going to see the most likely bygrams."}, {"start": 746.0, "end": 752.0, "text": " So we see that n was very often an ending character many, many times."}, {"start": 752.0, "end": 759.0, "text": " And apparently n always always follows an a and that's a very likely combination as well."}, {"start": 759.0, "end": 765.0, "text": " So this is kind of the individual counts that we achieve over the entire data set."}, {"start": 
765.0, "end": 774.0, "text": " Now it's actually going to be significantly more convenient for us to keep this information in a two dimensional array instead of a high-thond dictionary."}, {"start": 774.0, "end": 785.0, "text": " So we're going to store this information in a 2D array and the rows are going to be the first character of the bygram and the columns are going to be the second character."}, {"start": 785.0, "end": 793.0, "text": " And each entry in the two dimensional array will tell us how often that first character follows the second character in the data set."}, {"start": 793.0, "end": 799.0, "text": " So in particular, the array representation that we're going to use or the library is that of PyTorch."}, {"start": 799.0, "end": 810.0, "text": " And PyTorch is a deep learning neural framework, but part of it is also this torch.tensor which allows us to create multi-dimensional arrays and manipulate them very efficiently."}, {"start": 810.0, "end": 815.0, "text": " So let's import PyTorch, which you can do by import Torch."}, {"start": 815.0, "end": 821.0, "text": " And then we can create a race. So let's create a array of zeros."}, {"start": 821.0, "end": 827.0, "text": " And we give it a size of this array. Let's create a 3 by 5 array as an example."}, {"start": 827.0, "end": 837.0, "text": " And this is a 3 by 5 array of zeros. And by default, you'll notice a dot d type, which is short for data type, is flow 32."}, {"start": 837.0, "end": 846.0, "text": " So these are single precision floating point numbers. Because we are going to represent counts, let's actually use d type as Torch.t in 32."}, {"start": 846.0, "end": 850.0, "text": " So these are 32 bit integers."}, {"start": 850.0, "end": 861.0, "text": " So now you see that we have integer data inside this tensor. Now, tensors allow us to really manipulate all the individual entries and do it very efficiently."}, {"start": 861.0, "end": 873.0, "text": " So for example, if we want to change this bit, we have to index into the tensor. And in particular, here, this is the first row. And the, because it's zero indexed."}, {"start": 873.0, "end": 884.0, "text": " So this is row index one and column index zero, one, two, three. So a at one comma three, we can set that to one."}, {"start": 884.0, "end": 893.0, "text": " And then a will have a one over there. We can of course also do things like this. So now a will be to over there."}, {"start": 893.0, "end": 903.0, "text": " And also we can, for example, say a zero zero is five. And then a will have a five over here. So that's how we can index into the arrays."}, {"start": 903.0, "end": 910.0, "text": " Now, of course, the array that we are interested in is much much bigger. So for our purposes, we have 26 letters of the alphabet."}, {"start": 910.0, "end": 919.0, "text": " And then we have two special characters as and e. So we want 26 plus two or 28 by 28 array."}, {"start": 919.0, "end": 926.0, "text": " And let's call it the capital N because it's going to represent the sort of the counts. Let me raise this stuff."}, {"start": 926.0, "end": 940.0, "text": " So that's the array that starts at zeroes, 28 by 28. And now let's copy paste this here. But instead of having an dictionary B, which we're going to erase, we now have an N."}, {"start": 940.0, "end": 949.0, "text": " Now the problem here is that we have these characters, which are strings, but we have to now basically index into a array."}, {"start": 949.0, "end": 955.0, "text": " And we have to index using integers. 
{"start": 955.0, "end": 962.0, "text": " So let's construct such a character array. And the way we're going to do this is we're going to take all the words, which is a list of strings."}, {"start": 962.0, "end": 978.0, "text": " We're going to concatenate all of it into a massive string. So this is just simply the entire data set as a single string. We're going to pass this to the set constructor, which takes this massive string and throws out duplicates, because sets do not allow duplicates."}, {"start": 978.0, "end": 988.0, "text": " So set of this will just be the set of all the lowercase characters. And there should be a total of 26 of them."}, {"start": 988.0, "end": 999.0, "text": " And now we actually don't want a set. We want a list. But we don't want a list sorted in some weird arbitrary way. We want it to be sorted from a to z."}, {"start": 999.0, "end": 1005.0, "text": " So a sorted list. So those are our characters."}, {"start": 1005.0, "end": 1013.0, "text": " Now what we want is this lookup table, as I mentioned. So let's create a special s2i, I will call it."}, {"start": 1013.0, "end": 1024.0, "text": " S is string, or character. And this will be an s to i mapping, for i, s in enumerate of these characters."}, {"start": 1024.0, "end": 1035.0, "text": " So enumerate basically gives us this iterator over the integer index and the actual element of the list. And then we are mapping the character to the integer."}, {"start": 1035.0, "end": 1044.0, "text": " So s2i is a mapping from a to zero, b to one, etc., all the way to z at 25."}, {"start": 1044.0, "end": 1056.0, "text": " And that's going to be useful here. But we actually also have to specifically set that s2i at S will be 26, and s2i at E will be 27. Right? Because z was 25."}, {"start": 1056.0, "end": 1065.0, "text": " So those are the lookups. And now we can come here and we can map both character one and character two to their integers. So ix1 will be s2i at character one."}, {"start": 1065.0, "end": 1079.0, "text": " And ix2 will be s2i of character two. And now we should be able to do this line, but using our array. So N at ix1, ix2: this is the two dimensional array indexing"}, {"start": 1079.0, "end": 1094.0, "text": " that I showed you before. And then we just do plus equals one, because everything starts at zero. So this should work and give us a large 28 by 28 array of all these counts."}, {"start": 1094.0, "end": 1105.0, "text": " So if we print N, this is the array, but of course it looks ugly. So let's erase this ugly mess and let's try to visualize it a bit nicer."}, {"start": 1105.0, "end": 1116.0, "text": " So for that, we're going to use a library called matplotlib. So matplotlib allows us to create figures. So we can do things like plt.imshow of the counts array."}, {"start": 1116.0, "end": 1130.0, "text": " So this is the 28 by 28 array. And it has structure, but even this, I would say, is still pretty ugly. So we're going to try to create a much nicer visualization of it, and I wrote a bunch of code for that."}, {"start": 1130.0, "end": 1143.0, "text": " The first thing we're going to need is we're going to need to invert this dictionary here. So s2i is the mapping from s to i, and in i2s, we're going to reverse this dictionary."}, {"start": 1143.0, "end": 1154.0, "text": " So we iterate over all the items and just reverse that mapping. So i2s maps inversely, from zero to a, one to b, etc. So we'll need that."}, 
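Putting the lookup tables and the count matrix together, a sketch of this step might look like the following; the variable names follow the transcript's s2i and i2s, and the bracketed special tokens are placeholders:

# Character-to-integer lookup (26 letters plus the two special tokens),
# its inverse, and the 28x28 bigram count matrix.
chars = sorted(list(set(''.join(words))))
s2i = {s: i for i, s in enumerate(chars)}
s2i['<S>'] = 26
s2i['<E>'] = 27
i2s = {i: s for s, i in s2i.items()}

N = torch.zeros((28, 28), dtype=torch.int32)
for w in words:
    chs = ['<S>'] + list(w) + ['<E>']
    for ch1, ch2 in zip(chs, chs[1:]):
        N[s2i[ch1], s2i[ch2]] += 1
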
So we'll need that."}, {"start": 1154.0, "end": 1160.0, "text": " And then here's the code that I came up with to try to make this a little bit nicer."}, {"start": 1160.0, "end": 1172.0, "text": " We create a figure we plot and then we do and then we visualize a bunch of things here. Let me just run it so you get a sense of what it is."}, {"start": 1172.0, "end": 1185.0, "text": " So you see here that we have the array spaced out and every one of these is basically like B follows G zero times B follows H 41 times."}, {"start": 1185.0, "end": 1197.0, "text": " So a follows J 175 times. And so what you can see that I'm doing here is first I show that entire array. And then I iterate over all the individual little cells here."}, {"start": 1197.0, "end": 1209.0, "text": " And I create a character string here, which is the inverse mapping I to S of the integer I and the integer J. So that's the diagrams in a character representation."}, {"start": 1209.0, "end": 1223.0, "text": " And then I plot just the diagram text and then I plot the number of times that this diagram occurs. Now the reason that there's a dot item here is because when you index into these arrays, these are torch tensors."}, {"start": 1223.0, "end": 1232.0, "text": " You see that we still get a tensor back. So the type of this thing you think it would be just an integer long 49 but it's actually a torched up tensor."}, {"start": 1232.0, "end": 1245.0, "text": " And so if you do dot item, then it will pop out that individual integer. So it will just be 149. So that's what's happening there. And these are just some options to make it look nice."}, {"start": 1245.0, "end": 1254.0, "text": " So what is this structure of this array? We have all these counts and we see that some of them occur often and some of them do not occur often."}, {"start": 1254.0, "end": 1264.0, "text": " Now if you scrutinize this carefully, you will notice that we're not actually being very clever. That's because when you come over here, you'll notice that for example, we have an entire row of completely zeroes."}, {"start": 1264.0, "end": 1274.0, "text": " And that's because the end character is never possibly going to be the first character of a diagram because we're always placing these end tokens all at the end of the diagram."}, {"start": 1274.0, "end": 1287.0, "text": " Similarly, we have entire column zeros here because the S character will never possibly be the second element of a diagram because we always start with S and we end with E and we only have the words in between."}, {"start": 1287.0, "end": 1298.0, "text": " So we have an entire column of zeros and entire row of zeros. And in this little two by two matrix here as well, the only one that can possibly happen is if S directly follows E."}, {"start": 1298.0, "end": 1307.0, "text": " That can be non-zero if we have a word that has no letters. So in that case, there's no letters in a word. It's an empty word and we just have S follows E."}, {"start": 1307.0, "end": 1315.0, "text": " But the other ones are just not possible. And so we're basically wasting space and not only that, but the S and the E are getting very crowded here."}, {"start": 1315.0, "end": 1325.0, "text": " I was using these brackets because there's convention and natural language processing to use these kinds of brackets to denote special tokens. But we're going to use something else."}, {"start": 1325.0, "end": 1333.0, "text": " So let's fix all this and make it prettier. We're not actually going to have two special tokens. 
We're only going to have one special token."}, {"start": 1333.0, "end": 1344.0, "text": " So we're going to have n by n array of 27 by set 27 instead. Instead of having two, we will just have one and I will call it a dot."}, {"start": 1344.0, "end": 1350.0, "text": " Okay. Let me swing this over here."}, {"start": 1350.0, "end": 1356.0, "text": " Now, one more thing that I would like to do is I would actually like to make this special character half position zero."}, {"start": 1356.0, "end": 1362.0, "text": " And I would like to offset all the other letters off. I find that a little bit more pleasing."}, {"start": 1362.0, "end": 1369.0, "text": " So we need a plus one here so that the first character, which is A, will start at one."}, {"start": 1369.0, "end": 1382.0, "text": " So S to I will now be a starts at one and dot is zero. And I to us, of course, we're not changing this because I to us just creates reverse mapping and this will work fine."}, {"start": 1382.0, "end": 1386.0, "text": " So one is a to us B zero is dot."}, {"start": 1386.0, "end": 1392.0, "text": " So we reverse that here. We have a dot and a dot."}, {"start": 1392.0, "end": 1402.0, "text": " This should work fine. Make sure I started zeros count. And then here we don't go up to 28. We go up to 27."}, {"start": 1402.0, "end": 1411.0, "text": " And this should just work."}, {"start": 1411.0, "end": 1416.0, "text": " Okay. So we see that dot dot never happened. It's at zero because we don't have empty words."}, {"start": 1416.0, "end": 1423.0, "text": " Then this row here now is just very simply the counts for all the first letters."}, {"start": 1423.0, "end": 1433.0, "text": " So G J starts a word H starts word I starts a word, etc. And then these are all the ending characters."}, {"start": 1433.0, "end": 1437.0, "text": " And in between we have the structure of what characters follow each other."}, {"start": 1437.0, "end": 1441.0, "text": " So this is the counts array of our entire data set."}, {"start": 1441.0, "end": 1449.0, "text": " So this array actually has all the information necessary for us to actually sample from this by gram character level language model."}, {"start": 1449.0, "end": 1455.0, "text": " And roughly speaking we're going to do is we're just going to start following these probabilities and these counts."}, {"start": 1455.0, "end": 1458.0, "text": " And we're going to start sampling from the model."}, {"start": 1458.0, "end": 1464.0, "text": " So in the beginning, of course, we start with the dot, the start token dot."}, {"start": 1464.0, "end": 1470.0, "text": " So to sample the first character of a name, we're looking at this row here."}, {"start": 1470.0, "end": 1479.0, "text": " So we see that we have the counts and those counts externally are telling us how often any one of these characters is to start a word."}, {"start": 1479.0, "end": 1493.0, "text": " So if we take this n and we grab the first row, we can do that by using just indexing at zero and then using this notation colon for the rest of that row."}, {"start": 1493.0, "end": 1502.0, "text": " So n zero colon is indexing into the zero row and then grabbing all the columns."}, {"start": 1502.0, "end": 1506.0, "text": " And so this will give us a one dimensional array of the first row."}, {"start": 1506.0, "end": 1513.0, "text": " So zero for four ten, you know, zero for four ten, one three oh six one five four two, etc."}, {"start": 1513.0, "end": 1514.0, "text": " It's just the first row."}, {"start": 1514.0, "end": 1519.0, "text": " 
The shape of this is 27, just the row of 27."}, {"start": 1519.0, "end": 1526.0, "text": " And the other way that you can do this is you don't actually give the colon, you just grab the zeroth row like this."}, {"start": 1526.0, "end": 1528.0, "text": " This is equivalent."}, {"start": 1528.0, "end": 1535.0, "text": " Now these are the counts, and now what we'd like to do is we'd like to basically sample from this."}, {"start": 1535.0, "end": 1539.0, "text": " Since these are the raw counts, we actually have to convert this to probabilities."}, {"start": 1539.0, "end": 1542.0, "text": " So we create a probability vector."}, {"start": 1542.0, "end": 1550.0, "text": " So we'll take N of zero and we'll actually convert it to float first."}, {"start": 1550.0, "end": 1554.0, "text": " Okay, so these integers are converted to floating point numbers."}, {"start": 1554.0, "end": 1559.0, "text": " And the reason we're creating floats is because we're about to normalize these counts."}, {"start": 1559.0, "end": 1569.0, "text": " So to create a probability distribution here, we want to divide: we basically want to do p equals p divided by p.sum()."}, {"start": 1569.0, "end": 1574.0, "text": " And now we get a vector of smaller numbers, and these are now probabilities."}, {"start": 1574.0, "end": 1579.0, "text": " So of course, because we divided by the sum, the sum of p now is one."}, {"start": 1579.0, "end": 1581.0, "text": " So this is a nice proper probability distribution."}, {"start": 1581.0, "end": 1588.0, "text": " It sums to one, and this is giving us the probability for any single character to be the first character of a word."}, {"start": 1588.0, "end": 1592.0, "text": " So now we can try to sample from this distribution. To sample from these distributions,"}, {"start": 1592.0, "end": 1596.0, "text": " we're going to use torch.multinomial, which I've pulled up here."}, {"start": 1596.0, "end": 1603.0, "text": " So torch.multinomial returns samples from the multinomial probability distribution,"}, {"start": 1603.0, "end": 1608.0, "text": " which is a complicated way of saying, you give me probabilities and I will give you integers,"}, {"start": 1608.0, "end": 1611.0, "text": " which are sampled according to the probability distribution."}, {"start": 1611.0, "end": 1613.0, "text": " So this is the signature of the method."}, {"start": 1613.0, "end": 1619.0, "text": " And to make everything deterministic, we're going to use a generator object in PyTorch."}, {"start": 1619.0, "end": 1621.0, "text": " So this makes everything deterministic."}, {"start": 1621.0, "end": 1627.0, "text": " So when you run this on your computer, you're going to get the exact same results that I'm getting here on my computer."}, {"start": 1627.0, "end": 1632.0, "text": " So let me show you how this works."}, {"start": 1632.0, "end": 1638.0, "text": " Here's the deterministic way of creating a torch generator object,"}, {"start": 1638.0, "end": 1641.0, "text": " seeding it with some number that we can agree on."}, {"start": 1641.0, "end": 1645.0, "text": " So that seeds a generator and gets us an object g."}, {"start": 1645.0, "end": 1652.0, "text": " And then we can pass that g to a function that creates random numbers here, torch.rand,"}, {"start": 1652.0, "end": 1655.0, "text": " which creates random numbers, three of them."}, {"start": 1655.0, "end": 1660.0, "text": " And it's using this generator object as a source of randomness."}, {"start": 1660.0, "end": 1666.0, "text": " So without normalizing it, 
I can just print."}, {"start": 1666.0, "end": 1671.0, "text": " This is sort of like numbers between zero and one that are random according to this thing."}, {"start": 1671.0, "end": 1677.0, "text": " And whenever I run it again, I'm always going to get the same result because I keep using the same generator object,"}, {"start": 1677.0, "end": 1679.0, "text": " which I'm seeding here."}, {"start": 1679.0, "end": 1687.0, "text": " And then if I divide to normalize, I'm going to get a nice probability distribution of just three elements."}, {"start": 1687.0, "end": 1691.0, "text": " And then we can use torshtut multinomial to draw samples from it."}, {"start": 1691.0, "end": 1694.0, "text": " So this is what that looks like."}, {"start": 1694.0, "end": 1701.0, "text": " So torshtut multinomial will take the torshtensor of probability distributions."}, {"start": 1701.0, "end": 1705.0, "text": " Then we can ask for a number of samples like C20."}, {"start": 1705.0, "end": 1711.0, "text": " Replacement equals true means that when we draw an element, we can draw it,"}, {"start": 1711.0, "end": 1716.0, "text": " and then we can put it back into the list of eligible indices to draw again."}, {"start": 1716.0, "end": 1722.0, "text": " And we have to specify replacement as true because by default, for some reason, it's false."}, {"start": 1722.0, "end": 1726.0, "text": " So I think it's just something to be careful with."}, {"start": 1726.0, "end": 1728.0, "text": " And the generator is passed in here."}, {"start": 1728.0, "end": 1732.0, "text": " So we are going to always get deterministic results, the same results."}, {"start": 1732.0, "end": 1738.0, "text": " So if I run these two, we're going to get a bunch of samples from this distribution."}, {"start": 1738.0, "end": 1745.0, "text": " Now you'll notice here that the probability for the first element in this tensor is 60%."}, {"start": 1745.0, "end": 1751.0, "text": " So in these 20 samples, we'd expect 60% of them to be zero."}, {"start": 1751.0, "end": 1754.0, "text": " We'd expect 30% of them to be one."}, {"start": 1754.0, "end": 1762.0, "text": " And because the element index 2 has only 10% probability, very few of these samples should be two."}, {"start": 1762.0, "end": 1765.0, "text": " And indeed, we only have a small number of twos."}, {"start": 1765.0, "end": 1769.0, "text": " And we can sample as many as we would like."}, {"start": 1769.0, "end": 1776.0, "text": " And the more we sample, the more these numbers should, roughly, have the distribution here."}, {"start": 1776.0, "end": 1782.0, "text": " So we should have lots of zeros, half as many ones."}, {"start": 1782.0, "end": 1791.0, "text": " And we should have three times s few, sorry, s few ones, and three times s few twos."}, {"start": 1791.0, "end": 1793.0, "text": " So you see that we have very few twos."}, {"start": 1793.0, "end": 1796.0, "text": " We have some ones and most of them are zero."}, {"start": 1796.0, "end": 1799.0, "text": " So that's what torsion multilimals doing."}, {"start": 1799.0, "end": 1802.0, "text": " For us here, we are interested in this row."}, {"start": 1802.0, "end": 1806.0, "text": " We've created this p here."}, {"start": 1806.0, "end": 1809.0, "text": " And now we can sample from it."}, {"start": 1809.0, "end": 1817.0, "text": " So if we use the same seed, and then we sample from this distribution, let's just get one sample."}, {"start": 1817.0, "end": 1822.0, "text": " Then we see that the sample is, say, 13."}, {"start": 1822.0, "end": 
1825.0, "text": " So this will be the index."}, {"start": 1825.0, "end": 1828.0, "text": " And let's, you see how it's a tensor that wraps 13."}, {"start": 1828.0, "end": 1832.0, "text": " We again have to use dot item to pop out that integer."}, {"start": 1832.0, "end": 1837.0, "text": " And now index would be just number 13."}, {"start": 1837.0, "end": 1846.0, "text": " And of course, we can map the I2S of IX to figure out exactly which character we're sampling here."}, {"start": 1846.0, "end": 1848.0, "text": " We're sampling M."}, {"start": 1848.0, "end": 1853.0, "text": " So we're saying that the first character is in our generation."}, {"start": 1853.0, "end": 1855.0, "text": " And just look at the row here."}, {"start": 1855.0, "end": 1860.0, "text": " M was drawn, and we can see that M actually starts a large number of words."}, {"start": 1860.0, "end": 1865.0, "text": " M started 2,500 words out of 32,000 words."}, {"start": 1865.0, "end": 1869.0, "text": " So almost a bit less than 10% of the words start with M."}, {"start": 1869.0, "end": 1875.0, "text": " So this was actually fairly likely character to draw."}, {"start": 1875.0, "end": 1877.0, "text": " So that would be the first character of our word."}, {"start": 1877.0, "end": 1880.0, "text": " And now we can continue to sample more characters."}, {"start": 1880.0, "end": 1883.0, "text": " Because now we know that M started."}, {"start": 1883.0, "end": 1885.0, "text": " Now M is already sampled."}, {"start": 1885.0, "end": 1889.0, "text": " So now to draw the next character, we will come back here."}, {"start": 1889.0, "end": 1893.0, "text": " And we will work for the row that starts with M."}, {"start": 1893.0, "end": 1897.0, "text": " So you see M and we have a row here."}, {"start": 1897.0, "end": 1904.0, "text": " So we see that M dot is 516, M A is this many, M B is this many, etc."}, {"start": 1904.0, "end": 1906.0, "text": " So these are the counts for the next row."}, {"start": 1906.0, "end": 1909.0, "text": " And that's the next character that we are going to now generate."}, {"start": 1909.0, "end": 1912.0, "text": " So I think we are ready to actually just write out a loop."}, {"start": 1912.0, "end": 1915.0, "text": " So we are going to start to get a sense of how this is going to go."}, {"start": 1915.0, "end": 1922.0, "text": " We always begin at index 0, because that's the start token."}, {"start": 1922.0, "end": 1930.0, "text": " And then while true, we are going to grab the row corresponding to index that we are currently on."}, {"start": 1930.0, "end": 1934.0, "text": " So that's n array at ix."}, {"start": 1934.0, "end": 1939.0, "text": " Converted to float is rp."}, {"start": 1939.0, "end": 1945.0, "text": " Then we normalize this p to sum to 1."}, {"start": 1945.0, "end": 1948.0, "text": " I accidentally ran the infinite loop."}, {"start": 1948.0, "end": 1951.0, "text": " We normalize p to sum to 1."}, {"start": 1951.0, "end": 1954.0, "text": " Then we need this generator object."}, {"start": 1954.0, "end": 1956.0, "text": " And we are going to initialize up here."}, {"start": 1956.0, "end": 1961.0, "text": " And we are going to draw a single sample from this distribution."}, {"start": 1961.0, "end": 1966.0, "text": " And then this is going to tell us what index is going to be next."}, {"start": 1966.0, "end": 1972.0, "text": " If the index sampled is 0, then that's now the end token."}, {"start": 1972.0, "end": 1975.0, "text": " So we will break."}, {"start": 1975.0, "end": 1981.0, "text": " Otherwise, 
we are going to print s2i of ix."}, {"start": 1981.0, "end": 1985.0, "text": " i2s of ix."}, {"start": 1985.0, "end": 1987.0, "text": " And that's pretty much it."}, {"start": 1987.0, "end": 1990.0, "text": " We're just... this should work."}, {"start": 1990.0, "end": 1992.0, "text": " Okay, more."}, {"start": 1992.0, "end": 1994.0, "text": " So that's the name that we've sampled."}, {"start": 1994.0, "end": 1997.0, "text": " We started with m."}, {"start": 1997.0, "end": 2001.0, "text": " The next step was o, then r, and then dot."}, {"start": 2001.0, "end": 2005.0, "text": " And this dot, we printed here as well."}, {"start": 2005.0, "end": 2010.0, "text": " So let's not do this a few times."}, {"start": 2010.0, "end": 2017.0, "text": " So let's actually create an out list here."}, {"start": 2017.0, "end": 2020.0, "text": " And instead of printing, we're going to append."}, {"start": 2020.0, "end": 2024.0, "text": " So out that append this character."}, {"start": 2024.0, "end": 2027.0, "text": " And then here, let's just print it at the end."}, {"start": 2027.0, "end": 2030.0, "text": " So let's just join up all the outs."}, {"start": 2030.0, "end": 2032.0, "text": " And we're just going to print more."}, {"start": 2032.0, "end": 2035.0, "text": " Now we're always getting the same result because of the generator."}, {"start": 2035.0, "end": 2037.0, "text": " So who want to do this a few times?"}, {"start": 2037.0, "end": 2041.0, "text": " We can go for i and range 10."}, {"start": 2041.0, "end": 2043.0, "text": " We can sample 10 names."}, {"start": 2043.0, "end": 2046.0, "text": " And we can just do that 10 times."}, {"start": 2046.0, "end": 2049.0, "text": " And these are the names that we're getting out."}, {"start": 2049.0, "end": 2054.0, "text": " 10, 20."}, {"start": 2054.0, "end": 2057.0, "text": " I'll be honest with you, this doesn't look right."}, {"start": 2057.0, "end": 2061.0, "text": " So I've started a few minutes to convince myself that it actually is right."}, {"start": 2061.0, "end": 2068.0, "text": " The reason these samples are so terrible is that by-gram language model is actually just like really terrible."}, {"start": 2068.0, "end": 2070.0, "text": " We can generate a few more here."}, {"start": 2070.0, "end": 2075.0, "text": " And you can see that they're kind of like their name, like a little bit, like yanu, irailly, etc."}, {"start": 2075.0, "end": 2079.0, "text": " But they're just like totally messed up."}, {"start": 2079.0, "end": 2083.0, "text": " And I mean the reason that this is so bad, like we're generating h as a name."}, {"start": 2083.0, "end": 2086.0, "text": " But you have to think through it from the model's eyes."}, {"start": 2086.0, "end": 2089.0, "text": " It doesn't know that this h is the very first h."}, {"start": 2089.0, "end": 2092.0, "text": " All it knows is that h was previously."}, {"start": 2092.0, "end": 2095.0, "text": " And now how likely is h the last character?"}, {"start": 2095.0, "end": 2098.0, "text": " Well, it's somewhat likely."}, {"start": 2098.0, "end": 2100.0, "text": " And so it just makes it last character."}, {"start": 2100.0, "end": 2104.0, "text": " It doesn't know that there were other things before it or there were not other things before it."}, {"start": 2104.0, "end": 2108.0, "text": " And so that's why I'm generating all these like nonsense names."}, {"start": 2108.0, "end": 2121.0, "text": " And the other way to do this is to convince yourself that it is actually doing something reasonable, even though it's so 
terrible, is these little piece here are 27, right?"}, {"start": 2121.0, "end": 2123.0, "text": " Like 27."}, {"start": 2123.0, "end": 2126.0, "text": " So how about if we did something like this?"}, {"start": 2126.0, "end": 2129.0, "text": " Instead of having any structure whatsoever."}, {"start": 2129.0, "end": 2137.0, "text": " How about if p was just a torch dot ones of 27?"}, {"start": 2137.0, "end": 2139.0, "text": " By default, this is a float 32."}, {"start": 2139.0, "end": 2140.0, "text": " So this is fine."}, {"start": 2140.0, "end": 2143.0, "text": " Divide 27."}, {"start": 2143.0, "end": 2150.0, "text": " So what I'm doing here is this is the uniform distribution, which will make everything equally likely."}, {"start": 2150.0, "end": 2152.0, "text": " And we can sample from that."}, {"start": 2152.0, "end": 2154.0, "text": " So let's see if that doesn't need better."}, {"start": 2154.0, "end": 2155.0, "text": " Okay."}, {"start": 2155.0, "end": 2160.0, "text": " So it's this is what you have from a model that is completely untrained where everything is equally likely."}, {"start": 2160.0, "end": 2162.0, "text": " So it's obviously garbage."}, {"start": 2162.0, "end": 2168.0, "text": " And then if we have a trained model, which is trained on just by grams, this is what we get."}, {"start": 2168.0, "end": 2172.0, "text": " So you can see that it is more name like it is actually working."}, {"start": 2172.0, "end": 2176.0, "text": " It's just by gram is so terrible and we have to do better."}, {"start": 2176.0, "end": 2180.0, "text": " Now next I would like to fix an inefficiency that we have going on here."}, {"start": 2180.0, "end": 2186.0, "text": " Because what we're doing here is we're always fetching a row of n from the counts matrix up ahead."}, {"start": 2186.0, "end": 2188.0, "text": " And we're always doing the same things."}, {"start": 2188.0, "end": 2190.0, "text": " We're converting to float and we're dividing."}, {"start": 2190.0, "end": 2193.0, "text": " And we're doing this every single iteration of the slope."}, {"start": 2193.0, "end": 2195.0, "text": " And we just keep normalizing these rows over and over again."}, {"start": 2195.0, "end": 2197.0, "text": " And it's extremely inefficient and wasteful."}, {"start": 2197.0, "end": 2204.0, "text": " So what I'd like to do is I'd like to actually prepare a matrix capital p that will just have the probabilities in it."}, {"start": 2204.0, "end": 2208.0, "text": " So in other words, it's going to be the same as the capital n matrix here of counts."}, {"start": 2208.0, "end": 2213.0, "text": " But every single row will have the row of probabilities that is normalized to one,"}, {"start": 2213.0, "end": 2218.0, "text": " indicating the probability distribution for the next character given the character before it."}, {"start": 2218.0, "end": 2221.0, "text": " As defined by which row we're in."}, {"start": 2221.0, "end": 2225.0, "text": " So basically what we'd like to do is we'd like to just do it up front here."}, {"start": 2225.0, "end": 2228.0, "text": " And then we would like to just use that row here."}, {"start": 2228.0, "end": 2233.0, "text": " So here we would like to just do p equals p of ix instead."}, {"start": 2233.0, "end": 2234.0, "text": " Okay."}, {"start": 2234.0, "end": 2239.0, "text": " The other reason I want to do this is not just for efficiency, but also I would like us to practice these"}, {"start": 2239.0, "end": 2241.0, "text": " and dimensional tensors."}, {"start": 2241.0, "end": 2244.0, 
"text": " And I'd like us to practice their manipulation."}, {"start": 2244.0, "end": 2247.0, "text": " And especially something that's called broadcasting that we'll go into in a second."}, {"start": 2247.0, "end": 2253.0, "text": " We're actually going to have to become very good at these tensor manipulations because we're going to build out all the way to transformers."}, {"start": 2253.0, "end": 2257.0, "text": " We're going to be doing some pretty complicated array operations for efficiency."}, {"start": 2257.0, "end": 2262.0, "text": " And we need to really understand that and be very good at it."}, {"start": 2262.0, "end": 2268.0, "text": " So in doing what we want to do is we first want to grab the floating point copy of n."}, {"start": 2268.0, "end": 2271.0, "text": " And I'm mimicking the line here basically."}, {"start": 2271.0, "end": 2276.0, "text": " And then we want to divide all the rows so that they sum to one."}, {"start": 2276.0, "end": 2278.0, "text": " So we'd like to do something like this."}, {"start": 2278.0, "end": 2280.0, "text": " p divide p dot sum."}, {"start": 2280.0, "end": 2288.0, "text": " But now we have to be careful because p dot sum actually produces a sum."}, {"start": 2288.0, "end": 2289.0, "text": " Sorry."}, {"start": 2289.0, "end": 2298.0, "text": " P equals n dot float copy. p dot sum produces a sums up all of the counts of this entire matrix n."}, {"start": 2298.0, "end": 2301.0, "text": " And gives us a single number of just the summation of everything."}, {"start": 2301.0, "end": 2303.0, "text": " So that's not the way we want to divide."}, {"start": 2303.0, "end": 2310.0, "text": " We want to simultaneously and in parallel divide all the rows by their respective sums."}, {"start": 2310.0, "end": 2315.0, "text": " So what we have to do now is we have to go into documentation for tors.sum."}, {"start": 2315.0, "end": 2323.0, "text": " And we can scroll down here to a definition that is relevant to us, which is where we don't only provide an input array that we want to sum."}, {"start": 2323.0, "end": 2327.0, "text": " But we also provide the dimension along which we want to sum."}, {"start": 2327.0, "end": 2332.0, "text": " And in particular, we want to sum up over rows."}, {"start": 2332.0, "end": 2338.0, "text": " Now one more argument that I want you to pay attention to here is the keep them as false."}, {"start": 2338.0, "end": 2342.0, "text": " If keep them is true and the output tensor is of the same size as input,"}, {"start": 2342.0, "end": 2347.0, "text": " except of course the dimension along which you summed, which will become just one."}, {"start": 2347.0, "end": 2354.0, "text": " But if you pass in, keep them as false, then this dimension is squeezed out."}, {"start": 2354.0, "end": 2364.0, "text": " And so tors.sum not only does the sum and collapses dimension to be of size one, but in addition, it does what's called a squeeze where it squeezes out, it squeezes out that dimension."}, {"start": 2364.0, "end": 2370.0, "text": " So basically what we want here is we instead want to do p dot sum of sum axis."}, {"start": 2370.0, "end": 2375.0, "text": " And in particular, notice that p dot shape is 27 by 27."}, {"start": 2375.0, "end": 2382.0, "text": " So when we sum up across axis zero, then we would be taking the zero dimension and we would be summing across it."}, {"start": 2382.0, "end": 2391.0, "text": " So when keep them as true, then this thing will not only give us the counts across along the columns,"}, {"start": 2391.0, 
"end": 2395.0, "text": " but notice that basically the shape of this is one by 27."}, {"start": 2395.0, "end": 2400.0, "text": " We just get a row vector and the reason we get a row vector here again is because we pass in zero dimension."}, {"start": 2400.0, "end": 2405.0, "text": " So this zero dimension becomes one and we've done a sum and we get a row."}, {"start": 2405.0, "end": 2415.0, "text": " And so basically we've done the sum this way vertically and arrived at just a single one by 27 vector of counts."}, {"start": 2415.0, "end": 2419.0, "text": " What happens when you take out keep them is that we just get 27."}, {"start": 2419.0, "end": 2428.0, "text": " So it squeezes out that dimension and we just get one dimensional vector of size 27."}, {"start": 2428.0, "end": 2439.0, "text": " Now we don't actually want one by 27 row vector because that gives us the counts or the sums across the columns."}, {"start": 2439.0, "end": 2445.0, "text": " We actually want to sum the other way along dimension one and you'll see that the shape of this is 27 by 1."}, {"start": 2445.0, "end": 2452.0, "text": " So it's a column vector. It's a 27 by 1 vector of counts."}, {"start": 2452.0, "end": 2463.0, "text": " And that's because what's happened here is that we're going horizontally and this 27 by 27 matrix becomes a 27 by 1 array."}, {"start": 2463.0, "end": 2470.0, "text": " Now you'll notice by the way that the actual numbers of these counts are identical."}, {"start": 2470.0, "end": 2477.0, "text": " And that's because this special array of counts here comes from by grams to the sticks and actually it just so happens by chance."}, {"start": 2477.0, "end": 2485.0, "text": " Or because of the way this array is constructed that the sums along the columns or along the rows horizontally or vertically is identical."}, {"start": 2485.0, "end": 2492.0, "text": " But actually what we want to do in this case is we want to sum across the rows horizontally."}, {"start": 2492.0, "end": 2499.0, "text": " So what we want here is be that some of one with keep them true 27 by 1 column vector."}, {"start": 2499.0, "end": 2504.0, "text": " And now what we want to do is we want to divide by that."}, {"start": 2504.0, "end": 2512.0, "text": " Now we have to be careful here again. Is it possible to take what's a p. shape you see here is 27 by 27."}, {"start": 2512.0, "end": 2521.0, "text": " Is it possible to take a 27 by 27 array and divide it by what is a 27 by 1 array?"}, {"start": 2521.0, "end": 2528.0, "text": " Is that an operation that you can do? And whether or not you can perform this operation is determined by what's called broadcasting rules."}, {"start": 2528.0, "end": 2543.0, "text": " So if you just search broadcasting semantics in torch, you'll notice that there's a special definition for what's called broadcasting that for whether or not these two arrays can be combined in a binary operation like division."}, {"start": 2543.0, "end": 2548.0, "text": " So the first condition is each tensor has at least one dimension, which is the case for us."}, {"start": 2548.0, "end": 2557.0, "text": " And then when iterating over the dimension sizes starting at the trailing dimension, the dimension sizes must either be equal, one of them is one or one of them does not exist."}, {"start": 2557.0, "end": 2567.0, "text": " So let's do that. 
We need to align the two arrays and their shapes, which is very easy because both of these shapes have two elements."}, {"start": 2567.0, "end": 2572.0, "text": " So they're aligned. Then we iterate from the right, going to the left."}, {"start": 2572.0, "end": 2578.0, "text": " Each dimension must either be equal, or one of them is a one, or one of them does not exist."}, {"start": 2578.0, "end": 2586.0, "text": " So in this case, they're not equal, but one of them is a one. So this is fine. And then this dimension, they're both equal. So this is fine."}, {"start": 2586.0, "end": 2592.0, "text": " So all the dimensions are fine, and therefore this operation is broadcastable."}, {"start": 2592.0, "end": 2600.0, "text": " So that means that this operation is allowed. And what is it that these arrays do when you divide a 27 by 27 by a 27 by 1?"}, {"start": 2600.0, "end": 2609.0, "text": " What it does is it takes this dimension of one and it stretches it out: it copies it to match 27 here, in this case."}, {"start": 2609.0, "end": 2620.0, "text": " So in our case, it takes this column vector, which is 27 by 1, and it copies it 27 times to make both of these be 27 by 27 internally."}, {"start": 2620.0, "end": 2628.0, "text": " You can think of it that way. And so it copies those counts, and then it does an element-wise division, which is what we want."}, {"start": 2628.0, "end": 2634.0, "text": " Because we want to divide by these counts on every single one of these columns in this matrix."}, {"start": 2634.0, "end": 2645.0, "text": " So we expect that this will normalize every single row. And we can check that this is true by taking the first row, for example, and taking its sum."}, {"start": 2645.0, "end": 2657.0, "text": " We expect this to be one because it's now normalized. And if we actually correctly normalized all the rows, we expect to get the exact same result here."}, {"start": 2657.0, "end": 2665.0, "text": " So let's run this. It's the exact same result. So this is correct. So now I would like to scare you a little bit."}, {"start": 2665.0, "end": 2672.0, "text": " I basically want to encourage you very strongly to read through the broadcasting semantics, and I encourage you to treat this with respect."}, {"start": 2672.0, "end": 2683.0, "text": " It's not something to play fast and loose with. It's something to really respect, really understand, and maybe look up some tutorials for broadcasting and practice it, and be careful with it, because you can very quickly run into bugs."}, {"start": 2683.0, "end": 2690.0, "text": " Let me show you what I mean. You see how here we have p.sum of 1, keepdim equals True."}, {"start": 2690.0, "end": 2698.0, "text": " The shape of this is 27 by 1. Let me take out this line just so we have the N, and then we can see the counts."}, {"start": 2698.0, "end": 2706.0, "text": " We can see that these are all the counts across all the rows, and it's a 27 by 1 column vector."}, {"start": 2706.0, "end": 2713.0, "text": " Now suppose that I tried to do the following, but I erase keepdim equals True here."}, {"start": 2713.0, "end": 2722.0, "text": " What does that do? If keepdim is not true, it's false, then remember, according to the documentation, it gets rid of this dimension of 1. It squeezes it out."}, {"start": 2722.0, "end": 2731.0, "text": " So basically we just get all the same counts, the same result, except the shape of it is not 27 by 1; it is just 27. The 1 disappears."}, 
It is just 27 by 1 disappears."}, {"start": 2731.0, "end": 2740.0, "text": " But all the counts are the same. So you'd think that this divide that would work."}, {"start": 2740.0, "end": 2746.0, "text": " First of all, can we even write this? Is it even expected to run? Is it broadcastable?"}, {"start": 2746.0, "end": 2760.0, "text": " Let's determine if this result is broadcastable. p.1 is shape. It's 27. This is 27 by 27. So 27 by 27 broadcasting into 27."}, {"start": 2760.0, "end": 2770.0, "text": " So now suppose broadcasting number 1 align all the dimensions on the right done. Now iteration over all the dimensions started from the right going to the left."}, {"start": 2770.0, "end": 2780.0, "text": " All the dimensions must either be equal. One of them must be 1 or one then does not exist. So here they are or equal. Here the dimension does not exist."}, {"start": 2780.0, "end": 2792.0, "text": " So internally what broadcasting will do is it will create a 1 here and then we see that one of them is a 1 and this will get copied and this will run. This will broadcast."}, {"start": 2792.0, "end": 2808.0, "text": " Okay, so you'd expect this to work because we are this broadcast and this we can divide this. Now if I run this, you'd expect it to work but it doesn't."}, {"start": 2808.0, "end": 2817.0, "text": " Now you actually get garbage. You got a wrong result because this is actually a bug. This keeps them equal to true."}, {"start": 2817.0, "end": 2831.0, "text": " Makes it work. This is a bug. In both cases we are doing the correct counts. We are summing up across the rows but keep them saving us and making it work."}, {"start": 2831.0, "end": 2842.0, "text": " So in this case, I'd like you to encourage you to potentially like pause this video at this point and try to think about why this is buggy and why the keep them was necessary here."}, {"start": 2842.0, "end": 2849.0, "text": " Okay, so the reason to do for this is I'm trying to hint it here when I was giving you a bit of a hint on how this works."}, {"start": 2849.0, "end": 2866.0, "text": " This 27 vector internally inside the broadcasting, this becomes a 1 by 27 and 1 by 27 is a row vector. Right? And now we are dividing 27 by 27 by 1 by 27 and torch will replicate this dimension."}, {"start": 2866.0, "end": 2880.0, "text": " So basically it will take this row vector and it will copy it vertically now 27 times so the 27 by 27 lies exactly and element wise divides."}, {"start": 2880.0, "end": 2889.0, "text": " And so basically what's happening here is we're actually normalizing the columns instead of normalizing the rows."}, {"start": 2889.0, "end": 2903.0, "text": " So you can check what's happening here is that p at 0 which is the first row of p dot sum is not 1 it's 7. It is the first column as an example that sums to 1."}, {"start": 2903.0, "end": 2916.0, "text": " So to summarize where does the issue come from? The issue comes from the silent adding of the dimension here because in broadcasting rules you align on the right and go from right to left and if the dimension doesn't exist you create it."}, {"start": 2916.0, "end": 2926.0, "text": " So that's where the problem happens. We still did the counts correctly. We did the counts across the rows and we got the counts on the right here as a column vector."}, {"start": 2926.0, "end": 2937.0, "text": " But because the key things was true this dimension was discarded and now we just have a vector 27. 
And because of broadcasting the way it works this vector of 27 suddenly becomes a row vector."}, {"start": 2937.0, "end": 2947.0, "text": " And then this row vector gets replicated vertically and that every single point we are dividing by the count in the opposite direction."}, {"start": 2947.0, "end": 2954.0, "text": " So this thing just doesn't work. This needs to be keep them as true in this case."}, {"start": 2954.0, "end": 2964.0, "text": " So then then we have that p at 0 is normalized. And conversely the first column you'd expect to potentially not be normalized."}, {"start": 2964.0, "end": 2975.0, "text": " And this is what makes it work. So pretty subtle and hopefully this helps to scare you that you should have respect for broadcasting. Be careful."}, {"start": 2975.0, "end": 2985.0, "text": " Check your work and understand how it works under the hood and make sure that it's broadcasting in the direction that you like. Otherwise you're going to introduce very subtle bugs, very hard to find bugs."}, {"start": 2985.0, "end": 2994.0, "text": " And just be careful. One more note on efficiency. We don't want to be doing this here because this creates a completely new tensor that we store into p."}, {"start": 2994.0, "end": 3004.0, "text": " We prefer to use in place operations if possible. So this would be an in place operation has the potential to be faster. It doesn't create new memory under the hood."}, {"start": 3004.0, "end": 3016.0, "text": " And then let's erase this. We don't need it. And let's also just do fewer. Just so I'm not wasting space. Okay. So we're actually in the pre-good spot now."}, {"start": 3016.0, "end": 3027.0, "text": " We trained a bi-gram language model and we trained it really just by counting how frequently any pairing occurs and then normalizing so that we get a nice property distribution."}, {"start": 3027.0, "end": 3036.0, "text": " So really these elements of this array p are really the parameters of our bi-gram language model given us and summarizing the statistics of these diagrams."}, {"start": 3036.0, "end": 3046.0, "text": " So we trained a model and then we know how to sample from a model. We just iteratively sampled the next character and feed it in each time and get a next character."}, {"start": 3046.0, "end": 3058.0, "text": " Now what I'd like to do is I'd like to somehow evaluate the quality of this model. We'd like to somehow summarize the quality of this model into a single number. How good is it at predicting the training set?"}, {"start": 3058.0, "end": 3071.0, "text": " And as an example, so in the training set, we can evaluate now the training loss and this training loss is telling us about sort of the quality of this model in a single number just like we saw in micrograd."}, {"start": 3071.0, "end": 3083.0, "text": " So let's try to think through the quality of the model and how we would evaluate it. Basically what we're going to do is we're going to copy paste this code that we previously used for counting."}, {"start": 3083.0, "end": 3095.0, "text": " Okay. And let me just print these bi-grams first. We're going to use f strings and I'm going to print character one followed by character two. These are the bi-grams. And then I didn't want to do it for all the words. Let's just do first three words."}, {"start": 3095.0, "end": 3108.0, "text": " So here we have Emma, Olivia and Eva bi-grams. 
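As a quick sketch of the in-place note just above (a random stand-in replaces the real count matrix): `P /= ...` writes the result back into `P`'s existing storage instead of allocating a new tensor.

```python
import torch

N = torch.randint(1, 100, (27, 27))          # stand-in for the count matrix

# Out-of-place: the division allocates a brand new tensor and rebinds P.
P = N.float()
P = P / P.sum(1, keepdim=True)

# In-place: the division reuses P's existing memory.
P = N.float()
P /= P.sum(1, keepdim=True)
print(P[0].sum())                            # ~1.0 either way
```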
Now what we'd like to do is we'd like to basically look at the probability that the model assigns to every one of these bi-grams."}, {"start": 3108.0, "end": 3116.0, "text": " So in other words, we can look at the probability, which is summarized in the matrix B of Ix1, Ix2."}, {"start": 3116.0, "end": 3129.0, "text": " And then we can print it here as probability. And because these probabilities are way too large, let me percent or call on.4F to like truncated a bit."}, {"start": 3129.0, "end": 3135.0, "text": " So what do we have here? We're looking at the probabilities that the model assigns to every one of these bi-grams in the dataset."}, {"start": 3135.0, "end": 3152.0, "text": " And so we can see some of them are 4% 3% etc. Just to have a measuring stick in our mind, by the way, we have 27 possible characters or tokens. And if everything was equally likely, then you'd expect all these probabilities to be 4% roughly."}, {"start": 3152.0, "end": 3170.0, "text": " So anything above 4% means that we've learned something useful from these bi-grams statistics. And you see that roughly some of these are 4%, but some of them are as high as 40%, 35%. And so on. So you see that the model actually assigned a pretty high probability to whatever's in the training set. And so that's a good thing."}, {"start": 3170.0, "end": 3183.0, "text": " Basically, if you have a very good model, you'd expect that these probabilities should be near one, because that means that your model is correctly predicting what's going to come next, especially in the training set where you trained your model."}, {"start": 3183.0, "end": 3192.0, "text": " So now we'd like to think about how can we summarize these probabilities into a single number that measures the quality of this model."}, {"start": 3192.0, "end": 3202.0, "text": " Now when you look at the literature into maximum likelihood estimation and statistical modeling and so on, you'll see that what's typically used here is something called the likelihood."}, {"start": 3202.0, "end": 3218.0, "text": " And the likelihood is the product of all of these probabilities. And so the product of all of these probabilities is the likelihood. And it's really telling us that the probability of the entire dataset assigned by the model that we've trained."}, {"start": 3218.0, "end": 3230.0, "text": " And that is a measure of quality. So the product of these should be as high as possible when you are training the model and when you have a good model, your product of these probabilities should be very high."}, {"start": 3230.0, "end": 3241.0, "text": " Now because the product of these probabilities and is an unwieldy thing to work with, you can see that all of them are between 0 and 1. So your product of these probabilities will be a very tiny number."}, {"start": 3241.0, "end": 3248.0, "text": " So for convenience, what people work with usually is not the likelihood, but they work with what's called the log likelihood."}, {"start": 3248.0, "end": 3255.0, "text": " So the product of these is the likelihood to get the log likelihood. We just have to take the log of the probability."}, {"start": 3255.0, "end": 3264.0, "text": " And so the log of the probability here, I have the log of x from 0 to 1. The log is a, you see here monotonic transformation of the probability."}, {"start": 3264.0, "end": 3281.0, "text": " Where if you pass in 1, you get 0. So probability 1 gets your log probability of 0. 
And then as you go lower and lower probability, the log will grow more and more negative until all the way to negative infinity at 0."}, {"start": 3281.0, "end": 3290.0, "text": " So here we have a log prob, which is really just the torch.log of the probability. Let's print it out to get a sense of what that looks like."}, {"start": 3290.0, "end": 3297.0, "text": " Log prob, also with %.4f."}, {"start": 3297.0, "end": 3303.0, "text": " So as you can see, when we plug in probabilities that are very close to 1, some of our higher numbers, we get closer and closer to 0."}, {"start": 3303.0, "end": 3310.0, "text": " And then if we plug in very bad probabilities, we get a more and more negative number. That's bad."}, {"start": 3310.0, "end": 3325.0, "text": " And the reason we work with this is to a large extent convenience, right, because mathematically, if you have some product, a times b times c, of all these probabilities, right, the likelihood is the product of all these probabilities,"}, {"start": 3325.0, "end": 3340.0, "text": " then the log of these is just log of a plus log of b plus log of c, if you remember your logs from your high school or undergrad and so on."}, {"start": 3340.0, "end": 3349.0, "text": " So we have that, basically, the log of the product of probabilities, the log likelihood, is just the sum of the logs of the individual probabilities."}, {"start": 3349.0, "end": 3360.0, "text": " So log likelihood starts at 0, and then the log likelihood here we can just accumulate simply."}, {"start": 3360.0, "end": 3374.0, "text": " And then at the end we can print this, print the log likelihood, f-strings, maybe you're familiar with this."}, {"start": 3374.0, "end": 3381.0, "text": " So log likelihood is negative 38."}, {"start": 3381.0, "end": 3385.0, "text": " Now, what do we actually want?"}, {"start": 3385.0, "end": 3393.0, "text": " So how high can log likelihood get? It can go to zero.
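Here is a minimal sketch of the evaluation loop being built up here; it assumes `words`, the `stoi` character-to-index mapping, and the normalized 27x27 probability matrix `P` from earlier in the video already exist:

```python
import torch

# Assumes `words`, `stoi`, and the probability matrix `P` from earlier.
log_likelihood = 0.0
n = 0
for w in words[:3]:
    chs = ['.'] + list(w) + ['.']
    for ch1, ch2 in zip(chs, chs[1:]):
        ix1, ix2 = stoi[ch1], stoi[ch2]
        prob = P[ix1, ix2]             # probability the model assigns to this bigram
        logprob = torch.log(prob)
        log_likelihood += logprob      # log of a product = sum of the logs
        n += 1
        print(f'{ch1}{ch2}: {prob:.4f} {logprob:.4f}')

nll = -log_likelihood                  # negative log likelihood
print(f'{log_likelihood=}')
print(f'{nll=}')
print(f'{nll/n=}')                     # average negative log likelihood
```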
So when all the probabilities are one log likelihood to zero."}, {"start": 3393.0, "end": 3397.0, "text": " And then when all the probabilities are lower, this will grow more and more negative."}, {"start": 3397.0, "end": 3408.0, "text": " Now we don't actually like this because what we'd like is a loss function and a loss function has the semantics that low is good because we're trying to minimize the loss."}, {"start": 3408.0, "end": 3416.0, "text": " So we actually need to invert this and that's what gives us something called the negative log likelihood."}, {"start": 3416.0, "end": 3424.0, "text": " Negative log likelihood is just negative of the log likelihood."}, {"start": 3424.0, "end": 3429.0, "text": " These are f strings, by the way, if you'd like to look this up, negative log likelihood equals."}, {"start": 3429.0, "end": 3432.0, "text": " So the negative log likelihood now is just negative of it."}, {"start": 3432.0, "end": 3439.0, "text": " And so the negative log likelihood is a very nice loss function because the lowest it can get is zero."}, {"start": 3439.0, "end": 3444.0, "text": " And the higher it is, the worse off the predictions are that you're making."}, {"start": 3444.0, "end": 3453.0, "text": " And then one more modification to this that sometimes people do is that for convenience, they actually like to normalize by they like to make it an average instead of a sum."}, {"start": 3453.0, "end": 3458.0, "text": " And so here, let's just keep some counts as well."}, {"start": 3458.0, "end": 3461.0, "text": " So n plus equals one starts at zero."}, {"start": 3461.0, "end": 3467.0, "text": " And then here we can have sort of like a normalized log likelihood."}, {"start": 3467.0, "end": 3474.0, "text": " If we just normalize it by the count, then we will sort of get the average log likelihood."}, {"start": 3474.0, "end": 3481.0, "text": " So this would be usually our loss function here is this we would this is what we would use."}, {"start": 3481.0, "end": 3486.0, "text": " So our loss function for the training set assigned by the model is 2.4."}, {"start": 3486.0, "end": 3488.0, "text": " That's the quality of this model."}, {"start": 3488.0, "end": 3491.0, "text": " And the lower it is, the better off we are."}, {"start": 3491.0, "end": 3493.0, "text": " And the higher it is, the worse off we are."}, {"start": 3493.0, "end": 3502.0, "text": " And the job of our training is to find the parameters that minimize the negative log likelihood loss."}, {"start": 3502.0, "end": 3505.0, "text": " And that would be like a high quality model."}, {"start": 3505.0, "end": 3508.0, "text": " Okay, so to summarize, I actually wrote it out here."}, {"start": 3508.0, "end": 3515.0, "text": " So our goal is to maximize likelihood, which is the product of all the probabilities assigned by the model."}, {"start": 3515.0, "end": 3519.0, "text": " And we want to maximize this likelihood with respect to the model parameters."}, {"start": 3519.0, "end": 3523.0, "text": " And in our case, the model parameters here are defined in the table."}, {"start": 3523.0, "end": 3530.0, "text": " These numbers, the probabilities are the model parameters, sort of in our background language model so far."}, {"start": 3530.0, "end": 3534.0, "text": " But you have to keep in mind that here we are storing everything in a table format, the probabilities."}, {"start": 3534.0, "end": 3540.0, "text": " But what's coming up as a brief preview is that these numbers will not be kept explicitly."}, {"start": 3540.0, 
"end": 3543.0, "text": " But these numbers will be calculated by neural network."}, {"start": 3543.0, "end": 3544.0, "text": " So that's coming up."}, {"start": 3544.0, "end": 3548.0, "text": " And we want to change and tune the parameters of these neural networks."}, {"start": 3548.0, "end": 3553.0, "text": " We want to change these parameters to maximize the likelihood, the product of the probabilities."}, {"start": 3553.0, "end": 3559.0, "text": " Now, maximizing the likelihood is equivalent to maximizing the log likelihood because log is a monotonic function."}, {"start": 3559.0, "end": 3566.0, "text": " Here's the graph of log. And basically, all it is doing is it's just scaling your..."}, {"start": 3566.0, "end": 3569.0, "text": " You can look at it as just a scaling of the loss function."}, {"start": 3569.0, "end": 3575.0, "text": " And so the optimization problem here and here are actually equivalent because this is just a scaling."}, {"start": 3575.0, "end": 3577.0, "text": " You can look at it that way."}, {"start": 3577.0, "end": 3580.0, "text": " And so these are two identical optimization problems."}, {"start": 3580.0, "end": 3585.0, "text": " Maximizing the log likelihood is equivalent to minimizing the negative log likelihood."}, {"start": 3585.0, "end": 3592.0, "text": " And then in practice, people actually minimize the average negative log likelihood to get numbers like 2.4."}, {"start": 3592.0, "end": 3596.0, "text": " And then this summarizes the quality of your model."}, {"start": 3596.0, "end": 3599.0, "text": " And we'd like to minimize it and make it as small as possible."}, {"start": 3599.0, "end": 3602.0, "text": " And the lowest it can get is zero."}, {"start": 3602.0, "end": 3609.0, "text": " And the lower it is, the better off your model is because it's signing high probabilities to your data."}, {"start": 3609.0, "end": 3614.0, "text": " Now let's estimate the probability over the entire training set just to make sure that we get something around 2.4."}, {"start": 3614.0, "end": 3616.0, "text": " Let's run this over the entire..."}, {"start": 3616.0, "end": 3620.0, "text": " Oops! Let's take out the print statement as well."}, {"start": 3620.0, "end": 3624.0, "text": " Okay, 2.45 or the entire training set."}, {"start": 3624.0, "end": 3628.0, "text": " Now what I'd like to show you is that you can actually evaluate the probability for any word that you want."}, {"start": 3628.0, "end": 3635.0, "text": " Like for example, if we just test a single word, Andre, and bring back the print statement,"}, {"start": 3635.0, "end": 3639.0, "text": " then you see that Andre is actually kind of like an unlikely word."}, {"start": 3639.0, "end": 3644.0, "text": " Like on average, we take three log probability to represent it."}, {"start": 3644.0, "end": 3650.0, "text": " And roughly that's because EJ apparently is very uncommon as an example."}, {"start": 3650.0, "end": 3653.0, "text": " Now, think through this."}, {"start": 3653.0, "end": 3660.0, "text": " I'm going to take Andre and I append Q, and I test the probability of it. Andre Q."}, {"start": 3660.0, "end": 3663.0, "text": " We actually get infinity."}, {"start": 3663.0, "end": 3669.0, "text": " And that's because JQ has a 0% probability according to our model. So the log likelihood..."}, {"start": 3669.0, "end": 3674.0, "text": " So the log of 0 will be negative infinity. We get infinite loss."}, {"start": 3674.0, "end": 3679.0, "text": " So this is kind of undesirable, right? 
Because we plugged in a string that could be like a somewhat reasonable name."}, {"start": 3679.0, "end": 3686.0, "text": " But basically what this is saying is that this model is exactly 0% likely to predict this name."}, {"start": 3686.0, "end": 3690.0, "text": " And our loss is infinity on this example."}, {"start": 3690.0, "end": 3698.0, "text": " And really, the reason for that is that J is followed by Q 0 times, where is Q?"}, {"start": 3698.0, "end": 3702.0, "text": " JQ is 0. And so JQ is 0% likely."}, {"start": 3702.0, "end": 3705.0, "text": " So this is actually kind of gross and people don't like this too much."}, {"start": 3705.0, "end": 3710.0, "text": " To fix this, there's a very simple fix that people like to do to sort of smooth out your model a little bit."}, {"start": 3710.0, "end": 3712.0, "text": " And it's called model smoothing."}, {"start": 3712.0, "end": 3716.0, "text": " And roughly what's happening is that we will add some eight counts."}, {"start": 3716.0, "end": 3721.0, "text": " So imagine adding a count of one to everything."}, {"start": 3721.0, "end": 3727.0, "text": " So we add a count of one like this. And then we recalculate the probabilities."}, {"start": 3727.0, "end": 3730.0, "text": " And that's model smoothing. And you can add as much as you like."}, {"start": 3730.0, "end": 3733.0, "text": " You can add five and that will give you a smoother model."}, {"start": 3733.0, "end": 3737.0, "text": " And the more you add here, the more uniform model you're going to have."}, {"start": 3737.0, "end": 3741.0, "text": " And the less you add, the more peaked model you're going to have."}, {"start": 3741.0, "end": 3746.0, "text": " Of course, so one is like a pretty decent count to add."}, {"start": 3746.0, "end": 3751.0, "text": " And that will ensure that there will be no zeros in our probability matrix P."}, {"start": 3751.0, "end": 3754.0, "text": " And so this will of course change the generations a little bit."}, {"start": 3754.0, "end": 3757.0, "text": " In this case, it didn't but it in principle it could."}, {"start": 3757.0, "end": 3761.0, "text": " But what that's going to do now is that nothing will be infinity unlikely."}, {"start": 3761.0, "end": 3765.0, "text": " So now our model will predict some other probability."}, {"start": 3765.0, "end": 3768.0, "text": " And we see that JQ now has a very small probability."}, {"start": 3768.0, "end": 3772.0, "text": " So the model still finds it's very surprising that this was a word or a by gram."}, {"start": 3772.0, "end": 3774.0, "text": " But we don't get negative infinity."}, {"start": 3774.0, "end": 3777.0, "text": " So it's kind of like a nice fix that people like to apply sometimes and it's called models moving."}, {"start": 3777.0, "end": 3782.0, "text": " Okay, so we've now trained a respectable by gram character level language model."}, {"start": 3782.0, "end": 3788.0, "text": " And we saw that we both sort of trained the model by looking at the counts of all the by grams."}, {"start": 3788.0, "end": 3792.0, "text": " And normalizing the rows to get probability distributions."}, {"start": 3792.0, "end": 3800.0, "text": " So we saw that we can also then use those parameters of this model to perform sampling of new words."}, {"start": 3800.0, "end": 3803.0, "text": " So we sample new names according to those distributions."}, {"start": 3803.0, "end": 3806.0, "text": " And we also saw that we can evaluate the quality of this model."}, {"start": 3806.0, "end": 3810.0, "text": " And the quality of 
this model is summarized in a single number, which is the negative log likelihood."}, {"start": 3810.0, "end": 3820.0, "text": " And the lower this number is the better the model is because it is giving high probabilities to the actual next characters in all the by grams in our training set."}, {"start": 3820.0, "end": 3822.0, "text": " So that's all well and good."}, {"start": 3822.0, "end": 3826.0, "text": " But we've arrived at this model explicitly by doing something that felt sensible."}, {"start": 3826.0, "end": 3828.0, "text": " We were just performing counts."}, {"start": 3828.0, "end": 3831.0, "text": " And then we were normalizing those counts."}, {"start": 3831.0, "end": 3834.0, "text": " Now what I would like to do is I would like to take an alternative approach."}, {"start": 3834.0, "end": 3838.0, "text": " We will end up in a very, very similar position, but the approach will look very different."}, {"start": 3838.0, "end": 3844.0, "text": " Because I would like to cast the problem of by gram character level language modeling into the neural network framework."}, {"start": 3844.0, "end": 3850.0, "text": " And in neural network framework, we're going to approach things slightly differently, but again, end up in a very similar spot."}, {"start": 3850.0, "end": 3852.0, "text": " I'll go into that later."}, {"start": 3852.0, "end": 3857.0, "text": " Now, our neural network is going to be a still a by gram character level language model."}, {"start": 3857.0, "end": 3860.0, "text": " So it receives a single character as an input."}, {"start": 3860.0, "end": 3864.0, "text": " Then there's neural network with some weights or some parameters w."}, {"start": 3864.0, "end": 3869.0, "text": " And it's going to output the probability distribution over the next character in a sequence."}, {"start": 3869.0, "end": 3875.0, "text": " It's going to make guesses as to what is likely to follow this character that was input to the model."}, {"start": 3875.0, "end": 3881.0, "text": " And then in addition to that, we're going to be able to evaluate any setting of the parameters of the neural net,"}, {"start": 3881.0, "end": 3885.0, "text": " because we have a loss function, the negative lot likelihood."}, {"start": 3885.0, "end": 3889.0, "text": " So we're going to take a look at its probability distributions, and we're going to use the labels,"}, {"start": 3889.0, "end": 3894.0, "text": " which are basically just the identity of the next character in that by gram, the second character."}, {"start": 3894.0, "end": 3903.0, "text": " So knowing what the second character actually comes next in the by gram allows us to then look at what, how high of probability the model assigns to that character."}, {"start": 3903.0, "end": 3906.0, "text": " And then we of course want the probability to be very high."}, {"start": 3906.0, "end": 3910.0, "text": " And that is another way of saying that the loss is low."}, {"start": 3910.0, "end": 3915.0, "text": " So we're going to use gradient based optimization then to tune the parameters of this network,"}, {"start": 3915.0, "end": 3918.0, "text": " because we have the loss function, and we're going to minimize it."}, {"start": 3918.0, "end": 3924.0, "text": " So we're going to tune the weights so that the neural net is correctly predicting the probabilities for the next character."}, {"start": 3924.0, "end": 3926.0, "text": " So let's get started."}, {"start": 3926.0, "end": 3929.0, "text": " The first thing I want to do is I want to compile the training set of this 
neural network."}, {"start": 3929.0, "end": 3936.0, "text": " So create the training set of all the by grams."}, {"start": 3936.0, "end": 3947.0, "text": " And here I'm going to copy paste this code, because this code iterates over all the by grams."}, {"start": 3947.0, "end": 3950.0, "text": " So here we start with the words, we iterate over all the by grams."}, {"start": 3950.0, "end": 3953.0, "text": " And previously, as you recall, we did the counts."}, {"start": 3953.0, "end": 3956.0, "text": " But now we're not going to do counts. We're just creating a training set."}, {"start": 3956.0, "end": 3962.0, "text": " Now this training set will be made up of two lists."}, {"start": 3962.0, "end": 3969.0, "text": " We have the inputs and the targets, the labels."}, {"start": 3969.0, "end": 3973.0, "text": " And these by grams will denote x, y. Those are the characters, right?"}, {"start": 3973.0, "end": 3977.0, "text": " And so we're given the first character of the by gram, and then we're trying to predict the next one."}, {"start": 3977.0, "end": 3979.0, "text": " Both of these are going to be integers."}, {"start": 3979.0, "end": 3987.0, "text": " So here we'll take x's that append is just x1, y's that append, x2."}, {"start": 3987.0, "end": 3991.0, "text": " And then here we actually don't want lists of integers."}, {"start": 3991.0, "end": 3994.0, "text": " We will create tensors out of these."}, {"start": 3994.0, "end": 4001.0, "text": " So x's is torched up tensor of x's, and y's is torched up tensor of y's."}, {"start": 4001.0, "end": 4004.0, "text": " And then we don't actually want to take all the words just yet,"}, {"start": 4004.0, "end": 4007.0, "text": " because I want everything to be manageable."}, {"start": 4007.0, "end": 4011.0, "text": " So let's just do the first word, which is m, i."}, {"start": 4011.0, "end": 4015.0, "text": " And then it's clear what these x's and y's would be."}, {"start": 4015.0, "end": 4019.0, "text": " Here, let me print character one character two."}, {"start": 4019.0, "end": 4022.0, "text": " Just so you see what's going on here."}, {"start": 4022.0, "end": 4028.0, "text": " So the by grams of these characters is dot e, e m, m, m, a dot."}, {"start": 4028.0, "end": 4034.0, "text": " So this single word, as I mentioned, has one, two, three, four, five examples for our neural network."}, {"start": 4034.0, "end": 4039.0, "text": " There are five separate examples in m, and those examples I show my is here."}, {"start": 4039.0, "end": 4042.0, "text": " When the input to the neural neural network is integer zero,"}, {"start": 4042.0, "end": 4047.0, "text": " the desired label is integer five, which corresponds to e."}, {"start": 4047.0, "end": 4054.0, "text": " When the input to the neural network is five, we want its weights to be arranged so that 13 gets a very high probability."}, {"start": 4054.0, "end": 4058.0, "text": " When 13 is put in, we want 13 to have a high probability."}, {"start": 4058.0, "end": 4062.0, "text": " When 13 is put in, we also want one to have a high probability."}, {"start": 4062.0, "end": 4066.0, "text": " When one is input, we want zero to have a very high probability."}, {"start": 4066.0, "end": 4072.0, "text": " So there are five separate input examples to a neural net in this data set."}, {"start": 4074.0, "end": 4080.0, "text": " I wanted to add a tangent of a note of caution to be careful with a lot of the APIs of some of these frameworks."}, {"start": 4080.0, "end": 4087.0, "text": " You saw me silently use torched 
dot tensor with a lowercase t, and the output looked right."}, {"start": 4087.0, "end": 4091.0, "text": " But you should be aware that there are actually two ways of constructing a tensor."}, {"start": 4091.0, "end": 4098.0, "text": " There's torch.tensor with a lowercase t, and there's also the torch.Tensor class with a capital T, which you can also construct."}, {"start": 4098.0, "end": 4100.0, "text": " So you can actually call both."}, {"start": 4100.0, "end": 4105.0, "text": " You can also do torch.Tensor with a capital T, and you get an xs and ys as well."}, {"start": 4105.0, "end": 4107.0, "text": " So that's not confusing at all."}, {"start": 4107.0, "end": 4111.0, "text": " There are threads on what is the difference between these two."}, {"start": 4111.0, "end": 4116.0, "text": " And unfortunately, the docs are just like not clear on the difference."}, {"start": 4116.0, "end": 4123.0, "text": " And when you look at the docs of the lowercase tensor: constructs a tensor with no autograd history by copying data."}, {"start": 4123.0, "end": 4126.0, "text": " It's just like, it doesn't, it doesn't make sense."}, {"start": 4126.0, "end": 4131.0, "text": " So the actual difference, as far as I can tell, is explained eventually in this random thread that you can google."}, {"start": 4131.0, "end": 4138.0, "text": " And really, it comes down to, I believe, that, where is this?"}, {"start": 4138.0, "end": 4144.0, "text": " torch.tensor infers the dtype, the data type, automatically, while torch.Tensor just returns a FloatTensor."}, {"start": 4144.0, "end": 4147.0, "text": " I would recommend to stick to torch.tensor with a lowercase t."}, {"start": 4147.0, "end": 4157.0, "text": " So, indeed, we see that when I construct this with a capital T, the data type here of xs is float32."}, {"start": 4157.0, "end": 4166.0, "text": " But with torch.tensor with a lowercase t, you see how xs.dtype is now integer."}, {"start": 4166.0, "end": 4174.0, "text": " So, it's advised that you use the lowercase t, and you can read more about it if you like in some of these threads."}, {"start": 4174.0, "end": 4182.0, "text": " But basically, I'm pointing out some of these things because I want to caution you, and I want you to get used to reading a lot of documentation,"}, {"start": 4182.0, "end": 4187.0, "text": " and reading through a lot of Q and A's and threads like this."}, {"start": 4187.0, "end": 4192.0, "text": " And, you know, some of the stuff is unfortunately not easy and not very well documented, and you have to be careful out there."}, {"start": 4192.0, "end": 4197.0, "text": " What we want here is integers, because that's what makes sense."}, {"start": 4197.0, "end": 4201.0, "text": " And so, torch.tensor with a lowercase t is what we are using."}, {"start": 4201.0, "end": 4206.0, "text": " Okay, now we want to think through how we're going to feed in these examples into a neural network."}, {"start": 4206.0, "end": 4212.0, "text": " Now, it's not quite as straightforward as plugging it in, because these examples right now are integers."}, {"start": 4212.0, "end": 4220.0, "text": " So, there's like a 0, 5, or 13.
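Circling back to the lowercase/uppercase distinction from a moment ago, a small sketch of the dtype difference:

```python
import torch

xs = [0, 5, 13, 13, 1]

a = torch.tensor(xs)   # lowercase t: infers the dtype from the data
b = torch.Tensor(xs)   # capital T: always gives back a float32 FloatTensor

print(a.dtype)   # torch.int64
print(b.dtype)   # torch.float32
```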
It gives us the index of the character, and you can't just plug an integer index into a neural net."}, {"start": 4220.0, "end": 4227.0, "text": " These neural nets are sort of made up of these neurons, and these neurons have weights."}, {"start": 4227.0, "end": 4234.0, "text": " And, as you saw in micrograd, these weights act multiplicatively on the inputs, w times x plus b, there are tanh's, and so on."}, {"start": 4234.0, "end": 4241.0, "text": " And so, it doesn't really make sense to have an input neuron take on integer values that you feed in and then multiply with weights."}, {"start": 4241.0, "end": 4247.0, "text": " So, instead, a common way of encoding integers is what's called one-hot encoding."}, {"start": 4247.0, "end": 4257.0, "text": " In one-hot encoding, we take an integer like 13, and we create a vector that is all zeros, except for the 13th dimension, which we turn to a 1."}, {"start": 4257.0, "end": 4261.0, "text": " And then that vector can feed into a neural net."}, {"start": 4261.0, "end": 4270.0, "text": " Now, conveniently, PyTorch actually has something called the one_hot function inside torch.nn.functional."}, {"start": 4270.0, "end": 4275.0, "text": " It takes a tensor made up of integers."}, {"start": 4275.0, "end": 4287.0, "text": " Long is an integer type, and it also takes a number of classes, which is how large you want your vector to be."}, {"start": 4287.0, "end": 4294.0, "text": " So, here, let's import torch.nn.functional as F. This is a common way of importing it."}, {"start": 4294.0, "end": 4300.0, "text": " And then let's do F.one_hot, and we feed in the integers that we want to encode."}, {"start": 4300.0, "end": 4307.0, "text": " So, we can actually feed in the entire array of xs, and we can tell it that num_classes is 27."}, {"start": 4307.0, "end": 4314.0, "text": " So, it doesn't have to try to guess it. It may have guessed that it's only 13, and would give us an incorrect result."}, {"start": 4314.0, "end": 4321.0, "text": " So, this is the one-hot. Let's call this xenc, for x encoded."}, {"start": 4321.0, "end": 4326.0, "text": " And then we see that xenc.shape is 5 by 27."}, {"start": 4326.0, "end": 4335.0, "text": " And we can also visualize it with plt.imshow of xenc to make it a little bit more clear, because this is a little messy."}, {"start": 4335.0, "end": 4340.0, "text": " So, we see that we've encoded all the five examples into vectors."}, {"start": 4340.0, "end": 4346.0, "text": " We have five examples, so we have five rows, and each row here is now an example into a neural net."}, {"start": 4346.0, "end": 4351.0, "text": " And we see that the appropriate bit is turned on as a one, and everything else is zero."}, {"start": 4351.0, "end": 4358.0, "text": " So, here, for example, the zero-th bit is turned on, the fifth bit is turned on,"}, {"start": 4358.0, "end": 4364.0, "text": " the 13th bits are turned on for both of these examples, and then the first bit here is turned on."}, {"start": 4364.0, "end": 4372.0, "text": " So, that's how we can encode integers into vectors, and then these vectors can feed into neural nets."}, {"start": 4372.0, "end": 4377.0, "text": " One more issue to be careful with here, by the way, is, let's look at the data type of xenc."}, {"start": 4377.0, "end": 4383.0, "text": " We always want to be careful with data types.
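A sketch of the one-hot encoding call just described, using the five integer inputs from the first word:

```python
import torch
import torch.nn.functional as F

xs = torch.tensor([0, 5, 13, 13, 1])      # the five input indices for emma

xenc = F.one_hot(xs, num_classes=27)      # shape (5, 27), a single 1 per row
print(xenc.shape)                         # torch.Size([5, 27])
print(xenc.dtype)                         # torch.int64 -- still an integer type
```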
What would you expect xenc's data type to be?"}, {"start": 4383.0, "end": 4386.0, "text": " When we're plugging numbers into neural nets, we don't want them to be integers."}, {"start": 4386.0, "end": 4390.0, "text": " We want them to be floating point numbers that can take on various values."}, {"start": 4390.0, "end": 4394.0, "text": " But the dtype here is actually a 64-bit integer."}, {"start": 4394.0, "end": 4399.0, "text": " And the reason for that, I suspect, is that one_hot received a 64-bit integer here,"}, {"start": 4399.0, "end": 4402.0, "text": " and it returned the same data type."}, {"start": 4402.0, "end": 4408.0, "text": " And when you look at the signature of one_hot, it doesn't even take a desired data type for the output tensor."}, {"start": 4408.0, "end": 4414.0, "text": " And so we can't, like in a lot of functions in torch, do something like dtype equals torch.float32,"}, {"start": 4414.0, "end": 4418.0, "text": " which is what we want, but one_hot does not support that."}, {"start": 4418.0, "end": 4423.0, "text": " So instead, we're going to want to cast this to float like this."}, {"start": 4423.0, "end": 4426.0, "text": " So that everything is the same."}, {"start": 4426.0, "end": 4430.0, "text": " Everything looks the same, but the dtype is float32."}, {"start": 4430.0, "end": 4433.0, "text": " And floats can feed into neural nets."}, {"start": 4433.0, "end": 4436.0, "text": " So now let's construct our first neuron."}, {"start": 4436.0, "end": 4440.0, "text": " This neuron will look at these input vectors."}, {"start": 4440.0, "end": 4446.0, "text": " And as you remember from micrograd, these neurons basically perform a very simple function, Wx plus B,"}, {"start": 4446.0, "end": 4449.0, "text": " where Wx is a dot product."}, {"start": 4449.0, "end": 4452.0, "text": " So we can achieve the same thing here."}, {"start": 4452.0, "end": 4455.0, "text": " Let's first define the weights of this neuron, basically."}, {"start": 4455.0, "end": 4459.0, "text": " These are the initial weights, at initialization, for this neuron."}, {"start": 4459.0, "end": 4461.0, "text": " Let's initialize them with torch.randn."}, {"start": 4461.0, "end": 4469.0, "text": " torch.randn fills a tensor with random numbers drawn from a normal distribution."}, {"start": 4469.0, "end": 4474.0, "text": " And a normal distribution has a probability density function like this."}, {"start": 4474.0, "end": 4479.0, "text": " And so most of the numbers drawn from this distribution will be around zero,"}, {"start": 4479.0, "end": 4482.0, "text": " but some of them will be as high as almost three and so on."}, {"start": 4482.0, "end": 4486.0, "text": " And very few numbers will be above three in magnitude."}, {"start": 4486.0, "end": 4490.0, "text": " So it needs to take a size as an input here."}, {"start": 4490.0, "end": 4494.0, "text": " And I'm going to use the size to be 27 by 1."}, {"start": 4494.0, "end": 4498.0, "text": " So 27 by 1, and then let's visualize W."}, {"start": 4498.0, "end": 4503.0, "text": " So W is a column vector of 27 numbers."}, {"start": 4503.0, "end": 4508.0, "text": " And these weights are then multiplied by the inputs."}, {"start": 4508.0, "end": 4512.0, "text": " So now to perform this multiplication, we can take xenc,"}, {"start": 4512.0, "end": 4515.0, "text": " and we can multiply it with W."}, {"start": 4515.0, "end": 4520.0, "text": " This is a matrix multiplication operator in PyTorch."}, {"start": 4520.0, "end": 4523.0,
"text": " And the output of this operation is 5 by 1."}, {"start": 4523.0, "end": 4526.0, "text": " The reason it's 5 by 5 is the following."}, {"start": 4526.0, "end": 4529.0, "text": " We took x encoding, which is 5 by 27,"}, {"start": 4529.0, "end": 4533.0, "text": " and we multiplied it by 27 by 1."}, {"start": 4533.0, "end": 4540.0, "text": " And in matrix multiplication, you see that the output will become 5 by 1,"}, {"start": 4540.0, "end": 4545.0, "text": " because these 27 will multiply and add."}, {"start": 4545.0, "end": 4549.0, "text": " So basically what we're seeing here out of this operation"}, {"start": 4549.0, "end": 4558.0, "text": " is we are seeing the five activations of this neuron on these five inputs."}, {"start": 4558.0, "end": 4561.0, "text": " And we've evaluated all of them in parallel."}, {"start": 4561.0, "end": 4564.0, "text": " We didn't feed in just a single input to the single neuron."}, {"start": 4564.0, "end": 4568.0, "text": " We fed in simultaneously all the five inputs into the same neuron."}, {"start": 4568.0, "end": 4573.0, "text": " And in parallel, PyTorch has evaluated the Wx plus B,"}, {"start": 4573.0, "end": 4576.0, "text": " but here is just Wx. There's no bias."}, {"start": 4576.0, "end": 4581.0, "text": " It is value W times x for all of them, independently."}, {"start": 4581.0, "end": 4584.0, "text": " Now instead of a single neuron, though, I would like to have 27 neurons."}, {"start": 4584.0, "end": 4587.0, "text": " And I'll show you in a second why I've gone 27 neurons."}, {"start": 4587.0, "end": 4592.0, "text": " So instead of having just a one here, which is indicating this presence of one single neuron,"}, {"start": 4592.0, "end": 4594.0, "text": " we can use 27."}, {"start": 4594.0, "end": 4597.0, "text": " And then when W is 27 by 27,"}, {"start": 4597.0, "end": 4606.0, "text": " this will in parallel evaluate all the 27 neurons on all five inputs,"}, {"start": 4606.0, "end": 4609.0, "text": " giving us a much better, much, much bigger result."}, {"start": 4609.0, "end": 4613.0, "text": " So now what we've done is 5 by 27 multiplied, 27 by 27,"}, {"start": 4613.0, "end": 4617.0, "text": " and the output of this is now 5 by 27."}, {"start": 4617.0, "end": 4623.0, "text": " So we can see that the shape of this is 5 by 27."}, {"start": 4623.0, "end": 4626.0, "text": " So what is every element here telling us?"}, {"start": 4626.0, "end": 4632.0, "text": " It's telling us for every one of 27 neurons that we created."}, {"start": 4632.0, "end": 4639.0, "text": " What is the firing rate of those neurons on every one of those five examples?"}, {"start": 4639.0, "end": 4651.0, "text": " So the element, for example, 3 comma 13 is giving us the firing rate of the 13th neuron looking at the third input."}, {"start": 4651.0, "end": 4658.0, "text": " And the way this was achieved is by a dot product between the third input"}, {"start": 4658.0, "end": 4664.0, "text": " and the 13th column of this W matrix here."}, {"start": 4664.0, "end": 4671.0, "text": " So using a major multiplication, we can very efficiently evaluate the dot product"}, {"start": 4671.0, "end": 4675.0, "text": " between lots of input examples in a batch."}, {"start": 4675.0, "end": 4681.0, "text": " And lots of neurons where all of those neurons have weights in the columns of those W's."}, {"start": 4681.0, "end": 4685.0, "text": " And in major multiplication, we're just doing those dot products in parallel."}, {"start": 4685.0, "end": 4692.0, "text": " Just to show you that 
this is the case, we can take X and we can take the third row."}, {"start": 4692.0, "end": 4697.0, "text": " And we can take the W and take its 13th column."}, {"start": 4697.0, "end": 4708.0, "text": " And then we can do X and get 3, element wise multiply with W at 13 and sum that up."}, {"start": 4708.0, "end": 4713.0, "text": " This WX plus B. Well, there's no plus B. It's just WX dot product."}, {"start": 4713.0, "end": 4715.0, "text": " And that's this number."}, {"start": 4715.0, "end": 4720.0, "text": " So you see that this is just being done efficiently by the matrix multiplication operation"}, {"start": 4720.0, "end": 4726.0, "text": " for all the input examples and for all the output neurons of this first layer."}, {"start": 4726.0, "end": 4732.0, "text": " Okay, so we fed our 27 dimensional inputs into a first layer of a neural net that has 27 neurons."}, {"start": 4732.0, "end": 4736.0, "text": " Right? So we have 27 inputs and now we have 27 neurons."}, {"start": 4736.0, "end": 4743.0, "text": " These neurons perform W times X. They don't have a bias and they don't have a non-linearity like 10H."}, {"start": 4743.0, "end": 4746.0, "text": " We're going to leave them to be a linear layer."}, {"start": 4746.0, "end": 4750.0, "text": " In addition to that, we're not going to have any other layers. This is going to be it."}, {"start": 4750.0, "end": 4756.0, "text": " It's just going to be the dumbest, smallest, simplest neural net, which is just a single linear layer."}, {"start": 4756.0, "end": 4761.0, "text": " And now I'd like to explain what I want those 27 outputs to be."}, {"start": 4761.0, "end": 4767.0, "text": " Intuitively, what we're trying to produce here for every single input example is we're trying to produce some kind of a probability distribution"}, {"start": 4767.0, "end": 4771.0, "text": " for the next character in a sequence. And there's 27 of them."}, {"start": 4771.0, "end": 4779.0, "text": " But we have to come up with like precise semantics for exactly how we're going to interpret these 27 numbers that these neural state come on."}, {"start": 4779.0, "end": 4785.0, "text": " Now intuitively, you see here that these numbers are negative and some of them are positive, etc."}, {"start": 4785.0, "end": 4794.0, "text": " And that's because these are coming out of the neural net layer initialized with these normal distribution parameters."}, {"start": 4794.0, "end": 4801.0, "text": " But what we want is we want something like we had here, like each row here told us the counts."}, {"start": 4801.0, "end": 4806.0, "text": " And then we normalize the counts to get probabilities. 
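To make the "matrix multiplication is many dot products in parallel" point concrete, here is a sketch checking one entry by hand, using random stand-ins with the shapes from the video:

```python
import torch

xenc = torch.randn(5, 27)     # stand-in for the encoded inputs (as floats)
W = torch.randn(27, 27)       # stand-in for the weight matrix

out = xenc @ W                # shape (5, 27): 5 examples x 27 neurons

# Entry [3, 13] is the dot product of input row 3 with weight column 13.
manual = (xenc[3] * W[:, 13]).sum()
print(out[3, 13].item(), manual.item())   # the same number, up to float error
```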
And we want something similar to come out of the neural net."}, {"start": 4806.0, "end": 4810.0, "text": " But what we just have right now is just some negative and positive numbers."}, {"start": 4810.0, "end": 4815.0, "text": " Now, we want those numbers to somehow represent the probabilities for the next character."}, {"start": 4815.0, "end": 4819.0, "text": " But you see that probabilities, they have a special structure."}, {"start": 4819.0, "end": 4823.0, "text": " They're positive numbers and they sum to 1."}, {"start": 4823.0, "end": 4826.0, "text": " And so that doesn't just come out of a neural net."}, {"start": 4826.0, "end": 4833.0, "text": " And then they can't be counts because these counts are positive and counts are integers."}, {"start": 4833.0, "end": 4837.0, "text": " So counts are also not really a good thing to output from a neural net."}, {"start": 4837.0, "end": 4850.0, "text": " So instead what the neural net is going to output and how we are going to interpret the 27 numbers is that these 27 numbers are giving us log counts, basically."}, {"start": 4850.0, "end": 4856.0, "text": " So instead of giving us counts directly, lock in this table, they're giving us log counts."}, {"start": 4856.0, "end": 4861.0, "text": " And to get the counts, we're going to take the log counts and we're going to exponentially eat them."}, {"start": 4861.0, "end": 4867.0, "text": " Now, exponentiation takes the following form."}, {"start": 4867.0, "end": 4873.0, "text": " It takes numbers that are negative or they are positive. It takes the entire real line."}, {"start": 4873.0, "end": 4880.0, "text": " And then if you plug in negative numbers, you're going to get e to the x, which is always below 1."}, {"start": 4880.0, "end": 4883.0, "text": " So you're getting numbers lower than 1."}, {"start": 4883.0, "end": 4891.0, "text": " And if you plug in numbers greater than 0, you're getting numbers greater than 1 all the way growing to the infinity."}, {"start": 4891.0, "end": 4893.0, "text": " And this here grows to 0."}, {"start": 4893.0, "end": 4909.0, "text": " So basically we're going to take these numbers here. And instead of them being positive and negative in all of the place, we're going to interpret them as log counts."}, {"start": 4909.0, "end": 4913.0, "text": " And then we're going to element wise, exponentiate these numbers."}, {"start": 4913.0, "end": 4916.0, "text": " Exponentiating them now gives us something like this."}, {"start": 4916.0, "end": 4924.0, "text": " And you see that these numbers now, because they went through an exponent, all the negative numbers turned into numbers below 1, like 0,338."}, {"start": 4924.0, "end": 4930.0, "text": " And all the positive numbers originally turned into even more positive numbers, sort of greater than 1."}, {"start": 4930.0, "end": 4940.0, "text": " So like for example, 7 is some positive number over here. That is greater than 0."}, {"start": 4940.0, "end": 4951.0, "text": " But exponentiated outputs here basically give us something that we can use and interpret as the equivalent of counts originally."}, {"start": 4951.0, "end": 4956.0, "text": " So you see these counts here, 1, 12, 7, 51, 1, etc."}, {"start": 4956.0, "end": 4961.0, "text": " The neural net is kind of now predicting counts."}, {"start": 4961.0, "end": 4967.0, "text": " And these counts are positive numbers. They can never be below 0. 
So that makes sense."}, {"start": 4967.0, "end": 4974.0, "text": " And they can now take on various values depending on the settings of W."}, {"start": 4974.0, "end": 4981.0, "text": " So let me break this down. We're going to interpret these to be the log counts."}, {"start": 4981.0, "end": 4988.0, "text": " In other words, for this, that is often used is so called logits. These are logits log counts."}, {"start": 4988.0, "end": 5000.0, "text": " And these will be sort of the counts, logits exponentiated. And this is equivalent to the n matrix, sort of the n array that we used previously."}, {"start": 5000.0, "end": 5012.0, "text": " Remember this was the n. This is the array of counts. And each row here are the counts for the next character, sort of."}, {"start": 5012.0, "end": 5019.0, "text": " So those are the counts. And now the probabilities are just the counts normalized."}, {"start": 5019.0, "end": 5026.0, "text": " And so I'm not going to find the same. But basically I'm not going to scroll all the place."}, {"start": 5026.0, "end": 5034.0, "text": " We've already done this. We want to count that sum along the first dimension. And we want to keep them as true."}, {"start": 5034.0, "end": 5043.0, "text": " We've went over this. And this is how we normalize the rows of our counts matrix to get our probabilities."}, {"start": 5043.0, "end": 5061.0, "text": " So now these are the probabilities. And these are the counts that we have currently. And now when I show the probabilities, you see that every row here, of course, will sum to 1."}, {"start": 5061.0, "end": 5075.0, "text": " Because they're normalized. And the shape of this is 5 by 27. And so really what we've achieved is for every one of our five examples, we now have a row that came out of a neural net."}, {"start": 5075.0, "end": 5084.0, "text": " And because of the transformations here, we made sure that this output of this neural net now are probabilities, or we can interpret to be probabilities."}, {"start": 5084.0, "end": 5094.0, "text": " So our WX here gave us logits. And then we interpret those to be log counts. We exponentiate to get something that looks like counts."}, {"start": 5094.0, "end": 5100.0, "text": " And then we normalize those counts to get a probability distribution. And all of these are differentiable operations."}, {"start": 5100.0, "end": 5110.0, "text": " So what we've done now is we are taking inputs. We have differentiable operations that we can back propagate through. And we're getting out probability distributions."}, {"start": 5110.0, "end": 5128.0, "text": " So, for example, for the zero example that fed in, right, which was the zero example here was a one-half vector of zero. And it basically corresponded to feeding in this example here."}, {"start": 5128.0, "end": 5143.0, "text": " So we're feeding an adot into a neural net. And the way we fed the dot into a neural net is that we first got its index. Then we one hot encoded it. Then it went into the neural net. And out came this distribution of probabilities."}, {"start": 5143.0, "end": 5159.0, "text": " And its shape is 27, there's 27 numbers. 
And we're going to interpret this as the neural net's assignment for how likely every one of these 27 characters is to come next."}, {"start": 5159.0, "end": 5167.0, "text": " And as we tune the weights W, we're going to be, of course, getting different probabilities out for any character that you input."}, {"start": 5167.0, "end": 5177.0, "text": " And so now the question is just, can we optimize and find a good W such that the probabilities coming out are pretty good? And the way we measure pretty good is by the loss function."}, {"start": 5177.0, "end": 5182.0, "text": " Okay, so I organized everything into a single summary so that hopefully it's a bit more clear. So it starts here."}, {"start": 5182.0, "end": 5192.0, "text": " We have an input dataset. We have some inputs to the neural net, and we have some labels for the correct next character in a sequence. These are integers."}, {"start": 5192.0, "end": 5207.0, "text": " Here, I'm using torch generators now so that you see the same numbers that I see. And I'm generating the weights of 27 neurons, and each neuron here receives 27 inputs."}, {"start": 5207.0, "end": 5215.0, "text": " Then here we're going to plug in all the input examples xs into the neural net. So here, this is the forward pass."}, {"start": 5215.0, "end": 5229.0, "text": " First, we have to encode all of the inputs into one-hot representations. So we have 27 classes. We pass in these integers. And xenc becomes an array that is 5 by 27."}, {"start": 5229.0, "end": 5236.0, "text": " Zeros, except for a few ones. We then multiply this by the first layer of the neural net to get logits."}, {"start": 5236.0, "end": 5251.0, "text": " We exponentiate the logits to get fake counts, sort of, and normalize these counts to get probabilities. So these last two lines, by the way, here, are called the softmax, which I pulled up here."}, {"start": 5251.0, "end": 5263.0, "text": " Softmax is a very often used layer in a neural net that takes these z's, which are logits, exponentiates them, and then normalizes."}, {"start": 5263.0, "end": 5278.0, "text": " It's a way of taking outputs of a neural net layer, and these outputs can be positive or negative, and it outputs probability distributions. It outputs something that always sums to one and is all positive numbers, just like probabilities."}, {"start": 5278.0, "end": 5288.0, "text": " So it's going to look like a normalization function, if you want to think of it that way. And you can put it on top of any other linear layer inside a neural net, and it basically makes the neural net output probabilities."}, {"start": 5288.0, "end": 5297.0, "text": " That's very often used, and we used it as well here. So this is the forward pass, and that's how we made the neural net output probabilities."}, {"start": 5297.0, "end": 5313.0, "text": " Now, you'll notice that all of this, this entire forward pass, is made up of differentiable layers. Everything here we can back propagate through. And we saw some of the back propagation in micrograd."}, {"start": 5313.0, "end": 5324.0, "text": " This is just multiplication and addition. All that's happening here is just multiply and add, and we know how to back propagate through them. Exponentiation, we know how to back propagate through."}, {"start": 5324.0, "end": 5337.0, "text": " And then here we are summing, and sum is easily back propagated as well, and division as well. So everything here is a differentiable operation.
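Collecting the forward pass summarized here into one sketch (the generator seed is an illustrative assumption, not necessarily the one used in the video):

```python
import torch
import torch.nn.functional as F

xs = torch.tensor([0, 5, 13, 13, 1])           # inputs:  . e m m a
ys = torch.tensor([5, 13, 13, 1, 0])           # targets: e m m a .

g = torch.Generator().manual_seed(42)          # illustrative seed
W = torch.randn((27, 27), generator=g)         # 27 neurons, 27 inputs each

xenc = F.one_hot(xs, num_classes=27).float()   # (5, 27) one-hot inputs
logits = xenc @ W                              # log counts, (5, 27)
counts = logits.exp()                          # "fake counts"
probs = counts / counts.sum(1, keepdim=True)   # softmax: each row sums to 1

print(probs.shape, probs[0].sum())             # torch.Size([5, 27]) ~1.0
```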
And we can back propagate through."}, {"start": 5337.0, "end": 5346.0, "text": " Now, we achieve these probabilities, which are five by 27 for every single example. We have a vector of probabilities that sum to one."}, {"start": 5346.0, "end": 5356.0, "text": " And then here I wrote a bunch of stuff to sort of like break down the examples. So we have five examples making up Emma, right?"}, {"start": 5356.0, "end": 5368.0, "text": " And there are five by grams inside Emma. So by gram example, a by gram example one is that E is the beginning character right after dot."}, {"start": 5368.0, "end": 5381.0, "text": " And the indexes for these are zero and five. So then we feed in a zero. That's the input as a neural net. We get probabilities from the neural net that are 27 numbers."}, {"start": 5381.0, "end": 5394.0, "text": " And then the label is five because he actually comes after dot. So that's the label. And then we use this label five to index into the probability distribution here."}, {"start": 5394.0, "end": 5404.0, "text": " So this index five here is zero one two three four five. It's this number here, which is here."}, {"start": 5404.0, "end": 5417.0, "text": " So that's basically the probability assigned by the neural net to the actual correct character. You see that the net work currently thinks that this next character that E following dot is only one percent likely, which is of course not very good, right?"}, {"start": 5417.0, "end": 5427.0, "text": " Because this actually is a training example. And the network thinks that this is currently very, very unlikely. But that's just because we didn't get very lucky in generating a good setting of W."}, {"start": 5427.0, "end": 5436.0, "text": " So right now this network thinks it's unlikely and zero point zero one is not a good outcome. So the log likelihood then is very negative."}, {"start": 5436.0, "end": 5445.0, "text": " And the negative log likelihood is very positive. And so four is a very high negative log likelihood. And that means we're going to have a high loss."}, {"start": 5445.0, "end": 5452.0, "text": " Because what is the loss? The loss is just the average negative log likelihood."}, {"start": 5452.0, "end": 5461.0, "text": " And the second character is E and you see here that also the network thought that M following E is very unlikely one percent."}, {"start": 5461.0, "end": 5472.0, "text": " Of the for M following M it thought it was 2% and for a following M it actually thought it was 7% likely. So just by chance, this one actually has a pretty good probability."}, {"start": 5472.0, "end": 5478.0, "text": " And therefore pretty low negative log likelihood. And finally here it thought this was 1% likely."}, {"start": 5478.0, "end": 5487.0, "text": " So overall our average negative log likelihood, which is the loss, the total loss that summarizes basically the how well this network currently works."}, {"start": 5487.0, "end": 5492.0, "text": " At least on this one word, not on the full data suggest the one word is 3.76."}, {"start": 5492.0, "end": 5496.0, "text": " Which is actually very fairly high loss. This is not a very good setting of W's."}, {"start": 5496.0, "end": 5501.0, "text": " Now here's what we can do. We're currently getting 3.76."}, {"start": 5501.0, "end": 5508.0, "text": " We can actually come here and we can change our W. We can resample it. So let me just add one to have a different seed."}, {"start": 5508.0, "end": 5518.0, "text": " And then we get a different W. And then we can rerun this. 
And with this different seed, with this different setting of W's, we now get 3.37."}, {"start": 5518.0, "end": 5528.0, "text": " So this is a much better W. And it's better because the probabilities just happen to come out higher for the characters that actually are next."}, {"start": 5528.0, "end": 5534.0, "text": " And so you can imagine actually just resampling this, you know, we can try two."}, {"start": 5534.0, "end": 5540.0, "text": " So, okay, this was not very good. Let's try one more. We can try three."}, {"start": 5540.0, "end": 5549.0, "text": " Okay, this was terrible setting because we have a very high loss. So anyway, I'm going to erase this."}, {"start": 5549.0, "end": 5559.0, "text": " What I'm doing here, which is just guess and check of randomly assigning parameters and seeing if the network is good. That is amateur hour. That's not how you optimize in your alert."}, {"start": 5559.0, "end": 5565.0, "text": " The way you optimize in your alert is you start with some random guess. And we're going to commit to this one, even though it's not very good."}, {"start": 5565.0, "end": 5568.0, "text": " But not the big deal is we have a loss function."}, {"start": 5568.0, "end": 5585.0, "text": " So this loss is made up only of the furniture operations. And we can minimize the loss by tuning W's by computing the gradients of the loss with respect to these W matrices."}, {"start": 5585.0, "end": 5593.0, "text": " And so then we can tune W to minimize the loss and find a good setting of W using gradient based optimization. So let's see how that will work."}, {"start": 5593.0, "end": 5602.0, "text": " Now, things are actually going to look almost identical to what we had with micrograd. So here I pulled up the lecture from micrograd, the notebook."}, {"start": 5602.0, "end": 5608.0, "text": " It's from this repository. And when I scroll all the way to the end where we left off with micrograd, we had something very, very similar."}, {"start": 5608.0, "end": 5617.0, "text": " We had a number of input examples. In this case, we had four input examples inside X's. And we had their targets. These are targets."}, {"start": 5617.0, "end": 5624.0, "text": " Just like here we have our X's now, but we have five of them. And they're now integers instead of vectors."}, {"start": 5624.0, "end": 5631.0, "text": " But we're going to convert our integers to vectors, except our vectors will be 27 large instead of three large."}, {"start": 5631.0, "end": 5640.0, "text": " And then here what we did is first we did a forward pass where we ran a neural net on all the inputs to get predictions."}, {"start": 5640.0, "end": 5650.0, "text": " Our neural net at the time this n effects was a net of multilayer perceptron. Our neural net is going to look different because our neural net is just a single layer."}, {"start": 5650.0, "end": 5655.0, "text": " Single linear layer followed by a softmax. So that's our neural net."}, {"start": 5655.0, "end": 5663.0, "text": " And the loss here was the mean squared error. So we simply subtracted the prediction from the ground truth and squared it and some they roll up."}, {"start": 5663.0, "end": 5676.0, "text": " And that was the loss. And loss was the single number that summarized the quality of the neural net. 
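The micrograd-style loop being recalled here follows the usual pattern: forward pass, loss, zero the gradients, backward pass, nudge the parameters against the gradient. A toy, self-contained sketch of that pattern (not the actual micrograd notebook code):

```python
import torch

# Fit a single weight w so that w * x matches y, with a mean squared error loss.
x = torch.tensor([1.0, 2.0, 3.0, 4.0])
y = torch.tensor([2.0, 4.0, 6.0, 8.0])
w = torch.randn(1, requires_grad=True)

for step in range(50):
    ypred = w * x                      # forward pass
    loss = ((ypred - y) ** 2).mean()   # mean squared error
    w.grad = None                      # reset the gradient
    loss.backward()                    # backward pass fills in w.grad
    w.data += -0.1 * w.grad            # nudge against the gradient

print(w.item())                        # ~2.0
```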
And when loss is low, like almost zero, that means the neural net is predicting correctly."}, {"start": 5676.0, "end": 5682.0, "text": " So we had a single number that that summarized the performance of the neural net."}, {"start": 5682.0, "end": 5687.0, "text": " And everything here was differentiable and was stored in massive compute graph."}, {"start": 5687.0, "end": 5699.0, "text": " And then we iterated over all the parameters we made sure that the gradients are set to zero. And we called lost a backward and lost a backward initiated back propagation at the final output node of loss."}, {"start": 5699.0, "end": 5706.0, "text": " Right. So yeah, remember these expressions. We had lost all the way at the end. We start that propagation and we went all the way back."}, {"start": 5706.0, "end": 5714.0, "text": " And we made sure that we populated all the parameters dot grad. So that grad started at zero, but back propagation filled it in."}, {"start": 5714.0, "end": 5727.0, "text": " And then in the update, we iterated over the parameters and we simply did a parameter update where every single element of our parameters was nudged in the opposite direction of the gradient."}, {"start": 5727.0, "end": 5738.0, "text": " And so we're going to do the exact same thing here. So I'm going to pull this up on the side here."}, {"start": 5738.0, "end": 5746.0, "text": " So that we have it available and we're actually going to do the exact same thing. So this was the forward pass. So where we did this."}, {"start": 5746.0, "end": 5752.0, "text": " And props is our white bread. So now we have to evaluate the loss, but we're not using the mean square there."}, {"start": 5752.0, "end": 5758.0, "text": " We're using the negative log likelihood because we are doing classification. We're not doing regression as it's called."}, {"start": 5758.0, "end": 5761.0, "text": " So here we want to calculate loss."}, {"start": 5761.0, "end": 5770.0, "text": " Now the way we calculated is is just this average negative log likelihood. Now this probs here."}, {"start": 5770.0, "end": 5780.0, "text": " Has a shape of five by 27. And so to get all the we basically want to pluck out the probabilities at the correct indices here."}, {"start": 5780.0, "end": 5789.0, "text": " So in particular, because the labels are stored here in the array wise, basically what we're after is for the first example, we're looking at probability of five."}, {"start": 5789.0, "end": 5800.0, "text": " Right, at the index five. For the second example at the second row or row index one, we are interested in the probability of science to index 13."}, {"start": 5800.0, "end": 5807.0, "text": " At the second example, we also have 13. At the third row, we want one."}, {"start": 5807.0, "end": 5814.0, "text": " And at the last row, which is four, we want zero. So these are the probabilities we're interested in. 
Right."}, {"start": 5814.0, "end": 5819.0, "text": " And you can see that they're not amazing as we saw above."}, {"start": 5819.0, "end": 5827.0, "text": " So these are the probabilities we want, but we want like a more efficient way to access these probabilities, not just listing them out in a tuple like this."}, {"start": 5827.0, "end": 5842.0, "text": " So it turns out that the way to this in PyTorch, one of the ways at least, is we can basically pass in all of these, sorry about that, all of these integers and vectors."}, {"start": 5842.0, "end": 5847.0, "text": " So the these ones, you see how they're just zero, one, two, three, four."}, {"start": 5847.0, "end": 5854.0, "text": " We can actually create that using MP, not MP, sorry, torch dot arrange of five zero, one, two, three, four."}, {"start": 5854.0, "end": 5861.0, "text": " So we can index here with torch dot arrange of five. And here we index with wise."}, {"start": 5861.0, "end": 5869.0, "text": " And you see that that gives us exactly these numbers."}, {"start": 5869.0, "end": 5876.0, "text": " So that plucks out the probabilities of that the neural network assigns to the correct next character."}, {"start": 5876.0, "end": 5883.0, "text": " Now we take those probabilities and we don't we actually look at the log probability. So we want to dot log."}, {"start": 5883.0, "end": 5894.0, "text": " And then we want to just average that up. So take the mean of all that. And then it's the negative average log likelihood that is the loss."}, {"start": 5894.0, "end": 5903.0, "text": " So the loss here is 3.7 something and you see that this loss 3.76, 3.76 is exactly as we've obtained before."}, {"start": 5903.0, "end": 5909.0, "text": " But this is a vectorized form of that expression. So we get the same loss."}, {"start": 5909.0, "end": 5916.0, "text": " And the same loss we can consider sort of as part of this forward pass and we've achieved here now loss."}, {"start": 5916.0, "end": 5924.0, "text": " Okay, so we made our way all the way to loss. We define the forward pass. We forwarded the network and the loss. Now we're ready to do backward pass."}, {"start": 5924.0, "end": 5928.0, "text": " So backward pass."}, {"start": 5928.0, "end": 5936.0, "text": " We want to first make sure that all the gradients are reset. So they're at zero. Now in pie torch, you can set the gradients to be zero."}, {"start": 5936.0, "end": 5946.0, "text": " But you can also just set it to none and setting it to none is more efficient and pie torch will interpret none as like a lack of a gradient and is the same as zeros."}, {"start": 5946.0, "end": 5950.0, "text": " So this is a way to set to zero the gradient."}, {"start": 5950.0, "end": 5954.0, "text": " And now we do lost the backward."}, {"start": 5954.0, "end": 5966.0, "text": " Before we do lost that backward, we need one more thing. If you remember from micro-grad, pie torch actually requires that we pass in requires grad is true."}, {"start": 5966.0, "end": 5974.0, "text": " So that we tell pie torch that we are interested in calculating gradient for this lead tensor by default. 
This is false."}, {"start": 5974.0, "end": 5981.0, "text": " So let me recalculate with that and then setting none and lost that backward."}, {"start": 5981.0, "end": 5993.0, "text": " Now something magical happened when lost the backward was run because pie torch just like micro-grad, when we did the forward pass here, it keeps track of all the operations under the hood."}, {"start": 5993.0, "end": 6001.0, "text": " It builds a full computational graph. Just like the graphs we produced in micro-grad, those graphs exist inside pie torch."}, {"start": 6001.0, "end": 6010.0, "text": " And so it knows all the dependencies and all the mathematical operations of everything. And when you then calculate the loss, we can call a dot backward on it."}, {"start": 6010.0, "end": 6020.0, "text": " And that backward then fills in the gradients of all the intermediates all the way back to W's, which are the parameters of our neural net."}, {"start": 6020.0, "end": 6029.0, "text": " So now we can do WL grad and we see that it has structure. There's stuff inside it."}, {"start": 6029.0, "end": 6040.0, "text": " And these gradients, every single element here, so W.shape is 27 by 27, W grads shape is the same, 27 by 27."}, {"start": 6040.0, "end": 6048.0, "text": " And every element of W.grad is telling us the influence of that weight on the loss function."}, {"start": 6048.0, "end": 6061.0, "text": " So for example, this number all the way here, if this element, the zero zero element of W, because the gradient is positive, it's telling us that this has a positive influence on the loss."}, {"start": 6061.0, "end": 6078.0, "text": " Slightly nudging, W slightly taking W zero zero and adding a small h to it would increase the loss, mildly, because this gradient is positive. Some of these gradients are also negative."}, {"start": 6078.0, "end": 6086.0, "text": " So that's telling us about the gradient information. And we can use this gradient information to update the weights of this neural network."}, {"start": 6086.0, "end": 6096.0, "text": " So let's not do the update. It's going to be very similar to what we had in micrograd. We need no loop over all the parameters because we only have one parameter tensor and that is W."}, {"start": 6096.0, "end": 6109.0, "text": " So we simply do W dot data plus equals. We can actually copy this almost exactly negative 0.1 times W dot grad."}, {"start": 6109.0, "end": 6119.0, "text": " And that would be the update to the tensor. So that updates the tensor."}, {"start": 6119.0, "end": 6130.0, "text": " And because the tensor is updated, we would expect that now the loss should decrease. So here, if I print loss."}, {"start": 6130.0, "end": 6145.0, "text": " It was 3.76, right? So we've updated the W here. So if I recalculate forward pass, loss now should be slightly lower. So 3.76 goes to 3.74."}, {"start": 6145.0, "end": 6155.0, "text": " And then we can again set to set grad to none and backward update. And now the parameters changed again."}, {"start": 6155.0, "end": 6162.0, "text": " So if we recalculate the forward pass, we expect a lower loss again 3.72."}, {"start": 6162.0, "end": 6175.0, "text": " And this is again doing the we're now doing reading the set. And when we achieve a low loss, that will mean that the network is assigning high probabilities to the correct next characters."}, {"start": 6175.0, "end": 6186.0, "text": " Okay, so I rearranged everything and I put it all together from scratch. So here is where we construct our data set of diagrams. 
You see that we are still iterating only over the first word, Emma."}, {"start": 6186.0, "end": 6196.0, "text": " I'm going to change that in a second. I added a number that counts the number of elements in xs so that we explicitly see that the number of examples is 5."}, {"start": 6196.0, "end": 6200.0, "text": " Because currently we're just working with Emma. There's five bigrams there."}, {"start": 6200.0, "end": 6209.0, "text": " And here I added a loop of exactly what we had before. So we had 10 iterations of gradient descent: forward pass, backward pass, and an update."}, {"start": 6209.0, "end": 6218.0, "text": " And so running these two cells, initialization and gradient descent, gives us some improvement on the loss function."}, {"start": 6218.0, "end": 6226.0, "text": " But now I want to use all the words. And there's not five, but 228,000 bigrams now."}, {"start": 6226.0, "end": 6236.0, "text": " However, this should require no modification whatsoever. Everything should just run, because all the code we wrote doesn't care if there are five bigrams or 228,000 bigrams."}, {"start": 6236.0, "end": 6240.0, "text": " And everything should just work. So you see that this will just run."}, {"start": 6240.0, "end": 6244.0, "text": " But now we are optimizing over the entire training set of all the bigrams."}, {"start": 6244.0, "end": 6250.0, "text": " And you see now that we are decreasing very slightly. So actually we can probably afford a larger learning rate."}, {"start": 6250.0, "end": 6257.0, "text": " And probably even a much larger learning rate."}, {"start": 6257.0, "end": 6263.0, "text": " Even 50 seems to work on this very, very simple example."}, {"start": 6263.0, "end": 6269.0, "text": " So let me reinitialize and let's run 100 iterations."}, {"start": 6269.0, "end": 6273.0, "text": " See what happens."}, {"start": 6273.0, "end": 6284.0, "text": " Okay. We seem to be coming up to some pretty good losses here. 2.47. Let me run 100 more."}, {"start": 6284.0, "end": 6287.0, "text": " What is the number that we expect, by the way, in the loss?"}, {"start": 6287.0, "end": 6292.0, "text": " We expect to get something around what we had originally, actually."}, {"start": 6292.0, "end": 6299.0, "text": " So all the way back, if you remember, in the beginning of this video, when we optimized just by counting,"}, {"start": 6299.0, "end": 6304.0, "text": " our loss was roughly 2.47 after we added smoothing."}, {"start": 6304.0, "end": 6310.0, "text": " But before smoothing, we had roughly 2.45 likelihood... sorry, loss."}, {"start": 6310.0, "end": 6314.0, "text": " And so that's actually roughly the vicinity of what we expect to achieve."}, {"start": 6314.0, "end": 6319.0, "text": " But before, we achieved it by counting, and here we are achieving roughly the same result,"}, {"start": 6319.0, "end": 6326.0, "text": " but with gradient-based optimization. 
So we come to about 2.46, 2.45, etc."}, {"start": 6326.0, "end": 6330.0, "text": " And that makes sense because fundamentally we're not taking any additional information."}, {"start": 6330.0, "end": 6333.0, "text": " We're still just taking in the previous character and trying to predict the next one,"}, {"start": 6333.0, "end": 6338.0, "text": " but instead of doing it explicitly by counting and normalizing,"}, {"start": 6338.0, "end": 6340.0, "text": " we are doing it with gradient-based learning."}, {"start": 6340.0, "end": 6345.0, "text": " And it just so happens that the explicit approach happens to very well optimize the loss function."}, {"start": 6345.0, "end": 6348.0, "text": " Without any need for a gradient-based optimization,"}, {"start": 6348.0, "end": 6353.0, "text": " because the setup for bi-gram language models is so straightforward, so simple,"}, {"start": 6353.0, "end": 6359.0, "text": " we can just afford to estimate as probably is directly and maintain them in a table."}, {"start": 6359.0, "end": 6363.0, "text": " But the gradient-based approach is significantly more flexible."}, {"start": 6363.0, "end": 6371.0, "text": " So we've actually gained a lot because what we can do now is we can expand this approach"}, {"start": 6371.0, "end": 6373.0, "text": " and complexify the neural net."}, {"start": 6373.0, "end": 6376.0, "text": " So currently we're just taking a single character and feeding into a neural net,"}, {"start": 6376.0, "end": 6378.0, "text": " and the neural is extremely simple."}, {"start": 6378.0, "end": 6380.0, "text": " But we're about to iterate on this substantially."}, {"start": 6380.0, "end": 6383.0, "text": " We're going to be taking multiple previous characters,"}, {"start": 6383.0, "end": 6387.0, "text": " and we're going to be feeding them into increasingly more complex neural nets."}, {"start": 6387.0, "end": 6392.0, "text": " But fundamentally, the output of the neural net will always just be low jits."}, {"start": 6392.0, "end": 6395.0, "text": " And those low jits will go through the exact same transformation."}, {"start": 6395.0, "end": 6399.0, "text": " We are going to take them through a softmax, calculate the loss function,"}, {"start": 6399.0, "end": 6403.0, "text": " and the negative log likelihood, and do gradient-based optimization."}, {"start": 6403.0, "end": 6407.0, "text": " And so actually, as we complexify the neural nets,"}, {"start": 6407.0, "end": 6412.0, "text": " and work all the way up to transformers, none of this will really fundamentally change."}, {"start": 6412.0, "end": 6414.0, "text": " None of this will fundamentally change."}, {"start": 6414.0, "end": 6417.0, "text": " The only thing that will change is the way we do the forward pass,"}, {"start": 6417.0, "end": 6423.0, "text": " or we've taken some previous characters and calculated logits for the next character in a sequence."}, {"start": 6423.0, "end": 6429.0, "text": " That will become more complex, and I will use the same machinery to optimize it."}, {"start": 6429.0, "end": 6435.0, "text": " And it's not obvious how we would have extended this bi-gram approach"}, {"start": 6435.0, "end": 6439.0, "text": " into the case where there are many more characters at the input."}, {"start": 6439.0, "end": 6442.0, "text": " Because eventually these tables would get way too large,"}, {"start": 6442.0, "end": 6447.0, "text": " because there's way too many combinations of what previous characters could be."}, {"start": 6447.0, "end": 6452.0, "text": " If you only have one 
previous character, we can just keep everything in a table, the counts."}, {"start": 6452.0, "end": 6455.0, "text": " But if you have the last 10 characters that are input,"}, {"start": 6455.0, "end": 6457.0, "text": " we can't actually keep everything in a table anymore."}, {"start": 6457.0, "end": 6460.0, "text": " So this is fundamentally an unscatable approach,"}, {"start": 6460.0, "end": 6463.0, "text": " and the neural network approach is significantly more scalable,"}, {"start": 6463.0, "end": 6467.0, "text": " and it's something that actually we can improve on over time."}, {"start": 6467.0, "end": 6469.0, "text": " So that's where we will be digging next."}, {"start": 6469.0, "end": 6471.0, "text": " I wanted to point out two more things."}, {"start": 6471.0, "end": 6476.0, "text": " Number one, I want you to notice that this x and k here,"}, {"start": 6476.0, "end": 6479.0, "text": " this is made up of one hot vectors,"}, {"start": 6479.0, "end": 6483.0, "text": " and then those one hot vectors are multiplied by this w matrix."}, {"start": 6483.0, "end": 6488.0, "text": " And we think of this as a multiple neurons being forwarded in a fully connected manner."}, {"start": 6488.0, "end": 6492.0, "text": " But actually what's happening here is that, for example,"}, {"start": 6492.0, "end": 6497.0, "text": " if you have a one hot vector here that has a one at say the fifth dimension,"}, {"start": 6497.0, "end": 6501.0, "text": " then because of the way the matrix multiplication works,"}, {"start": 6501.0, "end": 6507.0, "text": " multiplying that one hot vector with w actually ends up plucking out the fifth row of w."}, {"start": 6507.0, "end": 6511.0, "text": " A lot of logits would become just the fifth row of w,"}, {"start": 6511.0, "end": 6515.0, "text": " and that's because of the way the matrix multiplication works."}, {"start": 6515.0, "end": 6519.0, "text": " So that's actually what ends up happening."}, {"start": 6519.0, "end": 6523.0, "text": " So, but that's actually exactly what happened before."}, {"start": 6523.0, "end": 6526.0, "text": " Because remember, all the way up here,"}, {"start": 6526.0, "end": 6529.0, "text": " we have a by-gram, we took the first character,"}, {"start": 6529.0, "end": 6534.0, "text": " and then that first character indexed into a row of this array here."}, {"start": 6534.0, "end": 6538.0, "text": " And that row gave us the probability distribution for the next character."}, {"start": 6538.0, "end": 6546.0, "text": " So the first character was used as a lookup into a matrix here to get the probability distribution."}, {"start": 6546.0, "end": 6548.0, "text": " Well, that's actually exactly what's happening here."}, {"start": 6548.0, "end": 6550.0, "text": " Because we're taking the index."}, {"start": 6550.0, "end": 6553.0, "text": " We're encoding it as one hot and multiplying it by w."}, {"start": 6553.0, "end": 6560.0, "text": " So, logits literally becomes the appropriate row of w."}, {"start": 6560.0, "end": 6562.0, "text": " And that gets just as before,"}, {"start": 6562.0, "end": 6564.0, "text": " expedited to create the counts,"}, {"start": 6564.0, "end": 6567.0, "text": " and then normalized and becomes probability."}, {"start": 6567.0, "end": 6574.0, "text": " So, this w here is literally the same as this array here."}, {"start": 6574.0, "end": 6579.0, "text": " But, w, remember, is the log counts, not the counts."}, {"start": 6579.0, "end": 6582.0, "text": " So it's more precise to say that w, expedentially,"}, {"start": 6582.0, 
"end": 6586.0, "text": " w.x is this array."}, {"start": 6586.0, "end": 6589.0, "text": " But this array was filled in by counting,"}, {"start": 6589.0, "end": 6592.0, "text": " and by basically,"}, {"start": 6592.0, "end": 6594.0, "text": " popularly in the counts of bi-grams,"}, {"start": 6594.0, "end": 6596.0, "text": " whereas in the gradient base framework,"}, {"start": 6596.0, "end": 6598.0, "text": " we initialize it randomly,"}, {"start": 6598.0, "end": 6603.0, "text": " and then we let the loss guide us to arrive at the exact same array."}, {"start": 6603.0, "end": 6606.0, "text": " So, this array, exactly here,"}, {"start": 6606.0, "end": 6610.0, "text": " is basically the array w at the end of optimization,"}, {"start": 6610.0, "end": 6612.0, "text": " except we arrive at it,"}, {"start": 6612.0, "end": 6615.0, "text": " piece by piece by following the loss."}, {"start": 6615.0, "end": 6618.0, "text": " And that's why we also obtain the same loss function at the end."}, {"start": 6618.0, "end": 6620.0, "text": " And the second note is, if I come here,"}, {"start": 6620.0, "end": 6624.0, "text": " remember the smoothing where we added fake counts to our counts"}, {"start": 6624.0, "end": 6628.0, "text": " in order to smooth out and make more uniform"}, {"start": 6628.0, "end": 6631.0, "text": " the distributions of these probabilities."}, {"start": 6631.0, "end": 6634.0, "text": " And that prevented us from assigning zero probability"}, {"start": 6634.0, "end": 6637.0, "text": " to any one bi-gram."}, {"start": 6637.0, "end": 6640.0, "text": " Now, if I increase the count here,"}, {"start": 6640.0, "end": 6643.0, "text": " what's happening to the probability?"}, {"start": 6643.0, "end": 6645.0, "text": " As I increase the count,"}, {"start": 6645.0, "end": 6648.0, "text": " probability becomes more and more uniform, right?"}, {"start": 6648.0, "end": 6652.0, "text": " Because these counts go only up to like 900 or whatever."}, {"start": 6652.0, "end": 6655.0, "text": " So if I'm adding plus a million to every single number here,"}, {"start": 6655.0, "end": 6657.0, "text": " you can see how the row,"}, {"start": 6657.0, "end": 6659.0, "text": " and its probability, then, when we divide,"}, {"start": 6659.0, "end": 6662.0, "text": " it's just going to become more and more close to exactly even"}, {"start": 6662.0, "end": 6665.0, "text": " probability in a form distribution."}, {"start": 6665.0, "end": 6667.0, "text": " It turns out that the gradient base framework"}, {"start": 6667.0, "end": 6670.0, "text": " has an equivalent to smoothing."}, {"start": 6670.0, "end": 6673.0, "text": " In particular,"}, {"start": 6673.0, "end": 6676.0, "text": " think through these w's here,"}, {"start": 6676.0, "end": 6678.0, "text": " which we initialize randomly."}, {"start": 6678.0, "end": 6682.0, "text": " We could also think about initializing w's to be zero."}, {"start": 6682.0, "end": 6685.0, "text": " If all the entries of w are zero,"}, {"start": 6685.0, "end": 6688.0, "text": " then you'll see that logits will become all zero."}, {"start": 6688.0, "end": 6690.0, "text": " And then, expd."}, {"start": 6690.0, "end": 6692.0, "text": " And those logits becomes all one."}, {"start": 6692.0, "end": 6695.0, "text": " And then, the probabilities turn out to be exactly uniform."}, {"start": 6695.0, "end": 6698.0, "text": " So basically, when w's are all equal to each other,"}, {"start": 6698.0, "end": 6701.0, "text": " or say, especially zero,"}, {"start": 6701.0, "end": 6704.0, "text": " then 
the probabilities come out completely uniform."}, {"start": 6704.0, "end": 6709.0, "text": " So, trying to incentivize w to be near zero"}, {"start": 6709.0, "end": 6713.0, "text": " is basically equivalent to label smoothing."}, {"start": 6713.0, "end": 6715.0, "text": " And the more you incentivize that in a loss function,"}, {"start": 6715.0, "end": 6718.0, "text": " the more smooth distribution you're going to achieve."}, {"start": 6718.0, "end": 6721.0, "text": " So this brings us to something that's called regularization,"}, {"start": 6721.0, "end": 6724.0, "text": " where we can actually augment the loss function"}, {"start": 6724.0, "end": 6728.0, "text": " to have a small component that we call a regularization loss."}, {"start": 6728.0, "end": 6731.0, "text": " In particular, what we're going to do is we can take w."}, {"start": 6731.0, "end": 6732.0, "text": " And we can, for example,"}, {"start": 6732.0, "end": 6734.0, "text": " square all of its entries."}, {"start": 6734.0, "end": 6738.0, "text": " And then we can, oops, sorry about that."}, {"start": 6738.0, "end": 6743.0, "text": " We can take all the entries of w and we can sum them."}, {"start": 6743.0, "end": 6745.0, "text": " And because we're squaring,"}, {"start": 6745.0, "end": 6748.0, "text": " there will be no signs anymore."}, {"start": 6748.0, "end": 6751.0, "text": " Natives and positives all get squashed to be positive numbers."}, {"start": 6751.0, "end": 6755.0, "text": " And then, the way this works is you achieve zero loss"}, {"start": 6755.0, "end": 6757.0, "text": " if w is exactly or zero."}, {"start": 6757.0, "end": 6761.0, "text": " But if w has non-zero numbers, you accumulate loss."}, {"start": 6761.0, "end": 6764.0, "text": " And so we can actually take this and we can add it on here."}, {"start": 6764.0, "end": 6771.0, "text": " So we can do something like loss plus w square dot sum."}, {"start": 6771.0, "end": 6774.0, "text": " Or let's actually, instead of sum, let's take a mean."}, {"start": 6774.0, "end": 6777.0, "text": " Because otherwise the sum gets too large."}, {"start": 6777.0, "end": 6781.0, "text": " So mean is like a little bit more manageable."}, {"start": 6781.0, "end": 6783.0, "text": " And then we have a regularization loss here."}, {"start": 6783.0, "end": 6785.0, "text": " I'll say 0.01 times."}, {"start": 6785.0, "end": 6786.0, "text": " Or something like that."}, {"start": 6786.0, "end": 6789.0, "text": " You can choose the regularization strength."}, {"start": 6789.0, "end": 6792.0, "text": " And then we can just optimize this."}, {"start": 6792.0, "end": 6795.0, "text": " And now this optimization actually has two components."}, {"start": 6795.0, "end": 6798.0, "text": " Not only is it trying to make all the probabilities work out."}, {"start": 6798.0, "end": 6799.0, "text": " But in addition to that,"}, {"start": 6799.0, "end": 6804.0, "text": " there's an additional component that simultaneously tries to make all w's b zero."}, {"start": 6804.0, "end": 6806.0, "text": " Because if w's are non-zero, you feel a loss."}, {"start": 6806.0, "end": 6810.0, "text": " And so minimizing this, the only way to achieve that is for w to b zero."}, {"start": 6810.0, "end": 6813.0, "text": " And so you can think of this as adding like a spring force,"}, {"start": 6813.0, "end": 6817.0, "text": " or like a gravity force, that pushes w to b zero."}, {"start": 6817.0, "end": 6819.0, "text": " So w wants to b zero."}, {"start": 6819.0, "end": 6821.0, "text": " And the probabilities want to be 
uniform."}, {"start": 6821.0, "end": 6825.0, "text": " But they also simultaneously want to match up your probabilities"}, {"start": 6825.0, "end": 6827.0, "text": " as indicated by the data."}, {"start": 6827.0, "end": 6830.0, "text": " And so the strength of this regularization"}, {"start": 6830.0, "end": 6836.0, "text": " is exactly controlling the amount of counts that you add here."}, {"start": 6836.0, "end": 6844.0, "text": " Adding a lot more counts here corresponds to increasing this number."}, {"start": 6844.0, "end": 6846.0, "text": " Because the more you increase it,"}, {"start": 6846.0, "end": 6849.0, "text": " the more this part of the loss function dominates this part."}, {"start": 6849.0, "end": 6853.0, "text": " And the more these weights will be unable to grow."}, {"start": 6853.0, "end": 6858.0, "text": " Because as they grow, they accumulate way too much loss."}, {"start": 6858.0, "end": 6861.0, "text": " And so if this is strong enough,"}, {"start": 6861.0, "end": 6864.0, "text": " then we are not able to overcome the force of this loss."}, {"start": 6864.0, "end": 6868.0, "text": " And we will never, and basically everything will be uniform predictions."}, {"start": 6868.0, "end": 6870.0, "text": " So I thought that's kind of cool."}, {"start": 6870.0, "end": 6872.0, "text": " Okay, and lastly, before we wrap up,"}, {"start": 6872.0, "end": 6876.0, "text": " I wanted to show you how you would sample from this neural net model."}, {"start": 6876.0, "end": 6880.0, "text": " And I copy-pasted the sampling code from before,"}, {"start": 6880.0, "end": 6884.0, "text": " where remember that we sampled five times."}, {"start": 6884.0, "end": 6886.0, "text": " And all we did was start at zero,"}, {"start": 6886.0, "end": 6890.0, "text": " we grabbed the current ix row of p."}, {"start": 6890.0, "end": 6892.0, "text": " And that was our probability row,"}, {"start": 6892.0, "end": 6894.0, "text": " from which we sampled the next index,"}, {"start": 6894.0, "end": 6898.0, "text": " and just accumulated that and break when zero."}, {"start": 6898.0, "end": 6903.0, "text": " And running this gave us these results."}, {"start": 6903.0, "end": 6907.0, "text": " I still have the p in memory, so this is fine."}, {"start": 6907.0, "end": 6912.0, "text": " Now, this p doesn't come from the row of p."}, {"start": 6912.0, "end": 6915.0, "text": " Instead it comes from this neural net."}, {"start": 6915.0, "end": 6917.0, "text": " First, we take ix,"}, {"start": 6917.0, "end": 6922.0, "text": " and we encode it into a one-hot row of x-ank."}, {"start": 6922.0, "end": 6925.0, "text": " This x-ank multiplies rw,"}, {"start": 6925.0, "end": 6929.0, "text": " which really just plugs out the row of w corresponding to ix."}, {"start": 6929.0, "end": 6930.0, "text": " Really, that's what's happening."}, {"start": 6930.0, "end": 6932.0, "text": " And that gets our logits,"}, {"start": 6932.0, "end": 6935.0, "text": " and then we normalize those logits,"}, {"start": 6935.0, "end": 6938.0, "text": " and we actually exponential to get counts,"}, {"start": 6938.0, "end": 6941.0, "text": " and then we normalize to get the distribution,"}, {"start": 6941.0, "end": 6944.0, "text": " and then we can sample from the distribution."}, {"start": 6944.0, "end": 6947.0, "text": " So if I run this,"}, {"start": 6947.0, "end": 6950.0, "text": " kind of anti-climatic or climatic,"}, {"start": 6950.0, "end": 6952.0, "text": " depending on how you look at it,"}, {"start": 6952.0, "end": 6955.0, "text": " but we get 
the exact same result."}, {"start": 6955.0, "end": 6958.0, "text": " And that's because this is the identical model."}, {"start": 6958.0, "end": 6961.0, "text": " Not only does it achieve the same loss,"}, {"start": 6961.0, "end": 6962.0, "text": " but as I mentioned,"}, {"start": 6962.0, "end": 6964.0, "text": " these are identical models,"}, {"start": 6964.0, "end": 6967.0, "text": " but we came to this answer in a very different way,"}, {"start": 6967.0, "end": 6969.0, "text": " and it's got a very different interpretation."}, {"start": 6969.0, "end": 6970.0, "text": " But fundamentally,"}, {"start": 6970.0, "end": 6971.0, "text": " this is basically the same model,"}, {"start": 6971.0, "end": 6973.0, "text": " and gives the same samples here."}, {"start": 6973.0, "end": 6975.0, "text": " And so, that's kind of cool."}, {"start": 6975.0, "end": 6978.0, "text": " Okay, so we've actually covered a lot of ground."}, {"start": 6978.0, "end": 6982.0, "text": " We introduced the bi-gram character level language model."}, {"start": 6982.0, "end": 6984.0, "text": " We saw how we can train the model,"}, {"start": 6984.0, "end": 6985.0, "text": " how we can sample from the model,"}, {"start": 6985.0, "end": 6988.0, "text": " and how we can evaluate the quality of the model,"}, {"start": 6988.0, "end": 6990.0, "text": " using the negative log likelihood loss."}, {"start": 6990.0, "end": 6993.0, "text": " And then we actually train the model in two completely different ways,"}, {"start": 6993.0, "end": 6996.0, "text": " that actually get the same result and the same model."}, {"start": 6996.0, "end": 6998.0, "text": " In the first way, we just count it up,"}, {"start": 6998.0, "end": 7000.0, "text": " the frequency of all the bi-grams,"}, {"start": 7000.0, "end": 7001.0, "text": " and normalized."}, {"start": 7001.0, "end": 7002.0, "text": " In the second way,"}, {"start": 7002.0, "end": 7006.0, "text": " we used the negative log likelihood loss,"}, {"start": 7006.0, "end": 7007.0, "text": " as a guide,"}, {"start": 7007.0, "end": 7010.0, "text": " to optimizing the counts matrix,"}, {"start": 7010.0, "end": 7011.0, "text": " or the counts array,"}, {"start": 7011.0, "end": 7013.0, "text": " so that the loss is minimized,"}, {"start": 7013.0, "end": 7015.0, "text": " in the gradient-based framework."}, {"start": 7015.0, "end": 7018.0, "text": " And we saw that both of them give the same result,"}, {"start": 7018.0, "end": 7021.0, "text": " and that's it."}, {"start": 7021.0, "end": 7023.0, "text": " Now, the second one of these,"}, {"start": 7023.0, "end": 7025.0, "text": " the gradient-based framework, is much more flexible."}, {"start": 7025.0, "end": 7028.0, "text": " And right now, our neural network is super simple."}, {"start": 7028.0, "end": 7030.0, "text": " We're taking a single previous character,"}, {"start": 7030.0, "end": 7033.0, "text": " and we're taking it through a single linear layer,"}, {"start": 7033.0, "end": 7034.0, "text": " to calculate the logits."}, {"start": 7034.0, "end": 7036.0, "text": " This is about to complexify."}, {"start": 7036.0, "end": 7037.0, "text": " So, in the follow-up videos,"}, {"start": 7037.0, "end": 7040.0, "text": " we're going to be taking more and more of these characters,"}, {"start": 7040.0, "end": 7043.0, "text": " and we're going to be feeding them into a neural net."}, {"start": 7043.0, "end": 7045.0, "text": " But this neural net will still output the exact same thing."}, {"start": 7045.0, "end": 7048.0, "text": " The neural net will output 
logits."}, {"start": 7048.0, "end": 7050.0, "text": " And these logits will still be normalized in the exact same way,"}, {"start": 7050.0, "end": 7052.0, "text": " and all the loss, and everything else,"}, {"start": 7052.0, "end": 7054.0, "text": " and the gradient-based framework,"}, {"start": 7054.0, "end": 7055.0, "text": " everything stays identical."}, {"start": 7055.0, "end": 7058.0, "text": " It's just that this neural net will now complexify"}, {"start": 7058.0, "end": 7060.0, "text": " all the way to transformers."}, {"start": 7060.0, "end": 7062.0, "text": " So, that's going to be pretty awesome,"}, {"start": 7062.0, "end": 7064.0, "text": " and I'm looking forward to it for now."}, {"start": 7064.0, "end": 7080.0, "text": " Bye."}]
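For reference, here is a minimal sketch that pulls together the training loop described in the segments above: the vectorized negative log likelihood, the small regularization term, zeroing the gradient, loss.backward, and the update with a learning rate of 50. The tiny xs/ys below are illustrative stand-ins for the notebook's full set of 228,000 bigrams, and the variable names mirror the lecture only loosely.

```python
import torch
import torch.nn.functional as F

# Illustrative stand-ins: xs/ys are bigram input/target indices (here just the five
# ".emma." bigrams), W is the 27x27 weight matrix of the single linear layer.
xs = torch.tensor([0, 5, 13, 13, 1])
ys = torch.tensor([5, 13, 13, 1, 0])
g = torch.Generator().manual_seed(2147483647)
W = torch.randn((27, 27), generator=g, requires_grad=True)

for k in range(100):
    # forward pass: one-hot -> logits (log-counts) -> softmax -> average NLL (+ regularization)
    xenc = F.one_hot(xs, num_classes=27).float()
    logits = xenc @ W
    counts = logits.exp()
    probs = counts / counts.sum(1, keepdim=True)
    loss = -probs[torch.arange(len(xs)), ys].log().mean() + 0.01 * (W**2).mean()

    # backward pass
    W.grad = None          # more efficient than zeroing the gradient
    loss.backward()

    # update: nudge W against the gradient (learning rate 50, as in the lecture)
    W.data += -50 * W.grad

print(loss.item())
```

Swapping in the full bigram dataset should drive this loss toward roughly 2.45-2.47, matching the counting-based model described above.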
Neural Networks: Zero to Hero
https://www.youtube.com/watch?v=P6sfmUTpUmc
Building makemore Part 3: Activations & Gradients, BatchNorm
We dive into some of the internals of MLPs with multiple layers and scrutinize the statistics of the forward pass activations, backward pass gradients, and some of the pitfalls when they are improperly scaled. We also look at the typical diagnostic tools and visualizations you'd want to use to understand the health of your deep network. We learn why training deep neural nets can be fragile and introduce the first modern innovation that made doing so much easier: Batch Normalization. Residual connections and the Adam optimizer remain notable todos for later video. Links: - makemore on github: https://github.com/karpathy/makemore - jupyter notebook I built in this video: https://github.com/karpathy/nn-zero-to-hero/blob/master/lectures/makemore/makemore_part3_bn.ipynb - collab notebook: https://colab.research.google.com/drive/1H5CSy-OnisagUgDUXhHwo1ng2pjKHYSN?usp=sharing - my website: https://karpathy.ai - my twitter: https://twitter.com/karpathy - Discord channel: https://discord.gg/Hp2m3kheJn Useful links: - "Kaiming init" paper: https://arxiv.org/abs/1502.01852 - BatchNorm paper: https://arxiv.org/abs/1502.03167 - Bengio et al. 2003 MLP language model paper (pdf): https://www.jmlr.org/papers/volume3/bengio03a/bengio03a.pdf Exercises: - E01: I did not get around to seeing what happens when you initialize all weights and biases to zero. Try this and train the neural net. You might think either that 1) the network trains just fine or 2) the network doesn't train at all, but actually it is 3) the network trains but only partially, and achieves a pretty bad final performance. Inspect the gradients and activations to figure out what is happening and why the network is only partially training, and what part is being trained exactly. - E02: BatchNorm, unlike other normalization layers like LayerNorm/GroupNorm etc. has the big advantage that after training, the batchnorm gamma/beta can be "folded into" the weights of the preceeding Linear layers, effectively erasing the need to forward it at test time. Set up a small 3-layer MLP with batchnorms, train the network, then "fold" the batchnorm gamma/beta into the preceeding Linear layer's W,b by creating a new W2, b2 and erasing the batch norm. Verify that this gives the same forward pass during inference. i.e. we see that the batchnorm is there just for stabilizing the training, and can be thrown out after training is done! pretty cool. Chapters: 00:00:00 intro 00:01:22 starter code 00:04:19 fixing the initial loss 00:12:59 fixing the saturated tanh 00:27:53 calculating the init scale: “Kaiming init” 00:40:40 batch normalization 01:03:07 batch normalization: summary 01:04:50 real example: resnet50 walkthrough 01:14:10 summary of the lecture 01:18:35 just kidding: part2: PyTorch-ifying the code 01:26:51 viz #1: forward pass activations statistics 01:30:54 viz #2: backward pass gradient statistics 01:32:07 the fully linear case of no non-linearities 01:36:15 viz #3: parameter activation and gradient statistics 01:39:55 viz #4: update:data ratio over time 01:46:04 bringing back batchnorm, looking at the visualizations 01:51:34 summary of the lecture for real this time
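For exercise E02 above, a possible sketch of what "folding" a trained BatchNorm into the preceding Linear layer could look like; every tensor here is a random stand-in rather than a trained value, and the shapes are assumptions.

```python
import torch

# Hypothetical Linear layer (W, b) followed by a "trained" BatchNorm with running
# statistics (bn_mean, bn_var) and learned scale/shift (gamma, beta).
fan_in, fan_out, eps = 30, 200, 1e-5
W, b = torch.randn(fan_in, fan_out), torch.randn(fan_out)
gamma, beta = torch.randn(fan_out), torch.randn(fan_out)
bn_mean, bn_var = torch.randn(fan_out), torch.rand(fan_out) + 0.5

# Fold the batchnorm into the linear layer (inference-time equivalence)
scale = gamma / torch.sqrt(bn_var + eps)
W2 = W * scale                     # scales each output column of W
b2 = (b - bn_mean) * scale + beta

x = torch.randn(32, fan_in)
y_bn   = gamma * (x @ W + b - bn_mean) / torch.sqrt(bn_var + eps) + beta
y_fold = x @ W2 + b2
print(torch.allclose(y_bn, y_fold, atol=1e-6))   # True, up to floating-point error
```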
Hi everyone. Today we are continuing our implementation of Makemore. Now in the last lecture we implemented the multilayer perceptron along the lines of Bengio et al. 2003 for character-level language modeling. So we followed this paper, took in a few characters in the past, and used an MLP to predict the next character in a sequence. So what we'd like to do now is we'd like to move on to more complex and larger neural networks, like recurrent neural networks, and their variations like the GRU, LSTM, and so on. Now before we do that though, we have to stick around the level of the multilayer perceptron for a bit longer. And I'd like to do this because I would like us to have a very good intuitive understanding of the activations in the neural net during training, and especially the gradients that are flowing backwards, and how they behave, and what they look like. This is going to be very important to understand the history of the development of these architectures, because we'll see that recurrent neural networks, while they are very expressive in that they are a universal approximator and can in principle implement all the algorithms, are not very easily optimizable with the first-order gradient-based techniques that we have available to us, and that we use all the time. And the key to understanding why they are not easily optimizable is to understand the activations and the gradients and how they behave during training. And we'll see that a lot of the variants since recurrent neural networks have tried to improve that situation. And so that's the path that we have to take, and let's get started. So the starting code for this lecture is largely the code from before, but I've cleaned it up a little bit. So you'll see that we are importing all the torch and matplotlib utilities. We're reading in the words just like before. These are eight example words. There's a total of 32,000 of them. Here's a vocabulary of all the lowercase letters and the special dot token. Here we are reading the dataset and processing it, and creating three splits, the train, dev, and the test split. Now the MLP, this is the identical, same MLP, except you see that I removed a bunch of magic numbers that we had here. And instead we have the dimensionality of the embedding space of the characters and the number of hidden units in the hidden layer. And so I've pulled them outside here so that we don't have to go and change all these magic numbers all the time. We have the same neural net with 11,000 parameters that we optimize now over 200,000 steps with a batch size of 32. And you'll see that I refactored the code here a little bit, but there are no functional changes. I just created a few extra variables, a few more comments, and I removed all the magic numbers. And otherwise it's the exact same thing. Then when we optimize, we saw that our loss looked something like this. We saw that the train and val loss were about 2.16 and so on. Here I refactored the code a little bit for the evaluation of arbitrary splits. So you pass in a string of which split you'd like to evaluate. And then here, depending on train, val, or test, I index in and I get the correct split. And then this is the forward pass of the network and evaluation of the loss and printing it. So just making it nicer. One thing that you'll notice here is I'm using a decorator, torch.no_grad, which you can also look up and read documentation of. 
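As a rough sketch of the decorated evaluation helper being referred to here; the shapes, the split names Xtr/Ytr/Xdev/Ydev, and the random parameters below are placeholders, not the notebook's actual tensors.

```python
import torch
import torch.nn.functional as F

g = torch.Generator().manual_seed(42)
C  = torch.randn((27, 10), generator=g)            # character embeddings (assumed 10-dim)
W1 = torch.randn((30, 200), generator=g); b1 = torch.randn(200, generator=g)
W2 = torch.randn((200, 27), generator=g); b2 = torch.randn(27, generator=g)
Xtr  = torch.randint(0, 27, (1000, 3)); Ytr  = torch.randint(0, 27, (1000,))
Xdev = torch.randint(0, 27, (100, 3));  Ydev = torch.randint(0, 27, (100,))

@torch.no_grad()  # torch builds no graph here; .backward() is never called on this computation
def split_loss(split):
    x, y = {'train': (Xtr, Ytr), 'val': (Xdev, Ydev)}[split]
    emb = C[x]                                      # (N, 3, 10)
    h = torch.tanh(emb.view(emb.shape[0], -1) @ W1 + b1)
    logits = h @ W2 + b2
    print(split, F.cross_entropy(logits, y).item())

split_loss('train'); split_loss('val')
```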
Basically what this decorator does on top of a function is that whatever happens in this function is synced by torched to never require an ingredients. So it will not do any of the bookkeeping that it does to keep track of all the gradients in anticipation of an eventual backward pass. It's almost as if all the tensors that get created here have a requires grad of false. And so it just makes everything much more efficient because you're telling torched that I will not call dot backward on any of this computation and you don't need to maintain the graph under the hood. So that's what this does. And you can also use a context manager with torched.no grad and you can let those out. Then here we have the sampling from a model just as before. Just a four pass of a neural nut getting the distribution sampling from it adjusting the context window and repeating until we get the special and token. And we see that we are starting to get much nicer looking words simple from the model. It's still not amazing and they're still not fully name like. But it's much better than what we had with the bi-gram model. So that's our starting point. Now the first thing I would like to scrutinize is the initialization. I can tell that our network is very improperly configured at initialization. And there's multiple things wrong with it. But let's just start with the first one. Look here on the zero iteration, the very first iteration, we are recording a loss of 27 and this rapidly comes down to roughly one or two or so. So I can tell that the initialization is all messed up because this is way too high. In training of neural nuts, it is almost always the case that you will have a rough idea for what loss to expect at initialization. And that just depends on the loss function and the problem setup. In this case, I do not expect 27. I expect a much lower number and we can calculate it together. Basically, at initialization, or we'd like is that there's 27 characters that could come next for anyone training example. At initialization, we have no reason to believe any characters to be much more likely than others. And so we'd expect that the probability distribution that comes out initially is a uniform distribution, assigning about equal probability to all the 27 characters. So basically what we'd like is the probability for any character would be roughly one over 27. That is the probability we should record. And then the loss is the negative log probability. So let's wrap this in a tensor. And then then we can take the log of it. And then the negative log probability is the loss we would expect, which is 3.29, much, much lower than 27. And so what's happening right now is that at initialization, the neural net is creating probability distributions that are all messed up. Some characters are very confident and some characters are very not confident. And then basically what's happening is that the network is very confidently wrong. And that makes that's what makes it record very high loss. So here's a smaller four dimensional example of the issue. Let's say we only have four characters and then we have logits that come out of the neural net and they are very very close to zero. Then when we take the softmax of all zeroes, we get probabilities there are a diffuse distribution. So sums to one and is exactly uniform. 
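To make the arithmetic here concrete, a tiny sketch of the expected loss under uniform predictions and of the four-character example with all-zero logits (the label 2 used below is just for illustration):

```python
import torch
import torch.nn.functional as F

# Expected loss at initialization for 27 equally likely characters
print(-torch.log(torch.tensor(1/27.0)))       # ~3.2958

# Four-character example: all-zero logits give an exactly uniform softmax
logits = torch.zeros(4)
probs = F.softmax(logits, dim=0)
print(probs)                                   # tensor([0.25, 0.25, 0.25, 0.25])
print(-probs[2].log())                         # ~1.3863, the same for any label
```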
And then in this case, if the label is say two, it doesn't actually matter if this if the label is two or three or one or zero because it's a uniform distribution, we're recording the exact same loss in this case 1.38. So this is the loss we would expect for a four dimensional example. And I can see of course that as we start to manipulate these logits, we're going to be changing the loss here. So it could be that we lock out and by chance this could be a very high number like five or something like that. Then in that case, we'll record a very low loss because we're signing the correct probability at initialization by chance to the correct label. Much more likely, it is that some other dimension will have a high logit and then what will happen is we start to record much higher loss. And what can come what can happen is basically the logits come out like something like this, you know, and they take on extreme values and we record really high loss. For example, if we have torched out random of four, so these are uniform, sorry, these are normally distributed numbers, forum. And here we can also print the logits, probabilities that come out of it and the loss. And so because these logits are near zero, for the most part, the loss that comes out is okay. But suppose this is like time 10 now. You see how because these are more extreme values, it's very unlikely that you're going to be guessing the correct bucket and then you're confidently wrong and recording very high loss. If your logits are coming up even more extreme, you might get extremely same losses like infinity even at initialization. So basically this is not good and we want the logits to be roughly zero when the work is initialized. In fact, the logits can don't have to be just zero. They just have to be equal. So for example, if all the logits are one, then because of the normalization inside the softmax, this will actually come out okay. But by symmetry, we don't want it to be any arbitrary positive or negative number. We just want it to be all zeros and record the loss that we expect at initialization. So let's not completely see where things go wrong in our example. Here we have the initialization. Let me initialize the neural ad. And here let me break after the very first iteration. So we only see the initial loss, which is 27. So that's way too high. And intuitively now we can expect the variables involved. And we see that the logits here, if we just print some of these, if we just print the first row, we see that the logits take on quite extreme values. And that's what's creating the fake confidence and incorrect answers. And makes the loss get very, very high. So these logits should be much, much closer to zero. So now let's think through how we can achieve logits coming out of this neural net to be more closer to zero. You see here that the logits are calculated as they hit in states multiplied by W2 plus B2. So first of all, currently we're initializing B2 as random values of the right size. But because we want roughly zero, we don't actually want to be adding a bias of random numbers. So in fact, I'm going to add a times a zero here to make sure that B2 is just basically zero at initialization. And second, this is H multiplied by W2. So if we want logits to be very, very small, then we would be multiplying W2 and making that small. So for example, if we scale down W2 by 0.1 all the elements, then if I do again just a very first iteration, you see that we are getting much closer to what we expect. 
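And a small sketch of the "confidently wrong" case described a bit earlier, where random logits scaled up by 10 usually assign very little probability to the correct label:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)                 # arbitrary seed, just for reproducibility
label = 2                            # the illustrative label from above
for scale in (1.0, 10.0):
    logits = torch.randn(4) * scale
    loss = F.cross_entropy(logits.unsqueeze(0), torch.tensor([label]))
    print(scale, loss.item())        # the scaled-up logits typically give a much higher loss
```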
So roughly what we want is about 3.29, this is 4.2. I can make this maybe even smaller, 3.32. Okay, so we're getting closer and closer. Now, you're probably wondering, can we just set this to zero? Then we get, of course, exactly what we're looking for at initialization. And the reason I don't usually do this is because I'm very nervous and I'll show you in a second why you don't want to be setting W's or weights of a neural net exactly to zero. You usually want to be small numbers instead of exactly zero. For this output layer in this specific case, I think it would be fine, but I'll show you in a second where things go wrong very quickly if you do that. So let's just go with 0.01. In that case, our loss is close enough, but has some entropy. It's not exactly zero. It's got some low entropy and that's used for symmetry braking as we'll see in a second. Logits are now coming out much closer to zero and everything is well and good. So if I just erase these and I now take away the brake statement, we can run the optimization with this new initialization. And let's just see what losses we record. Okay, so I'll let it run and you see that we started off good and then we came down a bit. The plot of the loss now doesn't have this hockey shape appearance because basically what's happening in the hockey stick, the very first few iterations of the loss, what's happening during the optimization is the optimization is just squashing down the logits and then it's rearranging the logits. So basically we took away this easy part of the loss function where just the weights were just being shrunk down. And so therefore we don't get these easy gains in the beginning and we're just getting some of the hard gains of training the actual neural nut. And so there's no hockey stick appearance. So good things are happening in that both number one loss and initialization is what we expect. And the loss doesn't look like a hockey stick. And this is true for any neural nut you might train and something to look at for. And second, the loss that came out is actually quite a bit improved. Unfortunately, I erased what we had here before. I believe this was 2.12 and this was 2.16. So we get a slightly improved result. And the reason for that is because we're spending more cycles, more time optimizing the neural nut actually instead of just spending the first several thousand iterations probably just squashing down the weights because they are so way too high in the beginning and the initialization. So something to look out for and that's number one. Now let's look at the second problem. Let me re-initialize our neural nut and let me reintroduce the break statement. So we have a reasonable initial loss. So even though everything is looking good on the level of the loss and we get something that we expect, there's still a deeper problem lurking inside this neural nut and its initialization. So the logits are now okay. The problem now is with the values of H, the activations of the hidden states. Now if we just visualize this vector, sorry, this tensor H, it's kind of hard to see but the problem here roughly speaking is you see how many of the elements are one or negative one. Now recall that tortsh.10h, the 10h function is a squashing function. It takes arbitrary numbers and it squashes them into a range of negative one and one and it does so smoothly. So let's look at the histogram of H to get a better idea of the distribution of the values inside this tensor. We can do this first. 
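Summing up the output-layer initialization fix described above as a sketch; the 0.01 and 0 factors are the ones settled on in the lecture, while the other names and shapes are placeholders.

```python
import torch

g = torch.Generator().manual_seed(2147483647)
n_hidden, vocab_size = 200, 27                                  # assumed sizes
W2 = torch.randn((n_hidden, vocab_size), generator=g) * 0.01    # squash the initial logits
b2 = torch.randn(vocab_size, generator=g) * 0                   # effectively zero bias

h = torch.randn(32, n_hidden)                                   # stand-in hidden activations
logits = h @ W2 + b2
print(logits.abs().max())        # small => initial loss close to -log(1/27) ~ 3.29
```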
Well we can see that h is 32 examples and 200 activations in each example. We can call .view(-1) on it to stretch it out into one large vector, and we can then call .tolist() to convert this into one large Python list of floats, and then we can pass this into plt.hist for a histogram, and we say we want 50 bins, and a semicolon to suppress a bunch of output we don't want. So we see this histogram and we see that most of the values by far take on a value of negative one and one. So this tanh is very, very active, and we can also look at basically why that is. We can look at the preactivations that feed into the tanh and we can see that the distribution of the preactivations is very, very broad. These take on numbers between negative 15 and 15, and that's why in the torch.tanh everything is being squashed and capped to be in the range of negative one and one, and lots of numbers here take on very extreme values. Now if you are new to neural networks you might not actually see this as an issue, but if you're well versed in the dark arts of backpropagation and have an intuitive sense of how these gradients flow through a neural net, you are looking at your distribution of tanh activations here and you are sweating. So let me show you why. We have to keep in mind that during backpropagation, just like we saw in micrograd, we are doing a backward pass starting at the loss and flowing through the network backwards. In particular we're going to backpropagate through this torch.tanh, and this layer here is made up of 200 neurons for each one of these examples, and it implements an element-wise tanh. So let's look at what happens in tanh in the backward pass. We can actually go back to our previous micrograd code in the very first lecture and see how we implemented the tanh. We saw that the input here was x and then we calculate t, which is the tanh of x. So that's t, and t is between negative one and one. It's the output of the tanh, and then in the backward pass, how do we backpropagate through a tanh? We take out.grad and then we multiply it, this is the chain rule, with the local gradient, which took the form of 1 minus t squared. So what happens if the outputs of your tanh are very close to negative one or one? If you plug in t equals one here, you're going to get a zero multiplying out.grad. No matter what out.grad is, we are killing the gradient and we're stopping, effectively, the backpropagation through this tanh unit. Similarly, when t is negative one, this will again become zero and out.grad just stops, and intuitively this makes sense, because this is a tanh neuron, and what's happening is if its output is very close to one, then we are in the tail of this tanh. So changing basically the input is not going to impact the output of the tanh too much, because it's in the flat region of the tanh, and so therefore there's no impact on the loss. And so indeed the weights and the biases feeding into this tanh neuron do not impact the loss, because the output of this tanh unit is in the flat region of the tanh and there's no influence. We can be changing them whatever we want, however we want, and the loss is not impacted. That's another way to justify that indeed the gradient would be basically zero; it vanishes. Indeed, when t equals zero we get one times out.grad, so when the tanh takes on exactly the value of zero, then out.grad is just passed through. 
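A small sketch of the two diagnostics discussed here, using a random stand-in for the overly broad preactivations rather than the notebook's real tensors:

```python
import torch
import matplotlib.pyplot as plt

hpreact = torch.randn(32, 200) * 5         # stand-in for the overly broad preactivations
h = torch.tanh(hpreact)

plt.hist(h.view(-1).tolist(), 50)          # most of the mass piles up near -1 and +1
plt.show()

local_grad = 1 - h**2                      # the chain-rule factor in tanh's backward pass
print((local_grad < 0.01).float().mean())  # a large fraction of near-zero factors
```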
So basically what this is doing right is if t is equal to zero then this the 10h unit is sort of inactive and gradient just passes through but the more you are in the flat tails the more degrading is squashed. So in fact you'll see that the gradient flowing through 10h can only ever decrease in the amount that it decreases is proportional through a square here depending on how far you are in the flat tails of this 10h. And so that's kind of what's happening here and through this the concern here is that if all of these outputs h are in the flat regions of negative one to one then the gradients that are flowing through the network will just get destroyed at this layer. Now there is some redeeming quality here and that we can actually get a sense of the problem here as follows. I brought some code here and basically what we want to do here is we want to take a look at h, take the absolute value and see how often it is in the in a flat region. So say grading point 99 and what you get is the following and this is a Boolean tensor. So in the Boolean tensor you get a white if this is true and a black if this is false. And so basically what we have here is the 32 examples and a 200 hidden neurons and we see that a lot of this is white and what that's telling us is that all these 10h neurons were very very active and they're in the flat tail and so in all these cases the backward gradient would get destroyed. Now we would be in a lot of trouble if for for any one of these 200 neurons if it was the case that the entire column is white because in that case we have what's called a dead neuron and this could be a tenet neuron where the initialization of the weights in the biases could be such that no single example ever activates this 10h in the sort of active part of the 10h. If all the examples land in the tail then this neuron will never learn it is a dead neuron. And so just scrutinizing this and looking for columns of completely white we see that this is not the case. So I don't see a single neuron that is all of white and so therefore it is the case that for every one of these 10h neurons we do have some examples that activate them in the active part of the 10h and so some gradients will flow through this neuron will learn and neuron will change and it will move and it will do something. But you can sometimes get yourself in cases where you have dead neurons and weight this manifests is that for 10h neuron this would be when no matter what inputs you plug in from your data set this 10h neuron always fires completely one or completely negative one and then it will just not learn because all the gradients will be just zero that. This is true not just for 10h but for a lot of other non-linearities that people use in neural networks. So we certainly use 10h a lot but sigmoid will have the exact same issue because it is a squashing neuron and so the same will be true for sigmoid but you know basically the same will actually apply to sigmoid. The same will also apply to a relu so relu has a completely flat region here below zero. So if you have a relu neuron then it is a pass-through if it is positive and if the pre-activation is negative it will just shut it off. Since the region here is completely flat then during back propagation this would be exactly zeroing out the gradient. Like all of the gradient would be set exactly to zero instead of just like a very very small number depending on how positive or negative t is. 
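And a sketch of the saturation plot described above; a fully white column (a neuron saturated for every example in the batch) would indicate a dead tanh neuron. The activations here are again random stand-ins.

```python
import torch
import matplotlib.pyplot as plt

h = torch.tanh(torch.randn(32, 200) * 5)   # stand-in activations
saturated = h.abs() > 0.99                 # (32, 200) boolean: white where saturated
plt.figure(figsize=(20, 10))
plt.imshow(saturated, cmap='gray', interpolation='nearest')
plt.show()

dead = saturated.all(dim=0)                # a neuron saturated on every example in the batch
print('dead tanh neurons in this batch:', dead.sum().item())
```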
And so you can get, for example, a dead ReLU neuron. Basically, if a neuron with a ReLU nonlinearity never activates — for any example you plug in from the dataset it never turns on, it is always in the flat region — then this ReLU neuron is a dead neuron. Its weights and bias will never learn; they will never get a gradient, because the neuron never activated. This can happen at initialization, because the weights and biases just happen to make some neurons forever dead by chance, but it can also happen during optimization. If you have too high a learning rate, for example, sometimes neurons get too large a gradient and get knocked off the data manifold, and from then on no example ever activates them, so they remain dead forever — kind of like permanent brain damage in the mind of the network. Sometimes what happens is that your learning rate is very high, you train a network with ReLU neurons and get some final loss, but if you then go through the entire training set and forward all your examples, you can find neurons that never activate: dead neurons that will never turn on. Usually these ReLU neurons were changing and moving during training, and then because of a high gradient somewhere they got knocked off, and nothing ever activates them again. Other nonlinearities, like leaky ReLU, do not suffer from this issue as much because they don't have flat tails — you almost always get gradients. ELU, which is also fairly frequently used, might suffer from this issue because it has flat parts. So that's just something to be aware of and concerned about. In our case we have way too many activations h that take on extreme values, but because there's no column of white I think we will be okay, and indeed the network optimizes and gives us a pretty decent loss — it's just not optimal, and this is not something you want, especially at initialization. So basically what's happening is that this hpreact that flows into the tanh is too extreme, too large; it creates a distribution that is too saturated on both sides of the tanh, and that's not something you want because it means there's less training for these neurons — they update less frequently. So how do we fix this? Well, hpreact is embcat (which comes from C) multiplied by W1, plus b1. These inputs are roughly unit Gaussian, but hpreact ends up too far off from zero, and that's causing the issue. We want this pre-activation to be closer to zero, very similar to what we did with the logits. Now, it's okay to set the biases to a very small number; we can multiply by something like 0.01 to get a little bit of entropy — I sometimes like to do that just so there's a little bit of variation and diversity in the initialization of these tanh neurons, and I find in practice that it can help optimization a little bit. And the weights we can also just squash: let's multiply everything by 0.1.
Let's rerun the first batch and take a look. First, here: you see that because we multiplied W1 by 0.1 we have a much better histogram, and that's because the pre-activations are now between about negative 1.5 and 1.5, so we expect much, much less white. Okay — there's no white at all, because no neurons saturated above 0.99 in either direction. So this is actually a pretty decent place to be. Maybe we can go up a little bit — sorry, am I changing W1 here? — maybe we can go to 0.2. Okay, something like this is a nice distribution, so maybe this is what our initialization should be. Let me erase these and, with this initialization, run the full optimization without the break and see what we get. Okay, the optimization finished, I reran the loss, and this is the result; as a reminder, I put down all the losses that we saw previously in this lecture. We actually do get an improvement: we started with a validation loss of 2.17, by fixing the softmax being confidently wrong we came down to 2.13, and by fixing the tanh layer being way too saturated we came down to 2.10. The reason this is happening, of course, is that our initialization is better, so we spend more time doing productive training, instead of unproductive training where the gradients are set to zero and we have to learn very simple things first — like fixing the overconfidence of the softmax at the beginning, spending cycles just squashing down the weight matrix. This illustrates initialization and its impact on performance, just by being aware of the internals of these neural nets: their activations and their gradients. Now, we are working with a very small network — just a one-hidden-layer multi-layer perceptron — so because the network is so shallow, the optimization problem is quite easy and very forgiving: even though our initialization was terrible, the network still learned eventually, it just got a slightly worse result. This is not the case in general. Once we start working with much deeper networks that have, say, 50 layers, these problems stack up, and you can actually get to a place where the network is basically not training at all if your initialization is bad enough; the deeper and more complex your network is, the less forgiving it is of these errors. So it is definitely something to be aware of, to scrutinize, to plot, and to be careful with. Okay, so that's great that that worked for us, but what we have now are all these magic numbers, like 0.2: where do I come up with these, and how am I supposed to set them if I have a large neural net with lots and lots of layers? Obviously no one does this by hand; there are some relatively principled ways of setting these scales that I would like to introduce to you now. So let me paste some code here that I prepared just to motivate the discussion of this.
What I'm doing here is: we have some random input x drawn from a Gaussian, 1000 examples that are 10-dimensional, and we have a weight layer also initialized from a Gaussian, just like we did here; the neurons in this hidden layer look at 10 inputs and there are 200 of them. Then we have, just like here, the multiplication x times w to get the pre-activations of these neurons. The analysis asks: suppose the inputs are unit Gaussian and the weights are unit Gaussian; if I compute x times w (forgetting the bias and the nonlinearity for now), what is the mean and standard deviation of the result? In the beginning, the input is just a standard Gaussian, mean zero and standard deviation one (the standard deviation being a measure of the spread of the Gaussian). But once we multiply and look at the histogram of y, we see that the mean of course stays about zero, because this is a symmetric operation, but the standard deviation has expanded to three. The input standard deviation was one, but it has grown to three, so the Gaussian is expanding as it passes through the layer, and we don't want that: we want most of the neural net to have relatively similar activations — roughly unit Gaussian throughout. So the question is how to scale these w's to preserve this distribution. Intuitively, if I multiply the elements of w by a large number, say 5, then this Gaussian grows in standard deviation — now we're at 15 — so the numbers in the output y take on more and more extreme values; but if we scale it down, say by 0.2, then conversely this Gaussian shrinks, and the standard deviation is 0.6. So what do I multiply by to exactly preserve a standard deviation of one? It turns out that the correct answer, mathematically, when you work out the variance of this multiplication, is that you are supposed to divide by the square root of the fan-in. The fan-in is the number of input components, here 10, so we divide by the square root of 10 — one way to do that is to raise it to the power 0.5, which is the same as a square root. And when you divide by the square root of 10, the output Gaussian has exactly a standard deviation of one.
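Here is a compact sketch of that scaling experiment, using the same sizes as in the discussion (1000 examples, 10 inputs, 200 hidden neurons):

```python
import torch

# Sketch of the scaling experiment: 1000 examples, 10-dimensional inputs,
# 200 hidden neurons, everything drawn from a standard Gaussian.
x = torch.randn(1000, 10)
w = torch.randn(10, 200)
y = x @ w
print(x.std().item(), y.std().item())   # y's std has grown to roughly sqrt(10) ~ 3.16

# Dividing the weights by sqrt(fan_in) preserves a unit standard deviation.
w = torch.randn(10, 200) / 10**0.5
y = x @ w
print(y.std().item())                   # roughly 1.0
```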
Now, unsurprisingly, a number of papers have looked into how to best initialize neural networks. In the case of multi-layer perceptrons we can have fairly deep networks with nonlinearities in between, and we want to make sure the activations are well behaved — they don't expand to infinity or shrink all the way to zero — so the question is how to initialize the weights so that the activations take on reasonable values throughout the network. One paper that has studied this in quite a bit of detail, and that is often referenced, is this paper by Kaiming He et al. called "Delving Deep into Rectifiers". In their case they actually study convolutional neural networks, and they study especially the ReLU nonlinearity and the P-ReLU nonlinearity instead of a tanh nonlinearity, but the analysis is very similar. The ReLU nonlinearity they care about is a squashing function where all the negative numbers are simply clamped to zero — positive numbers pass through, everything negative is set to zero — and because you are basically throwing away half of the distribution, they find in their analysis of the forward activations that you have to compensate for that with a gain. They find that to initialize their weights they should use a zero-mean Gaussian whose standard deviation is the square root of 2 over the fan-in. What we have here is a Gaussian scaled by the square root of 1 over the fan-in (their n_l is the fan-in), because of the division; they have to add this factor of 2 because the ReLU discards half of the distribution and clamps it at zero, and that's where the additional factor comes from. In addition, this paper studies not just the behavior of the activations in the forward pass but also the backpropagation: we have to make sure the gradients are well behaved too, because ultimately they update our parameters. What they find — through a lot of analysis that I invite you to read, though it's not exactly approachable — is that if you properly initialize the forward pass, the backward pass is also approximately well behaved, up to a constant factor that has to do with the ratio of the number of hidden neurons in early and late layers, and empirically they find this is not a choice that matters too much. Now, this Kaiming initialization is also implemented in PyTorch: if you go to the torch.nn.init documentation you'll find kaiming_normal_, and in my opinion this is probably the most common way of initializing neural networks today. It takes a few keyword arguments. One is the mode: would you like to normalize the activations (fan_in) or the gradients (fan_out) to be Gaussian with zero mean and unit standard deviation? Because the paper finds this doesn't matter too much, most people just leave the default, which is fan_in. Second, you pass in the nonlinearity you are using, because depending on the nonlinearity we need a slightly different gain. If your nonlinearity is just linear, so there's no nonlinearity, then the gain is one and we have the same formula we've got here; but if the nonlinearity is something else we get a slightly different gain. If we come up to the top, we see that for ReLU this gain is the square root of two — it's a square root because in the paper the two sits inside the square root. For linear or identity we just get a gain of one, and for tanh, which is what we're using here, the advised gain is 5/3. Intuitively, why do we need a gain on top of the initialization? Because tanh, just like ReLU, is a contractive transformation, meaning that you take the output distribution
from this matrix multiplication and then you squash it in some way: ReLU squashes it by clamping everything below zero to zero, and tanh also squashes it because it is a contractive operation — it takes the tails and squeezes them in. To fight the squeezing, we need to boost the weights a little bit so that we re-normalize everything back to unit standard deviation; that's why there's a bit of a gain. Now, I'm skipping through this section a little quickly, and I'm doing that intentionally. About seven years ago, when this paper was written, you had to be extremely careful with the activations and gradients, their ranges and histograms; you had to be careful with the precise setting of gains and the scrutinizing of the nonlinearities used, and everything was very finicky and fragile — it had to be very properly arranged for the network to train, especially if it was very deep. But there are a number of modern innovations that have made everything significantly more stable and better behaved, so it has become less important to initialize these networks exactly right. Some of those innovations are: residual connections, which we will cover in the future; the use of a number of normalization layers, like batch normalization, layer normalization, group normalization, which we're going to go into as well; and, number three, much better optimizers — not just stochastic gradient descent, the simple optimizer we're basically using here, but slightly more complex optimizers like RMSprop and especially Adam. All of these modern innovations make it less important to precisely calibrate the initialization of the neural net. All that being said, what should we do in practice? When I initialize these neural nets, I basically just normalize my weights by the square root of the fan-in — roughly what we did here is what I do. If we want to be exactly accurate and go by kaiming_normal_, we would set the standard deviation to be the gain over the square root of the fan-in, and here is how we'd proceed. If I call torch.randn and create, say, a thousand numbers, their standard deviation is of course about one — that's the amount of spread of a zero-mean, unit-standard-deviation Gaussian. When you take these and multiply by, say, 0.2, that scales down the Gaussian and makes its standard deviation 0.2; so the number you multiply by ends up being the standard deviation of the Gaussian, and here, when we sample W1, it is a standard-deviation-0.2 Gaussian. But we want the standard deviation to be gain over the square root of the fan mode, which is fan_in: in other words, we want to multiply by the gain, which for tanh is 5/3, and divide by the square root of the fan-in. In that toy example the fan-in was 10, but I just noticed that for W1 the fan-in is actually n_embd times block_size, which, as you will recall, is 30 — each character is 10-dimensional, but we have three of them and we concatenate them. So the fan-in here was actually 30, not 10, as in the sketch below.
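As a sketch of this semi-principled initialization, assuming the lecture's sizes (embedding dimension 10, block size 3, so a fan-in of 30, and 200 hidden neurons):

```python
import torch

# Sketch of the gain / sqrt(fan_in) init described above, with the lecture's sizes:
# n_embd = 10, block_size = 3 (so fan_in = 30) and 200 hidden neurons.
fan_in = 10 * 3
gain = 5 / 3                                        # recommended gain for tanh
W1 = torch.randn(fan_in, 200) * gain / fan_in**0.5  # std = gain / sqrt(fan_in)

# The built-in initializer computes the same scale; note that it expects the
# weight stored PyTorch-style as (fan_out, fan_in).
W1_alt = torch.empty(200, fan_in)
torch.nn.init.kaiming_normal_(W1_alt, mode='fan_in', nonlinearity='tanh')
print(W1.std().item(), W1_alt.std().item())         # both roughly 0.30
```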
So I should probably have used 30 here: we want the square root of 30 in the denominator, and the standard deviation we want turns out to be about 0.3, whereas just by fiddling with it and looking at the distribution we came up with 0.2. So instead, what we want to do is set the standard deviation to 5/3 — our gain — divided by the square root of 30 (the brackets here aren't strictly necessary, but I'll put them in for clarity). This is basically the Kaiming init in our case, for a tanh nonlinearity, and this is how we would initialize the neural net: multiplying by about 0.3 instead of 0.2. So we can initialize this way, train the net, and see what we get. Okay, I trained the neural net and we end up in roughly the same spot: looking at the validation loss we now get 2.10, and previously we also had 2.10 — there's a tiny difference, but that's just the randomness of the process, I suspect. The big deal, of course, is that we get to the same spot but did not have to introduce any magic numbers obtained by looking at histograms and guess-and-check: we have something semi-principled that will scale to much bigger networks, something we can use as a guide. Now, I mentioned that the precise setting of these initializations is not as important today, due to some modern innovations, and I think now is a pretty good time to introduce one of them: batch normalization. Batch normalization came out in 2015 from a team at Google, and it was an extremely impactful paper because it made it possible to train very deep neural nets quite reliably — it basically just worked. Here's what batch normalization does and how it's implemented. We have these hidden pre-activation states, hpreact, and we were just talking about how we don't want them to be way too small, because then the tanh isn't doing anything, but we also don't want them to be too large, because then the tanh is saturated. In fact, we want them to be roughly Gaussian — zero mean and unit (one) standard deviation — at least at initialization. The insight from the batch normalization paper is: you have these hidden states and you'd like them to be roughly Gaussian — then why not take the hidden states and just normalize them to be Gaussian? It sounds kind of crazy, but you can just do that, because standardizing hidden states so that they're unit Gaussian is a perfectly differentiable operation, as we'll soon see. That was the big insight in this paper, and when I first read it my mind was blown: if you'd like unit Gaussian states in your network, at least at initialization, you can just normalize them to be unit Gaussian. So let's see how that works. We scroll to our pre-activations, just before they enter the tanh. The idea, again, is that we're trying to make these roughly Gaussian, because if they are way too small the tanh is kind of inactive, and if they are very large the tanh is way too saturated and gradients don't flow; so we'd like this to be roughly Gaussian. The insight of batch normalization is that we can just standardize these activations so they are exactly Gaussian.
Here, hpreact has a shape of 32 by 200: 32 examples by 200 neurons in the hidden layer. So basically we can take hpreact and calculate the mean across the zeroth dimension, with keepdim set to True so that we can easily broadcast it; the shape of this is 1 by 200 — in other words, we are taking the mean over all the elements in the batch. Similarly, we can calculate the standard deviation of these activations, which will also be 1 by 200. In the paper they have this prescription: we calculate the mean, which is just the average value of each neuron's activation, and then the variance, which is the measure of spread we've been using — the squared distance of every value from the mean, averaged; if you want the standard deviation you take the square root of the variance. These are the two quantities we're calculating, and now we normalize, or standardize, these values by subtracting the mean and dividing by the standard deviation: we take hpreact, subtract the mean, and divide by the standard deviation, which is exactly what these two expressions compute. (Oops — sorry: in the paper this is the mean and this is the variance; you see how the sigma is usually the standard deviation, so this sigma squared, the variance, is the square of the standard deviation.) This is how you standardize these values, and what it does is make every single neuron's firing rate exactly unit Gaussian over these 32 examples, at least for this batch — that's why it's called batch normalization: we are normalizing over the batch. In principle we could train this as is: calculating the mean and standard deviation is just a mathematical formula, perfectly differentiable, so we can just train it. The problem is that you actually won't get a very good result with this, and the reason is that we want these to be roughly Gaussian only at initialization — we don't want them to be forced to be Gaussian always. We'd like to allow the neural net to move this distribution around: to potentially make it more diffuse or more sharp, to make some tanh neurons more trigger-happy or less trigger-happy; we want backpropagation to tell us how the distribution should move around. So, in addition to standardizing the activations, we have to introduce the additional component the paper describes as the scale and shift: we take the normalized inputs and additionally scale them by some gain and offset them by some bias to get the final output of this layer. What that amounts to is the following: we allow a batch normalization gain, bngain, initialized to just ones, with shape 1 by n_hidden, and a bnbias, which will be torch.zeros, also of shape 1 by n_hidden; then bngain multiplies the normalized values and bnbias offsets them. Because this is initialized to one and that to zero, at initialization each neuron's firing values in this batch will be exactly unit Gaussian and will have nice numbers — roughly as in the sketch below.
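Here is a small sketch of that batch-norm step, with assumed shapes matching the lecture (a batch of 32 examples and 200 hidden neurons):

```python
import torch

# Sketch of the batch-normalization step described above.
hpreact = torch.randn(32, 200) * 5       # stand-in for overly broad pre-activations

bngain = torch.ones((1, 200))            # scale, trained with backprop
bnbias = torch.zeros((1, 200))           # shift, trained with backprop

bnmeani = hpreact.mean(0, keepdim=True)  # (1, 200): mean over the batch
bnstdi = hpreact.std(0, keepdim=True)    # (1, 200): std over the batch
hpreact = bngain * (hpreact - bnmeani) / bnstdi + bnbias

# Each neuron (column) is now unit Gaussian over this batch at initialization.
print(hpreact.mean(0).abs().max().item(), hpreact.std(0).mean().item())
```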
No matter what distribution hpreact comes in with, coming out it will be unit Gaussian for each neuron, and that's roughly what we want, at least at initialization. Then, during optimization, we'll be able to backpropagate into bngain and bnbias and change them, so the network is given the full ability to do whatever it wants with this distribution internally; we just have to make sure we include these in the parameters of the neural net, because they will be trained with backpropagation. So let's initialize this, and then we should be able to train. We're also going to copy this line — the batch normalization layer, a single line of code — and swing down here to do the exact same thing at test time: just like at train time, we normalize and then scale and shift, and that gives us our training and validation loss. We'll see in a second that we're going to change this a little bit, but for now I'll keep it this way, and I'm just going to wait for this to converge. Okay, so I let the neural net converge, and when we scroll down we see that our validation loss here is roughly 2.10, which I wrote down here, and this is actually comparable to some of the results we've achieved previously. Now, I'm not actually expecting an improvement in this case, because we are dealing with a very simple neural net that has just a single hidden layer; in this very simple case we were able to actually calculate what the scale of W should be to make these pre-activations roughly Gaussian already, so batch normalization isn't doing much here. But you might imagine that once you have a much deeper neural net with lots of different types of operations — and also, for example, residual connections, which we'll cover — it becomes basically very, very difficult to tune the scales of your weight matrices such that all activations throughout the net are roughly Gaussian; that quickly becomes intractable. Compared to that, it's much easier to sprinkle batch normalization layers throughout the neural net. In particular, it's customary to take every linear layer, like this one — a multiplication by a weight matrix plus a bias — or, for example, a convolution, which we'll cover later and which also performs a multiplication with a weight matrix but in a more spatially structured way, and append a batch normalization layer right after it to control the scale of the activations at every point in the neural net. So we'd be adding these batch norm layers throughout the net, and this controls the scale of the activations throughout; it doesn't require us to do perfect mathematics and care about the activation distributions for all the different types of Lego building blocks you might want to introduce into your neural net, and it significantly stabilizes training. That's why these layers are quite popular. Now, the stability offered by batch normalization actually comes at a terrible cost, and that cost is that, if you think about what's happening here, something terribly strange and unnatural is going on. It used to be that we had a single example feeding into a neural net and we then calculated its
activations and its logits, and this is a deterministic process, so you arrive at some logits for this example. Then, for efficiency of training, we started to use batches of examples, but those batches were processed independently — it was just an efficiency thing. Now, suddenly, with batch normalization, because of the normalization over the batch, we are coupling these examples mathematically, in both the forward and backward pass of the neural net. So the hidden state activations hpreact, and your logits for any one input example, are not just a function of that example and its input; they are also a function of all the other examples that happen to come along for the ride in that batch, and those examples are sampled randomly. So what happens is that, for any one input example, hpreact — and therefore h — is going to change slightly depending on what other examples are in the batch; h is going to jitter if you imagine sampling different batches, because the statistics of the mean and standard deviation are impacted, and so you get a jitter for h and a jitter for the logits. You'd think this would be a bug or something undesirable, but in a very strange way this actually turns out to be good for neural network training, as a side effect. The reason is that you can think of it as a kind of regularizer: you have your input, you get your h, and depending on the other examples this jitters a bit, which effectively pads out any one input example and introduces a little bit of entropy. Because of this padding out, it acts kind of like a form of data augmentation, which we'll cover in the future: it's like augmenting the input a little and jittering it, and that makes it harder for the neural net to overfit these concrete, specific examples. So by introducing all this noise, it pads out the examples and regularizes the neural net, and that's one of the reasons why — somewhat deceivingly, as a second-order effect — batch normalization is actually a regularizer, which has made it harder for people to remove. Because, basically, no one likes this property that the examples in the batch are coupled mathematically in the forward pass: it leads to all kinds of strange results (we'll go into some of that in a second) and it leads to a lot of bugs, and so on. So people have tried to deprecate the use of batch normalization and move to other normalization techniques that do not couple the examples of a batch — examples are layer normalization, instance normalization, group normalization, and so on — and we'll come across some of these later. But long story short: batch normalization was the first normalization layer to be introduced, it worked extremely well, it happens to have this regularizing effect, it stabilized training, and people have been trying to remove it and move to other normalization techniques, but it's been hard because it just works quite well. Some of the reason it works so well is, again, this regularizing effect, and also that it is quite
effective at controlling the activations and their distributions. So that's kind of the brief story of batch normalization, and I'd like to show you one of the other weird outcomes of this coupling. Here's one of the strange outcomes that I glossed over previously when I was evaluating the loss on the validation set. Basically, once we've trained a neural net we'd like to deploy it in some setting, and we'd like to be able to feed in a single individual example and get a prediction out. But how do we do that when our neural net now, in the forward pass, estimates the statistics of the mean and standard deviation of a batch? The neural net expects batches as input now — so how do we feed in a single example and get sensible results out? The proposal in the batch normalization paper is the following: we'd like to have a step after training that calculates and sets the batch norm mean and standard deviation a single time over the training set. So I wrote this code here in the interest of time; we'll call it calibrating the batch norm statistics. Under torch.no_grad — telling PyTorch that we will never call .backward() on any of this, so it can be more efficient — we take the training set, get the pre-activations for every single training example, and then, one single time, estimate the mean and standard deviation over the entire training set. That gives us bnmean and bnstd, which are now fixed numbers estimated over the entire training set, and here, instead of estimating the statistics dynamically, we use bnmean, and here we use bnstd. So at test time we fix these, clamp them, and use them during inference, and you see we get a basically identical result — but the benefit we've gained is that we can now also forward a single example, because the mean and standard deviation are now fixed tensors. Roughly, the calibration looks like the sketch below.
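A sketch of that calibration stage, assuming the earlier training-set tensors from this lecture's setup (the names Xtr, C, W1 and b1 are taken from the notebook's earlier code and are assumptions here):

```python
import torch

# Sketch of the one-time calibration of batch-norm statistics after training.
with torch.no_grad():
    # pass the whole training set through the first layer once
    emb = C[Xtr]                              # (N, block_size, n_embd)
    embcat = emb.view(emb.shape[0], -1)       # (N, block_size * n_embd)
    hpreact = embcat @ W1 + b1
    # measure mean and std over the entire training set, a single time
    bnmean = hpreact.mean(0, keepdim=True)
    bnstd = hpreact.std(0, keepdim=True)

# At inference, these fixed tensors are used instead of per-batch statistics:
# h = torch.tanh(bngain * (hpreact - bnmean) / bnstd + bnbias)
```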
That said, nobody actually wants to estimate this mean and standard deviation as a second stage after neural network training, because everyone is lazy. So the batch normalization paper actually introduced one more idea: we can estimate the mean and standard deviation in a running manner during training, so that we have a single stage of training and, on the side of that training, we estimate the running mean and standard deviation. Let's see what that would look like. Let me take the mean we estimate on the batch and call it bnmeani — the mean on the i-th iteration — and this one bnstdi; the mean goes here and the std goes here. So far I've done nothing: I've just moved things around and created these extra variables. But now we're going to keep a running mean of both of these values during training. Let me swing up here and create a bnmean_running, initialized at zeros, and a bnstd_running, initialized at ones — because of the way we initialized W1 and b1, hpreact will be roughly unit Gaussian, so the mean will be roughly zero and the standard deviation roughly one, and I initialize these accordingly. Then, here, I update them. In PyTorch, these running mean and standard deviation are not actually part of the gradient-based optimization: we never derive gradients with respect to them; they are updated on the side of training. So, with torch.no_grad — telling PyTorch that this update is not supposed to build out a graph, because there will be no .backward() — the running mean becomes 0.999 times its current value plus 0.001 times this batch's mean, and in the same way bnstd_running stays mostly what it used to be but receives a small update in the direction of the current standard deviation. As you see, this update happens outside of and on the side of the gradient-based optimization: it's not updated using gradient descent, just a janky, smoothed running-mean update. So while the network is training and these pre-activations are shifting around during backpropagation, we keep track of their typical mean and standard deviation and estimate them on the fly, once. Running this now, what we're hoping for, of course, is that bnmean_running and bnstd_running end up very similar to the values we calculated explicitly before, so that we don't need a second stage — we've combined the two stages and put them alongside each other. This is also how it's implemented in the batch normalization layer in PyTorch: during training the exact same thing happens, and later, at inference, it uses the estimated running mean and standard deviation of those hidden states. So let's wait for the optimization to converge, and hopefully the running mean and standard deviation are roughly equal to these two, and then we can simply use them and don't need the explicit calibration stage at the end. Okay, the optimization finished; I'll rerun the explicit estimation. The bnmean from the explicit estimation is here, and the bnmean from the running estimation during optimization is here: you can see they're very, very similar — not identical, but pretty close — and in the same way bnstd is this and bnstd_running is this, again fairly similar values. So here, instead of bnmean we can use bnmean_running, and instead of bnstd we can use bnstd_running, and hopefully the validation loss will not be impacted too much. Okay, it's basically identical, and this way we've eliminated the need for the explicit calibration stage, because we are doing it inline over here — roughly as in the sketch below.
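Here is a sketch of that running-statistics update done on the side of training; the shapes and the 0.001 momentum follow the lecture, and the random hpreact is just a stand-in for the real batch pre-activations:

```python
import torch

# Sketch of the running mean / std update maintained alongside training.
bnmean_running = torch.zeros((1, 200))
bnstd_running = torch.ones((1, 200))

hpreact = torch.randn(32, 200)          # stand-in for this batch's pre-activations
bnmeani = hpreact.mean(0, keepdim=True)
bnstdi = hpreact.std(0, keepdim=True)

with torch.no_grad():                   # not part of the gradient-based optimization
    bnmean_running = 0.999 * bnmean_running + 0.001 * bnmeani
    bnstd_running = 0.999 * bnstd_running + 0.001 * bnstdi
```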
Okay, so we're almost done with batch normalization; there are only two more notes I'd like to make. Number one, I've skipped a discussion of what this plus epsilon is doing. The epsilon is usually some small fixed number, for example 1e-5 by default, and what it's doing is preventing a division by zero in the case that the variance over your batch is exactly zero: in that case we'd normally have a division by zero, but because of the plus epsilon the denominator becomes a small number instead and things are better behaved. Feel free to also add a plus epsilon of a very small number here; it doesn't substantially change the result. I'm going to skip it in our case, just because this is unlikely to happen in our very simple example. The second thing I want you to notice is that we're being wasteful here, and it's quite subtle: right here, where we add the bias into hpreact, these biases are now actually useless. We add them to each pre-activation, but then we calculate the mean for every one of these neurons and subtract it — so whatever bias you add here is going to get subtracted right there. These biases are not doing anything: they are subtracted out and don't impact the rest of the calculation. If you look at b1.grad, it's actually going to be zero, because the bias is subtracted out and has no effect. So whenever you're using batch normalization layers, if you have any weight layers before — like a linear or a conv or something like that — you're better off not using a bias: you don't want to use a bias there, and you don't want to add it here, because that's spurious. Instead, we have this batch normalization bias, and that bias is now in charge of the biasing of this distribution, instead of the b1 we had originally. So the batch normalization layer has its own bias, and there's no need for a bias in the layer before it, because that bias is going to be subtracted out anyway. That's the other small detail to be careful with; it's not going to do anything catastrophic — this b1 will just be useless, it will never get any gradient, it will not learn, it will stay constant, and it's just wasteful — but it doesn't really impact anything otherwise. Okay, so I rearranged the code a little bit with comments, and I just wanted to give a very quick summary of the batch normalization layer. We are using batch normalization to control the statistics of the activations in the neural net. It is common to sprinkle batch normalization layers across the neural net, and usually we place them after layers that have multiplications — like, for example, a linear layer, or a convolutional layer, which we may cover in the future. The batch normalization layer internally has parameters for the gain and the bias, and these are trained using backpropagation. It also has two buffers: the running mean and the running standard deviation (or variance), and these are not trained using backpropagation — they are maintained by this janky running-mean style update. So those are the parameters and the buffers of the batch norm layer. What it's really doing is calculating the mean and standard deviation of the activations feeding into the batch norm layer over that batch, centering that batch to be unit Gaussian, and then offsetting and scaling it by the learned bias and gain. On top of that, it keeps track of the mean and standard deviation of the inputs, maintaining running estimates, and these are later used at inference so that we don't have to re-estimate the mean and standard deviation all the time — and, in addition, that allows us to forward individual examples at test time. So that's the batch normalization layer; it's a fairly complicated layer, but this is what it's doing internally. Now I wanted to show you a little bit of a real example, so you can search for ResNet,
which is a residual neural network — these are a type of convolutional neural network used for image classification. Of course, we haven't covered ResNets in detail, so I'm not going to explain all the pieces, but for now just note that the image feeds into the ResNet at the top here, and there are many, many layers with repeating structure, all the way to the predictions of what's inside that image. This repeating structure is made up of blocks, and these blocks are just sequentially stacked up in this deep neural network. The code for the block that's used and repeated sequentially, in series, is called this bottleneck block. There's a lot here — this is all PyTorch, and of course we haven't covered all of it — but I want to point out some small pieces. In the init is where we initialize the neural net: this block of code is basically the kind of stuff we're doing here, initializing all the layers. In the forward, we specify how the neural net acts once you actually have the input, so that code is along the lines of what we're doing here. These blocks are then replicated and stacked up serially, and that's what a residual network is. So notice what's happening here: conv1, conv2, conv3 are convolutional layers, and a convolutional layer is basically the same thing as a linear layer, except convolutions are used for images and so have spatial structure — the linear multiplication and bias offset are done on overlapping patches of the input instead of the full input, but otherwise it's still wx plus b. Then we have the norm layer, which by default here is initialized to be a BatchNorm2d, a two-dimensional batch normalization layer, and then we have a nonlinearity like ReLU. So where they use ReLU, we are using tanh in our case, but both are just nonlinearities and you can use them relatively interchangeably; for very deep networks, ReLU typically works empirically a bit better. So see the motif being repeated here: convolution, batch normalization, ReLU; convolution, batch normalization, ReLU; etc. And then here is a residual connection, which we haven't covered yet. Basically, that's the exact same pattern we have here: we have a weight layer, like a convolution or a linear layer, then batch normalization, and then tanh, which is the nonlinearity — a weight layer, a normalization layer, and a nonlinearity — and that's the motif you stack up when you create these deep neural networks, exactly as is done here. One more thing I'd like you to notice: when they initialize the conv layers, like conv1x1, the definition is right here, and it's initializing an nn.Conv2d, which is a convolutional layer in PyTorch, with a bunch of keyword arguments that I'm not going to explain yet — but you see how there's bias=False. The bias=False is there for exactly the same reason bias is not used in our case: you see how I erased the use of bias, and the use of bias is spurious, because after this weight layer there's the batch normalization, which subtracts that bias and has its own bias. There's no need to introduce these spurious parameters — it wouldn't hurt performance, it's just useless — and so, because they have this motif of conv followed by batch normalization, they don't need a bias here, because there's a bias inside the batch norm, roughly as in the sketch below.
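For illustration, a minimal sketch of that repeated motif — a weight layer with its bias disabled, followed by batch normalization and a nonlinearity (the sizes here are arbitrary, and I use Tanh where ResNets typically use ReLU):

```python
import torch
import torch.nn as nn

# Sketch of the motif: weight layer (bias disabled) -> batch norm -> nonlinearity.
# The batch norm layer supplies the bias, so the Linear layer doesn't need one.
block = nn.Sequential(
    nn.Linear(200, 200, bias=False),   # a bias here would be subtracted out by the batch norm
    nn.BatchNorm1d(200),
    nn.Tanh(),                         # ResNets typically use nn.ReLU() here instead
)
x = torch.randn(32, 200)
print(block(x).shape)                  # torch.Size([32, 200])
```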
By the way, this example is very easy to find: just search for "resnet pytorch" and it's this example here. This is kind of the stock implementation of a residual neural network in PyTorch, and you can find it there, though of course I haven't covered many of these parts yet. I would also like to briefly descend into the definitions of these PyTorch layers and the parameters that they take. Instead of a convolutional layer, we're going to look at a linear layer, because that's the one we're using here; I haven't covered convolutions yet, but as I mentioned, convolutions are basically linear layers applied on patches. A linear layer performs wx plus b, except here they call the weight transposed — the call is x times W transposed, plus b — very much like we did here. To initialize this layer you need to know the fan-in and the fan-out, so that they know how big the weight matrix should be, and you need to pass in whether or not you want a bias; if you set it to False, no bias will be inside this layer, and you may want to do that exactly as in our case, if your layer is followed by a normalization layer such as batch norm — this allows you to basically disable the bias. In terms of initialization, if we swing down here, this reports the variables used inside this linear layer: our linear layer here has two parameters, the weight and the bias, and in the same way they have a weight and a bias, and they describe how they initialize them by default. By default, PyTorch initializes your weights by taking the fan-in and computing one over the square root of the fan-in, but instead of a normal distribution they use a uniform distribution; it's very much the same thing, but they use a gain of one instead of 5/3 — there's no gain being calculated, the gain is just one — and otherwise it's exactly one over the square root of fan-in, exactly as we have here. So one over the square root of k is the scale of the weights, but when they draw the numbers they're not using a Gaussian by default: they draw uniformly from negative square root of k to square root of k. It's the exact same idea and the same motivation as what we've seen in this lecture: if you have a roughly Gaussian input, this ensures that out of this layer you will have a roughly Gaussian output, and you achieve that by scaling the weights by one over the square root of the fan-in. That's what this is doing — see the sketch below.
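A small sketch verifying the default nn.Linear initialization just described — weights drawn uniformly from (-sqrt(k), sqrt(k)) with k = 1 / fan_in (the layer sizes here are arbitrary):

```python
import torch
import torch.nn as nn

# Check that nn.Linear's default weights lie in (-sqrt(k), sqrt(k)), k = 1 / fan_in.
fan_in, fan_out = 30, 200
layer = nn.Linear(fan_in, fan_out, bias=False)
k = 1 / fan_in
print(layer.weight.min().item() >= -k**0.5,
      layer.weight.max().item() <= k**0.5)   # True True
print(layer.weight.std().item())             # roughly sqrt(k / 3), the std of that uniform
```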
Then the second thing is the batch normalization layer, so let's look at what that looks like in PyTorch. Here we have a one-dimensional batch normalization layer, nn.BatchNorm1d, exactly as we're using here, and there are a number of keyword arguments going into it as well. It needs to know the number of features — for us that is 200 — which is needed so that it can initialize the parameters (the gain and the bias) and the buffers (the running mean and standard deviation). It needs to know the value of epsilon, by default 1e-5; you don't typically change this much. Then it needs to know the momentum, which, as they explain, is used for the running mean and running standard deviation: by default the momentum is 0.1, whereas the momentum we are using in this example is 0.001, and you may want to change this sometimes. Roughly speaking, if you have a very large batch size, then when you estimate the mean and standard deviation for every batch you're going to get roughly the same result each time, so you can use a slightly higher momentum like 0.1. But for a batch size as small as 32, the mean and standard deviation can take on slightly different values every time, because there are only 32 examples being used to estimate them, so the values bounce around a lot; if your momentum is 0.1, that might not be enough for the running estimates to settle and converge to the actual mean and standard deviation over the entire training set. So if your batch size is very small, a momentum of 0.1 is potentially dangerous: it might make the running mean and standard deviation thrash around too much during training and not converge properly. affine=True determines whether this batch normalization layer has the learnable affine parameters, the gain and the bias; this is almost always kept True, and I'm not actually sure why you would want to change it to False. Then track_running_stats determines whether or not the batch normalization layer of PyTorch will be keeping these running statistics; one reason you may want to skip them is that you may want to, for example, estimate them at the end as a stage two, like we did, in which case you don't want the batch normalization layer doing all this extra compute that you're not going to use. Finally, we need to know which device we're going to run this batch normalization on — a CPU or a GPU — and what the data type should be: half precision, single precision, double precision, and so on. So that's the batch normalization layer; otherwise, they link to the paper, it's the same formula we've implemented, and everything is exactly as we've done here. Okay, so that's everything I wanted to cover for this lecture. Really, what I wanted to talk about is the importance of understanding the activations and the gradients and their statistics in neural networks, and this becomes increasingly important as you make your neural networks bigger, larger, and deeper. We looked at the distributions at the output layer, and we saw that if you have too-confident mispredictions, because the activations at the last layer are messed up, you can end up with these hockey-stick losses, and if you fix this you get a better loss at the end of training, because your training is not doing wasteful work. Then we also saw that we need to control the activations: we don't want them to squash to zero or explode to infinity, because then you run into a lot of trouble with all of these nonlinearities in these neural nets — basically you want everything to be fairly homogeneous throughout the net, roughly Gaussian activations throughout. Then we talked about how, if we want roughly Gaussian activations, we should scale these weight matrices and biases during initialization of the neural net, so that everything is as controlled as possible; that gave us a nice boost in improvement. And then I talked about how that strategy is not actually possible for much, much deeper neural nets, because when you have much deeper nets with lots of different types
of layers, it becomes really, really hard to precisely set the weights and the biases in such a way that the activations are roughly uniform throughout the neural net. So then I introduced the notion of the normalization layer. Now, there are many normalization layers that people use in practice — batch normalization, layer normalization, instance normalization, group normalization — and we haven't covered most of them, but I've introduced the first one, and also the one that I believe came out first, and that's batch normalization. We saw how batch normalization works: this is a layer you can sprinkle throughout your deep neural net, and the basic idea is that if you want roughly Gaussian activations, then take your activations, take the mean and the standard deviation, and center your data — and you can do that because the centering operation is differentiable. But on top of that we actually had to add a lot of bells and whistles, and that gave you a sense of the complexities of the batch normalization layer: now that we're centering the data, that's great, but suddenly we need the gain and the bias, and those are trainable; and because we are coupling all the training examples, suddenly the question is how you do the inference — to do inference we need to estimate the mean and standard deviation once over the entire training set and then use those at inference, but no one likes that stage two, so instead we fold everything into the batch normalization layer during training and try to estimate these in a running manner, so that everything is a bit simpler. That gives us the batch normalization layer. And, as I mentioned, no one likes this layer: it causes a huge amount of bugs, and intuitively it's because it is coupling examples in the forward pass of the neural net. I've shot myself in the foot with this layer over and over again in my life, and I don't want you to suffer the same, so basically try to avoid it as much as possible. Some of the alternatives to this layer are, for example, group normalization or layer normalization, and those have become more common in more recent deep learning, but we haven't covered them yet. Still, batch normalization was definitely very influential at the time it came out, roughly 2015, because it was kind of the first time you could reliably train much deeper neural nets, and fundamentally the reason is that this layer was very effective at controlling the statistics of the activations in the neural net. So that's the story so far, and that's all I wanted to cover. In future lectures, hopefully, we can start going into recurrent neural nets, and recurrent neural nets, as we'll see, are just very, very deep networks, because you unroll the loop when you actually optimize these neural nets, and that's where a lot of this analysis around activation statistics and all these normalization layers will become very, very important for good performance. So we'll see that next time. Bye. Okay, so I lied: I would like us to do one more summary here as a bonus. I think it's useful to have one more summary of everything I've presented in this lecture, but also I would like us to start by PyTorch-ifying our code a little bit so it looks much more like what you would encounter in PyTorch. You'll see that I will structure our code into these modules, like a Linear module and a BatchNorm1d module, and I'm putting the code inside these modules so that we can construct neural networks very much like we would
construct them in PyTorch, and I will go through this in detail. So we'll create our neural net, then we will do the optimization loop as we did before, and then the one more thing I want to do here is look at the activation statistics, both in the forward pass and in the backward pass; and then we have the evaluation and sampling, just like before. So let me rewind all the way up here and go a little bit slower. Here I'm creating a Linear layer. You'll notice that torch.nn has lots of different types of layers, and one of those layers is the linear layer, torch.nn.Linear: it takes a number of input features, output features, whether or not we should have a bias, and then the device that we want to place this layer on and the data type. I will omit these two, but otherwise we have the exact same thing: we have the fan-in, which is the number of inputs, the fan-out, the number of outputs, and whether or not we want to use a bias. Internally, inside this layer, there is a weight and a bias, if you'd like one. It is typical to initialize the weight using, say, random numbers drawn from a Gaussian, and then here's the Kaiming initialization that we discussed already in this lecture — that's a good default, and also the default that I believe PyTorch chooses — and by default the bias is usually initialized to zeros. Now, when you call this module it will basically calculate w times x plus b, if you have a b, and when you call .parameters() on this module it will return the tensors that are the parameters of this layer.
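Roughly, the Linear module being described looks like the sketch below (mirroring the lecture's description; details such as the random generator may differ slightly from the notebook):

```python
import torch

class Linear:
    # Sketch of the Linear module described above.
    def __init__(self, fan_in, fan_out, bias=True):
        self.weight = torch.randn((fan_in, fan_out)) / fan_in**0.5  # Kaiming-style scale
        self.bias = torch.zeros(fan_out) if bias else None

    def __call__(self, x):
        self.out = x @ self.weight
        if self.bias is not None:
            self.out += self.bias
        return self.out

    def parameters(self):
        return [self.weight] + ([] if self.bias is None else [self.bias])
```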
Now the parameters of BatchNorm1d are the gamma and the beta here, and then the running mean and running variance are called buffers in PyTorch nomenclature. These buffers are maintained using an exponential moving average, here explicitly, and they are not part of backpropagation and stochastic gradient descent, so they are not really parameters of this layer; that's why, when we call .parameters() here, we only return gamma and beta, and we do not return the mean and the variance. Those are maintained internally, every forward pass, using the exponential moving average. So that's the initialization. Now, in the forward pass, if we are training, then we use the mean and the variance estimated from the batch, following the batch norm paper: we calculate the mean and the variance. Up above I was estimating the standard deviation and keeping track of a running standard deviation instead of a running variance, but let's follow the paper exactly here: they calculate the variance, which is the standard deviation squared, and that's what's kept track of in the running variance instead of a running standard deviation, but those two would be very, very similar, I believe. If we are not training, then we use the running mean and variance. We normalize, and then here I am calculating the output of this layer, and I'm also assigning it to an attribute called .out. Now .out is something that I'm using in our modules here; this is not what you would find in PyTorch, so we are slightly deviating from it. I'm creating a .out because I would like to easily collect all these variables so that we can compute statistics of them and plot them, but PyTorch nn modules will not have a .out attribute. And finally, here we are updating the buffers using, as I mentioned, the exponential moving average given the provided momentum, and importantly you'll notice that I'm using the torch.no_grad context manager. I'm doing this because if we don't, then PyTorch will start building out an entire computational graph on top of these tensors, because it is expecting that we will eventually call .backward, but we are never going to be calling .backward on anything that includes the running mean and running variance. So we use this context manager so that we are not maintaining all that additional memory. This makes it more efficient, and it's just telling PyTorch that there will be no backward: we just have a bunch of tensors we want to update, that's it, and then we return. Okay, now scrolling down, we have the Tanh layer. This is very similar to torch.tanh, and it doesn't do too much; it just calculates tanh, as you might expect, and there are no parameters in this layer. But because these are layers, it now becomes very easy to stack them up into basically just a list, and we can do all the initializations that we're used to. So we have the initial embedding matrix, we have our layers, and we can call them sequentially, and then, again with torch.no_grad, there are some initializations here: we want to make the output softmax a bit less confident, like we saw, and in addition, because we are using a six-layer multilayer perceptron here, so you see how I'm stacking Linear, Tanh, Linear, Tanh, etc., I'm going to be using the gain here, and I'm going to play with it in a second, so you'll see how, when we change it, what happens to the statistics.
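Here is a sketch of the BatchNorm1d and Tanh modules as described above: gamma and beta are the trainable parameters, while the running mean and variance are buffers updated with an exponential moving average under torch.no_grad; the .out attribute is kept only for diagnostics and is not something PyTorch's modules have.

```python
import torch

class BatchNorm1d:
    # Sketch of the BatchNorm1d module described above.
    def __init__(self, dim, eps=1e-5, momentum=0.1):
        self.eps = eps
        self.momentum = momentum
        self.training = True
        # parameters (trained with backprop)
        self.gamma = torch.ones(dim)
        self.beta = torch.zeros(dim)
        # buffers (maintained with a running, exponential-moving-average update)
        self.running_mean = torch.zeros(dim)
        self.running_var = torch.ones(dim)

    def __call__(self, x):
        if self.training:
            xmean = x.mean(0, keepdim=True)   # batch mean
            xvar = x.var(0, keepdim=True)     # batch variance
        else:
            xmean = self.running_mean         # running estimates at inference
            xvar = self.running_var
        xhat = (x - xmean) / torch.sqrt(xvar + self.eps)  # normalize to unit variance
        self.out = self.gamma * xhat + self.beta          # scale and shift
        if self.training:
            with torch.no_grad():             # buffers never need a computational graph
                self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * xmean
                self.running_var = (1 - self.momentum) * self.running_var + self.momentum * xvar
        return self.out

    def parameters(self):
        return [self.gamma, self.beta]        # buffers are intentionally not returned


class Tanh:
    # Element-wise squashing nonlinearity; no parameters.
    def __call__(self, x):
        self.out = torch.tanh(x)
        return self.out

    def parameters(self):
        return []
```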
Finally, the parameters are basically the embedding matrix and all the parameters in all the layers, and notice that here I'm using a double list comprehension, if you want to call it that: for every layer in layers, and for every parameter in each of those layers, we are just stacking up all those parameters. In total we have 46,000 parameters, and I'm telling PyTorch that all of them require gradients. Then here we have everything we are mostly used to: we are sampling a batch, we are doing a forward pass; the forward pass now is just the application of all the layers in order, followed by the cross entropy. In the backward pass, you'll notice that for every single layer I now iterate over all the outputs and tell PyTorch to retain their gradients; then, as we are already used to, we set all the gradients to none, do the backward pass to fill in the gradients, do an update using stochastic gradient descent, and track some statistics; and then I am going to break after a single iteration. Now, here in this cell, I am visualizing the histograms of the forward pass activations, and I'm specifically doing it at the tanh layers. So I'm iterating over all the layers except for the very last one, which is basically just the softmax layer; if it is a tanh layer — and I'm using tanh layers just because they have a finite output, negative one to one, so it's a finite range and very easy to visualize and work with — I take the .out tensor from that layer into t, and then I'm calculating the mean, the standard deviation, and the percent saturation of t. The way I define the percent saturation is the fraction of entries where the absolute value of t is greater than 0.97, which means we are in the tails of the tanh; and remember that when we are in the tails of the tanh, that will actually stop gradients, so we don't want this to be too high. Then I'm calling torch.histogram and plotting the histogram. Basically what this is doing is that, for every layer, each with a different color, we are looking at how many values in these tensors take on any of the values along this axis. The first layer is fairly saturated here at 20 percent, so you can see that it's got tails, but then everything stabilizes, and if we had more layers it would stabilize at around a standard deviation of about 0.65 with a saturation of roughly 5 percent. The reason this stabilizes and gives us a nice distribution is that the gain is set to 5 over 3. You see that by default we initialize with 1 over the square root of fan-in, but then here, during initialization, I come in and iterate over all the layers, and if it's a linear layer, I boost the initialization by that gain.
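For reference, here is a sketch of that forward-pass activation diagnostic: it assumes `layers` is the list of Linear/BatchNorm1d/Tanh modules sketched earlier, with their .out attributes populated by a forward pass.

```python
import torch
import matplotlib.pyplot as plt

# Sketch of the forward-pass activation diagnostic (assumes `layers` and `Tanh` from above).
plt.figure(figsize=(20, 4))
legends = []
for i, layer in enumerate(layers[:-1]):       # exclude the output layer
    if isinstance(layer, Tanh):
        t = layer.out
        saturated = (t.abs() > 0.97).float().mean() * 100   # percent in the tanh tails
        print(f'layer {i} ({layer.__class__.__name__}): '
              f'mean {t.mean():+.2f}, std {t.std():.2f}, saturated: {saturated:.2f}%')
        hy, hx = torch.histogram(t, density=True)
        plt.plot(hx[:-1].detach(), hy.detach())
        legends.append(f'layer {i} ({layer.__class__.__name__})')
plt.legend(legends)
plt.title('activation distribution')
```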
So what happens if we just do not use a gain? If I re-run this, you will see that the standard deviation is shrinking and the saturation is coming to zero. Basically, the first layer is pretty decent, but then the further layers are just shrinking down to zero; it happens slowly, but it is shrinking to zero. The reason is that when you just have a sandwich of linear layers alone, then initializing the weights in this manner, as we saw previously, would have conserved a standard deviation of one; but because we have these interspersed tanh layers, which are squashing functions, they take your distribution and slightly squash it, and so some gain is necessary to keep expanding it to fight the squashing. It just turns out that 5 over 3 is a good value: if we use something too small, like 1, we saw that things head towards zero, but if it's something too high, let's do 2, or let me do something a bit more extreme so it's more visible, let's try 3, then we see that the saturation becomes too large. So 3 would create way too saturated activations, and 5 over 3 is a good setting for a sandwich of linear layers with tanh activations; it roughly stabilizes the standard deviation at a reasonable point. Now, honestly, I have no idea where 5 over 3 came from in PyTorch when we were looking at the Kaiming initialization. I see empirically that it stabilizes this sandwich of linear and tanh and that the saturation is in a good range, but I don't actually know if it came out of some math formula; I tried searching briefly for where it comes from but wasn't able to find anything. But certainly we see empirically that these are very nice ranges: the saturation is roughly 5 percent, which is a pretty good number, so this is a good setting of the gain in this context. Similarly, we can do the exact same thing with the gradients. Here is the very same loop, but if it's a tanh layer, instead of taking the layer's .out I'm taking the .grad, and then I'm also showing the mean and the standard deviation and plotting the histogram of these values. You'll see that the gradient distribution is fairly reasonable, and in particular what we're looking for is that all the different layers in this sandwich have roughly the same gradient; things are not shrinking or exploding. We can, for example, come here and take a look at what happens if this gain was way too small, say 0.5: then, first of all, the activations are shrinking to zero, but the gradients are also doing something weird; the gradients started out here and then they are expanding out. And similarly, if we have a too-high gain, say 3, then we see that the gradients also have some asymmetry going on, where as you go into deeper and deeper layers the statistics keep changing, and that's not what we want. In this case we saw that, without the use of batch norm, as we are going through right now, we have to very carefully set these gains to get nice activations in both the forward pass and the backward pass. Now, before we move on to batch normalization, I would also like to take a look at what happens when we have no tanh units here. So, erasing all the tanh nonlinearities but keeping the gain at 5 over 3, we now have just a giant linear sandwich. Let's see what happens to the activations. As we saw before, the correct gain here is one, that is the standard-deviation-preserving gain, so 1.667 is too high, and what's going to happen now is the following.
I have to change this to linear, because there are no more tanh layers, and let me change this to linear as well. What we're seeing is that the activations started out in blue and have, by layer four, become very diffuse; that's what's happening to the activations. And with the gradients, the gradient statistics of the top layer are the purple curve, and then they diminish as you go deeper into the layers. So basically you have an asymmetry in the neural net, and you might imagine that if you have very deep neural networks, say 50 layers or something like that, this is just not a good place to be. That's also why, before batch normalization, this was incredibly tricky to set: if the gain is too large, this happens, and if the gain is too little, the opposite basically happens; you have a shrinking and a diffusion, depending on which direction you look at it from. So certainly this is not what you want, and in this case the correct setting of the gain is exactly one, just like we're doing at initialization, and then we see that the statistics of the forward and the backward pass are well behaved. The reason I wanted to show you this is that getting neural nets to train, before these normalization layers and before the use of advanced optimizers like Adam, which we still have to cover, and residual connections and so on, basically looked like this: it was a total balancing act. You have to make sure that everything is precisely orchestrated, and you have to care about the activations and the gradients and their statistics, and then maybe you can train something; but it was basically impossible to train very deep networks, and this is fundamentally the reason why: you'd have to be very, very careful with your initialization. The other point here, and you might be asking yourself this — by the way, I'm not sure if I covered it — is why do we need these tanh layers at all? Why do we include them and then have to worry about the gain? The reason, of course, is that if you just have a stack of linear layers, then certainly you very easily get nice activations and so on, but this is just a massive linear sandwich, and it turns out that it collapses to a single linear layer in terms of its representation power. If you were to plot the output as a function of the input, you would just get a linear function: no matter how many linear layers you stack up, you still end up with a linear transformation; all the Wx plus b's collapse into one large Wx plus b, with a slightly different W and a slightly different b. But interestingly, even though the forward pass collapses to just a linear layer, because of backpropagation and the dynamics of the backward pass, the optimization is not identical: you actually end up with all kinds of interesting dynamics in the backward pass because of the way the chain rule is calculated. So optimizing a single linear layer by itself and optimizing a sandwich of ten linear layers — in both cases those are just a linear transformation in the forward pass, but the training dynamics would be different, and there are entire papers that analyze, in fact, infinitely deep linear networks and so on, so there's a lot you can play with there. But basically, the tanh nonlinearities allow us to turn this sandwich from just a linear function into a neural network that can, in principle, approximate any arbitrary function.
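A tiny numerical check of the collapse argument above: composing two linear maps is itself a single linear map, so a purely linear "deep" network has the representation power of one layer. The shapes below are arbitrary illustrations.

```python
import torch

# Two stacked linear layers collapse to one linear layer in the forward pass.
x = torch.randn(32, 10)
W1, b1 = torch.randn(10, 20), torch.randn(20)
W2, b2 = torch.randn(20, 5), torch.randn(5)

two_layers = (x @ W1 + b1) @ W2 + b2
one_layer = x @ (W1 @ W2) + (b1 @ W2 + b2)   # collapsed weight and bias
print(torch.allclose(two_layers, one_layer, atol=1e-4))  # True (up to float error)
```

The training dynamics of the two parameterizations still differ, as noted above; only the forward-pass function is the same.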
Okay, so now I've reset the code to use the linear-tanh sandwich like before, and I've reset everything so the gain is 5 over 3. We can run a single step of optimization and look at the activation statistics of the forward pass and the backward pass, but I've added one more plot here that I think is really important to look at when you're training your neural nets. Ultimately, what we're doing is updating the parameters of the neural net, so we care about the parameters, their values, and their gradients. Here I'm iterating over all the parameters available, and I'm restricting it to the two-dimensional parameters, which are basically the weights of the linear layers; I'm skipping the biases, and I'm skipping the gammas and the betas in the batch norm, just for simplicity, but you can take a look at those as well; what's happening with the weights is instructive by itself. So here we have all the different weights and their shapes — this is the embedding layer, the first linear layer, all the way to the very last linear layer — and then we have the mean and the standard deviation of all these parameters, and the histogram. And you can see that it actually doesn't look that amazing; there's some trouble in paradise: even though the gradients looked okay, there's something weird going on here, and I'll get to that in a second. The last thing here is the gradient-to-data ratio. Sometimes I like to visualize this as well, because it gives you a sense of the scale of the gradient compared to the scale of the actual values, and this is important because we're going to end up taking a step that is the learning rate times the gradient applied onto the data; so if the gradients have too large a magnitude compared to the numbers in the data, you'd be in trouble. In this case the gradient-to-data ratios are low numbers: the values inside grad are about 1000 times smaller than the values inside data for most of these weights. Notably, that is not true for the last layer, and so the last layer, the output layer, is a bit of a troublemaker in the way this is currently arranged, because you can see that the last layer here, in pink, takes on values that are much larger than some of the other values inside the neural net. The standard deviations of the gradients are roughly 1e-3 throughout, except for the last layer, which actually has roughly a 1e-2 standard deviation, so the gradients on the last layer are currently about 10 times greater than all the other weights inside the neural net. This is problematic, because in the simple stochastic gradient descent setup you would be training this last layer about 10 times faster than the other layers at initialization. Now, this actually kind of fixes itself a little bit if you train for a bit longer; for example, if I allow 1000 iterations and only then break, let me re-initialize and do 1000 steps, and after 1000 steps we can look at the forward pass. Okay, you see how the neurons are saturating a bit; we can also look at the backward pass, but otherwise things look good, they're about equal, and there's no shrinking to zero or exploding to infinity. And you can see that in the weights things are also stabilizing a little bit: the tails of the last pink layer are coming in during the optimization. But certainly this is a little bit troubling, especially if you are using a very simple update rule like stochastic gradient descent instead of a modern optimizer like Adam.
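Here is a sketch of that weights diagnostic; it assumes `parameters` is the flat list of model parameters from above, with .grad populated by a call to loss.backward().

```python
import torch
import matplotlib.pyplot as plt

# Sketch of the weights/gradients diagnostic (assumes `parameters` with populated .grad).
plt.figure(figsize=(20, 4))
legends = []
for i, p in enumerate(parameters):
    if p.ndim == 2:                                  # restrict to the weight matrices
        t = p.grad
        ratio = (t.std() / p.std()).item()           # gradient-to-data scale
        print(f'weight {tuple(p.shape)} | mean {t.mean():+.6f} | '
              f'std {t.std():.2e} | grad:data ratio {ratio:.2e}')
        hy, hx = torch.histogram(t, density=True)
        plt.plot(hx[:-1].detach(), hy.detach())
        legends.append(f'{i} {tuple(p.shape)}')
plt.legend(legends)
plt.title('weights gradient distribution')
```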
Now I'd like to show you one more plot that I usually look at when I train neural networks. The gradient-to-data ratio is not actually that informative, because what matters in the end is not the gradient-to-data ratio but the update-to-data ratio, because that is the amount by which we will actually change the data in these tensors. So, coming up here, I'd like to introduce a new update-to-data ratio: it's going to be a list, and we're going to build it out every single iteration, keeping track of this ratio at every step. With torch.no_grad, I take the update, which is the learning rate times the gradient — that is the update we're going to apply to every parameter — and then I take the standard deviation of that update and divide it by the standard deviation of the data of that parameter. So this is the ratio of how large the updates are compared to the values in these tensors. Then we take a log of it — actually, I like to take log10, just so it's a nicer visualization, so we are basically looking at the exponent of this division — and then .item() to pop out the float, and we keep track of this for all the parameters, adding it to this ud list. So now let me re-initialize and run a thousand iterations. We can look at the activations, the gradients, and the parameter gradients as we did before, but now I have one more plot to introduce. What's happening here is that we iterate over all the parameters, and I'm constraining it, again like I did before, to just the weights, so the number of dimensions of these tensors is two, and then I'm plotting all of these update ratios over time. When I plot this, you can see that they evolve over time: during initialization they take on certain values, and then the updates start stabilizing during training. The other thing I'm plotting here is an approximate value that is a rough guide for what this ratio should roughly be, and that is roughly 1e-3: it means that there are some values in this tensor, and the updates to them at every single iteration are no more than roughly one-thousandth of the actual magnitude of those tensors. If this was much larger — for example, if the log of this was, say, negative one — then you'd be updating those values quite a lot; they'd be undergoing a lot of change. The reason the final layer here is an outlier is that this layer was artificially scaled down to keep the softmax unconfident: you see how we multiplied the weight by 0.1 at initialization to make the last layer's predictions less confident. That artificially made the values inside that tensor way too low, and that's why we temporarily get a very high ratio, but you see that it stabilizes over time once that weight starts to learn. But basically, I like to look at the evolution of this update ratio for all my parameters, and I like to make sure that it's not too much above 1e-3, so roughly around negative three on this log plot.
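Here is a sketch of that update-to-data tracking; `parameters` and the learning rate `lr` are assumed from the training setup above, and `ud` grows by one entry per training step.

```python
import torch
import matplotlib.pyplot as plt

ud = []   # update-to-data ratios, one list of values per training step

# ... inside the training loop, right after the parameter update:
with torch.no_grad():
    ud.append([((lr * p.grad).std() / p.data.std()).log10().item()
               for p in parameters])

# after training: plot the ratios over time for the 2-D parameters (the weights)
plt.figure(figsize=(20, 4))
legends = []
for i, p in enumerate(parameters):
    if p.ndim == 2:
        plt.plot([ud[j][i] for j in range(len(ud))])
        legends.append(f'param {i}')
plt.plot([0, len(ud)], [-3, -3], 'k')   # rough guide: updates ~1/1000 of the data scale
plt.legend(legends)
```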
If it's below negative three, that usually means the parameters are not training fast enough. So if our learning rate was very low, let's do that experiment: let's initialize and use a learning rate of, say, 1e-3, so 0.001. If your learning rate is way too low, this plot will typically reveal it: you see how all of these updates are way too small; the size of the update is basically 10,000 times smaller in magnitude than the size of the numbers in that tensor in the first place. So this is a symptom of training way too slowly, and this is another way to sometimes set the learning rate and get a sense of what it should be. Ultimately this is something you would keep track of. If anything, the learning rate here is a little bit on the higher side, because you see that we're above the black line of negative three, somewhere around negative 2.5, which is okay, and everything is somewhat stabilizing, so this looks like a pretty decent setting of the learning rate and so on. But this is something to look at, and when things are miscalibrated you will see it very quickly. For example, everything looks pretty well behaved right now, but just as a comparison, what does it look like when things are not properly calibrated? Let me come up here and say, for example, that we forgot to apply the fan-in normalization, so the weights inside the linear layers are just sampled from a Gaussian at all scales. How do we notice that something's off? Well, the activation plot will tell you: whoa, your neurons are way too saturated; the gradients are going to be all messed up; the histograms of these weights are going to be all messed up as well, with a lot of asymmetry; and then if we look here, I suspect it's all going to be pretty messed up too: there's a lot of discrepancy in how fast these layers are learning, and some of them are learning way too fast — negative 1, negative 1.5, those are very large numbers in terms of this ratio; again, you should be somewhere around negative three and not much above that. So this is how miscalibrations of your neural net are going to manifest, and these kinds of plots are a good way of bringing those miscalibrations to your attention so that you can address them. Okay, so far we've seen that when we have this linear-tanh sandwich, we can precisely calibrate the gains and make the activations, the gradients, the parameters, and the updates all look pretty decent, but it definitely feels a little bit like balancing a pencil on your finger, because the gain has to be very precisely calibrated. So now let's introduce batch normalization layers into the mix, and let's see how that helps fix the problem. Here I'm going to take the BatchNorm1d class and start placing it inside, and as I mentioned before, the standard, typical place you would put it is between the linear layer and the nonlinearity, so right after the linear layer; but people have definitely played with that, and in fact you can get very similar results even if you place it after the nonlinearity. The other thing I wanted to mention is that it's totally fine to also place it at the end, after the last linear layer and before the loss function, so that is potentially fine as well; in that case the output dimension would be the vocab size (a sketch of this arrangement follows below). Now, because the last layer is a batch norm, we would not be changing the weight to make the softmax less confident; we would be changing the gamma, because gamma, remember, is the variable in the batch norm that multiplicatively interacts with the output of that normalization.
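Here is a sketch of what that layer stack could look like with BatchNorm1d placed after each linear layer (and after the last one), along with the corresponding initialization tweaks; n_embd, block_size, n_hidden, and vocab_size are the usual hyperparameters assumed from this lecture's setup, and the 5/3 gain is far less critical once batch norm is present.

```python
import torch

# Sketch: linear-batchnorm-tanh stack, with a batch norm after the output layer too.
layers = [
    Linear(n_embd * block_size, n_hidden), BatchNorm1d(n_hidden), Tanh(),
    Linear(n_hidden, n_hidden), BatchNorm1d(n_hidden), Tanh(),
    Linear(n_hidden, n_hidden), BatchNorm1d(n_hidden), Tanh(),
    Linear(n_hidden, vocab_size), BatchNorm1d(vocab_size),
]

with torch.no_grad():
    layers[-1].gamma *= 0.1          # last layer is a batch norm: scale gamma, not the weight
    for layer in layers[:-1]:
        if isinstance(layer, Linear):
            layer.weight *= 5/3      # tanh gain; with batch norm this matters much less
```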
So we can initialize this sandwich now, we can train, and we can see that the activations are of course going to look very good, and they are necessarily going to look good, because now, before every single tanh layer, there is a normalization in the batch norm. So, unsurprisingly, this all looks pretty good: we get a standard deviation of roughly 0.65, about 2 percent saturation, and roughly equal standard deviations throughout the layers, so everything looks very homogeneous. The gradients look good, the weights and their distributions look good, and the updates also look pretty reasonable: we're going a little bit above negative three, but not by too much, so all the parameters are training at roughly the same rate. But what we have gained is that we are now significantly less brittle with respect to the gain of these linear layers. For example, I can make the gain be, say, 0.2 here, which is much smaller than what we had with the tanh, but as we'll see, the activations will be essentially unaffected — see the small check after this paragraph — and that's because of this explicit normalization; the gradients are going to look okay, the weight gradients are going to look okay, but the updates will actually change. So even though the forward and backward pass, to a very large extent, look okay, because of the backward pass of the batch norm and the way the scale of the incoming activations interacts with the batch norm and its backward pass, this is actually changing the scale of the updates on these parameters; the gradients of these weights are affected. So we still don't get a completely free pass to pass in arbitrary weights here, but everything else is significantly more robust in terms of the forward pass, the backward pass, and the weight gradients; it's just that you may have to retune your learning rate if you are sufficiently changing the scale of the activations that are coming into the batch norms. Here, for example, we changed the gains of these linear layers to be greater, and we're seeing that the updates come out lower as a result. And then, finally, if we are using batch norm, we don't even necessarily — let me reset this to one so there's no gain — we don't necessarily even have to normalize by fan-in sometimes. So if I take out the fan-in, so these are now just random Gaussians, we'll see that, because of the batch norm, this will actually be relatively well behaved: the activations of course look good in the forward pass, the gradients look good, the backward pass and the weight updates look okay — a little bit of fat tails in some of the layers — and this looks okay as well. But, as you can see, we're significantly below negative three, so we'd have to bump up the learning rate so that we are training more properly, and in particular, looking at this, it roughly looks like we have to 10x the learning rate to get to about 1e-3; so we'd come here and change the learning rate to be 1.0, and if I re-initialize, then we'll see that everything still, of course, looks good, and now we are roughly here, and we'd expect this to be an okay training run. So, long story short, we are significantly more robust to the gain of these linear layers and to whether or not we have to apply the fan-in normalization, but we do have to worry a little bit about the update scales and make sure that the learning rate is properly calibrated; the activations of the forward and backward pass and the updates are all looking significantly more well behaved, except for the global scale that is potentially being adjusted here.
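A quick check of the claim above that, with batch norm, rescaling the incoming linear layer leaves the normalized activations essentially unchanged (up to the eps term in the denominator); this sketch reuses the BatchNorm1d class sketched earlier and uses illustrative shapes.

```python
import torch

# With batch norm, scaling the pre-activations does not change the normalized output
# (up to the eps term), even though the gradients/updates of the weights do change.
x = torch.randn(32, 100)
W = torch.randn(100, 200) / 100**0.5
bn = BatchNorm1d(200)                      # the class sketched earlier in this section

h1 = bn(x @ W)                             # original gain
h2 = bn(x @ (W * 0.2))                     # linear layer scaled down by 5x
print(torch.allclose(h1, h2, atol=1e-2))   # True: activations essentially unaffected
```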
Okay, so now let me summarize. There are three things I was hoping to achieve with this section. Number one, I wanted to introduce you to batch normalization, which is one of the first modern innovations we're looking into that helped stabilize very deep neural networks and their training, and I hope you understand how batch normalization works and how it would be used in a neural network. Number two, I was hoping to PyTorch-ify some of our code and wrap it up into these modules, like Linear, BatchNorm1d, Tanh, etc.; these are layers, or modules, and they can be stacked up into neural nets like Lego building blocks. These layers actually exist in PyTorch, and if you import torch.nn, then, the way I've constructed things, you can simply use PyTorch by prepending nn. to these layer names, and everything will just work, because the API I have developed here is identical to the API that PyTorch uses, and the implementation is also, as far as I'm aware, basically identical to the one in PyTorch (see the sketch below). And number three, I tried to introduce you to the diagnostic tools you would use to understand whether your neural network is in a good state dynamically. We are looking at the statistics and histograms of the forward pass activations and the backward pass gradients, and we're also looking at the weights that are going to be updated as part of stochastic gradient descent: their means, their standard deviations, and also the ratio of gradients to data, or even better, the ratio of updates to data. And we saw that typically we don't look at this as a single snapshot frozen in time at some particular iteration; typically people look at it over time, just like I've done here, and they look at these update-to-data ratios to make sure everything looks okay. In particular, I said that 1e-3, or basically negative three on the log scale, is a good rough heuristic for what you want this ratio to be: if it's way too high, then probably the learning rate or the updates are too big, and if it's way too small, then the learning rate is probably too small. So those are some of the things you may want to play with when you try to get your neural network to work very well. Now, there are a number of things I did not try to achieve. I did not try to beat our previous performance, for example by introducing the batch norm layer. Actually, I did try: I used the learning-rate-finding mechanism I described before and trained a batch-norm neural net, and I ended up with results that are very similar to what we obtained before. That's because our performance now is not bottlenecked by the optimization, which is what batch norm helps with; the performance at this stage is bottlenecked, I suspect, by the context length: currently we are taking three characters to predict the fourth one, and I think we need to go beyond that and look at more powerful architectures, like recurrent neural networks and transformers, in order to further push the log probabilities that we're achieving on this dataset. I also did not try to give a full explanation of all of these activations, the gradients, the backward pass, and the statistics of all these gradients.
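As mentioned in point number two above, the same stack can be expressed directly with PyTorch's built-in modules; here is a hedged sketch, with the hyperparameter names (n_embd, block_size, n_hidden, vocab_size) assumed from this lecture's setup.

```python
import torch.nn as nn

# The same linear-batchnorm-tanh stack built from PyTorch's own modules (sketch).
model = nn.Sequential(
    nn.Linear(n_embd * block_size, n_hidden), nn.BatchNorm1d(n_hidden), nn.Tanh(),
    nn.Linear(n_hidden, n_hidden), nn.BatchNorm1d(n_hidden), nn.Tanh(),
    nn.Linear(n_hidden, vocab_size),
)

# model.train() / model.eval() toggle the .training flag, e.g. switching BatchNorm1d
# between batch statistics (training) and its running estimates (inference).
model.eval()
```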
So you may have found some of the parts here unintuitive, and maybe you're slightly confused about, okay, if I change the gain here, how come we need a different learning rate? I didn't go into the full detail, because you'd have to actually look at the backward pass of all these different layers and get an intuitive understanding of how that works, and I did not go into that in this lecture. The purpose really was just to introduce you to the diagnostic tools and what they look like, but there's still a lot of work remaining on the intuitive level to understand the initialization, the backward pass, and how all of that interacts. But you shouldn't feel too bad, because, honestly, we are getting to the cutting edge of where the field is: we certainly haven't, I would say, solved initialization, and we haven't solved backpropagation; these are still very much active areas of research. People are still trying to figure out what the best way is to initialize these networks, what the best update rule to use is, and so on. So none of this is really solved, and we don't have all the answers to all of these questions, but at least we're making progress, and at least we have some tools to tell us whether or not things are on the right track for now. So I think we've made positive progress in this lecture, and I hope you enjoyed that, and I will see you next time.
[{"start": 0.0, "end": 5.6000000000000005, "text": " Hi everyone. Today we are continuing our implementation of Makemore. Now in the last lecture we implemented"}, {"start": 5.6000000000000005, "end": 10.72, "text": " the Multalia Perceptron along the lines of Benjueil to Hal 2003 for Character Level Language modeling."}, {"start": 10.72, "end": 15.52, "text": " So we followed this paper, took in a few characters in the past, and used an MLP to predict the next"}, {"start": 15.52, "end": 20.16, "text": " character in a sequence. So what we'd like to do now is we'd like to move on to more complex and"}, {"start": 20.16, "end": 24.96, "text": " larger neural networks, like recurrent neural networks, and their variations like the GRU, LSTM,"}, {"start": 24.96, "end": 30.32, "text": " and so on. Now before we do that though, we have to stick around the level of Multalia Perceptron"}, {"start": 30.32, "end": 34.56, "text": " for a bit longer. And I'd like to do this because I would like us to have a very good intuitive"}, {"start": 34.56, "end": 39.6, "text": " understanding of the activations in the neural net during training, and especially the gradients"}, {"start": 39.6, "end": 44.400000000000006, "text": " that are flowing backwards, and how they behave, and what they look like. This is going to be very"}, {"start": 44.400000000000006, "end": 48.8, "text": " important to understand the history of the development of these architectures, because we'll see that"}, {"start": 48.8, "end": 53.760000000000005, "text": " recurrent neural networks, while they are very expressive in that they are a universal approximator"}, {"start": 53.76, "end": 59.44, "text": " and can imprince the implement all the algorithms. We'll see that they are not very easily"}, {"start": 59.44, "end": 63.28, "text": " optimizable with the first order of gradient-based techniques that we have available to us, and that"}, {"start": 63.28, "end": 69.44, "text": " we use all the time. And the key to understanding why they are not optimizable easily is to understand"}, {"start": 69.44, "end": 73.44, "text": " the the activations and the gradients and how they behave during training. And we'll see that a lot"}, {"start": 73.44, "end": 79.52, "text": " of the variants since recurrent neural networks have tried to improve that situation. And so"}, {"start": 79.52, "end": 84.32, "text": " that's the path that we have to take, and let's go start it. So the starting code for this lecture"}, {"start": 84.32, "end": 89.19999999999999, "text": " is largely the code from before, but I've cleaned it up a little bit. So you'll see that we are"}, {"start": 89.19999999999999, "end": 95.52, "text": " importing all the torch and math plotlet utilities. We're reading in the words just like before."}, {"start": 95.52, "end": 100.64, "text": " These are eight example words. There's a total of 32,000 of them. Here's a vocabulary of all the"}, {"start": 100.64, "end": 107.28, "text": " lowercase letters and the special dot token. Here we are reading the dataset and processing it,"}, {"start": 107.28, "end": 115.36, "text": " and creating three splits, the train, dev, and the test split. Now in MLP, this is the identical"}, {"start": 115.36, "end": 120.32000000000001, "text": " same MLP, except you see that I removed a bunch of magic numbers that we had here. 
And instead we"}, {"start": 120.32000000000001, "end": 124.96000000000001, "text": " have the dimensionality of the embedding space of the characters and the number of hidden units"}, {"start": 124.96000000000001, "end": 129.68, "text": " in the hidden layer. And so I've pulled them outside here so that we don't have to go and change all"}, {"start": 129.68, "end": 134.88, "text": " these magic numbers all the time. With the same neural net with 11,000 parameters that we optimize"}, {"start": 134.88, "end": 141.68, "text": " now over 200,000 steps with batch size of 32. And you'll see that I refactored the code here a little"}, {"start": 141.68, "end": 147.2, "text": " bit, but there are no functional changes. I just created a few extra variables, a few more comments,"}, {"start": 147.2, "end": 152.8, "text": " and I removed all the magic numbers. And otherwise is the exact same thing. Then when we optimize,"}, {"start": 152.8, "end": 158.4, "text": " we saw that our loss looked something like this. We saw that the train and valve loss were about"}, {"start": 158.4, "end": 167.04000000000002, "text": " 2.16 and so on. Here I refactored the code a little bit for the evaluation of arbitrary splits."}, {"start": 167.04000000000002, "end": 171.68, "text": " So you pass in a string of which split you'd like to evaluate. And then here depending on"}, {"start": 171.68, "end": 176.64000000000001, "text": " train, valve, or test, I index in and I get the correct split. And then this is the forward pass"}, {"start": 176.64000000000001, "end": 181.92000000000002, "text": " of the network and evaluation of the loss and printing it. So just making it nicer,"}, {"start": 181.92, "end": 188.07999999999998, "text": " I want to think that you'll notice here is I'm using a decorator torched.no grad, which you can"}, {"start": 188.07999999999998, "end": 193.76, "text": " also look up and read documentation of. Basically what this decorator does on top of a function"}, {"start": 194.39999999999998, "end": 200.95999999999998, "text": " is that whatever happens in this function is synced by torched to never require an"}, {"start": 200.95999999999998, "end": 206.64, "text": " ingredients. So it will not do any of the bookkeeping that it does to keep track of all the gradients"}, {"start": 206.64, "end": 211.51999999999998, "text": " in anticipation of an eventual backward pass. It's almost as if all the tensors that get created"}, {"start": 211.52, "end": 216.64000000000001, "text": " here have a requires grad of false. And so it just makes everything much more efficient because"}, {"start": 216.64000000000001, "end": 220.96, "text": " you're telling torched that I will not call dot backward on any of this computation and you don't"}, {"start": 220.96, "end": 227.12, "text": " need to maintain the graph under the hood. So that's what this does. And you can also use a"}, {"start": 227.12, "end": 234.24, "text": " context manager with torched.no grad and you can let those out. Then here we have the sampling from"}, {"start": 234.24, "end": 240.64000000000001, "text": " a model just as before. Just a four pass of a neural nut getting the distribution sampling from it"}, {"start": 240.64, "end": 245.67999999999998, "text": " adjusting the context window and repeating until we get the special and token. And we see that we"}, {"start": 245.67999999999998, "end": 251.11999999999998, "text": " are starting to get much nicer looking words simple from the model. 
It's still not amazing and"}, {"start": 251.11999999999998, "end": 255.76, "text": " they're still not fully name like. But it's much better than what we had with the bi-gram model."}, {"start": 257.68, "end": 261.59999999999997, "text": " So that's our starting point. Now the first thing I would like to scrutinize is the initialization."}, {"start": 262.56, "end": 268.0, "text": " I can tell that our network is very improperly configured at initialization. And there's"}, {"start": 268.0, "end": 272.08, "text": " multiple things wrong with it. But let's just start with the first one. Look here on the zero"}, {"start": 272.08, "end": 278.4, "text": " iteration, the very first iteration, we are recording a loss of 27 and this rapidly comes down to"}, {"start": 278.4, "end": 282.96, "text": " roughly one or two or so. So I can tell that the initialization is all messed up because this is way"}, {"start": 282.96, "end": 288.0, "text": " too high. In training of neural nuts, it is almost always the case that you will have a rough idea"}, {"start": 288.0, "end": 293.2, "text": " for what loss to expect at initialization. And that just depends on the loss function and the"}, {"start": 293.2, "end": 299.03999999999996, "text": " problem setup. In this case, I do not expect 27. I expect a much lower number and we can calculate"}, {"start": 299.03999999999996, "end": 306.56, "text": " it together. Basically, at initialization, or we'd like is that there's 27 characters that could come"}, {"start": 306.56, "end": 311.91999999999996, "text": " next for anyone training example. At initialization, we have no reason to believe any characters to be"}, {"start": 311.91999999999996, "end": 316.71999999999997, "text": " much more likely than others. And so we'd expect that the probability distribution that comes out"}, {"start": 316.72, "end": 323.44000000000005, "text": " initially is a uniform distribution, assigning about equal probability to all the 27 characters."}, {"start": 323.44000000000005, "end": 330.0, "text": " So basically what we'd like is the probability for any character would be roughly one over 27."}, {"start": 331.92, "end": 336.56, "text": " That is the probability we should record. And then the loss is the negative log probability."}, {"start": 336.56, "end": 343.04, "text": " So let's wrap this in a tensor. And then then we can take the log of it. And then the negative"}, {"start": 343.04, "end": 350.32, "text": " log probability is the loss we would expect, which is 3.29, much, much lower than 27. And so what's"}, {"start": 350.32, "end": 355.04, "text": " happening right now is that at initialization, the neural net is creating probability distributions"}, {"start": 355.04, "end": 359.92, "text": " that are all messed up. Some characters are very confident and some characters are very not confident."}, {"start": 360.64000000000004, "end": 366.96000000000004, "text": " And then basically what's happening is that the network is very confidently wrong. And that makes"}, {"start": 366.96, "end": 373.35999999999996, "text": " that's what makes it record very high loss. So here's a smaller four dimensional example of the issue."}, {"start": 373.35999999999996, "end": 378.56, "text": " Let's say we only have four characters and then we have logits that come out of the neural net"}, {"start": 378.56, "end": 384.08, "text": " and they are very very close to zero. 
Then when we take the softmax of all zeroes, we get"}, {"start": 384.08, "end": 390.15999999999997, "text": " probabilities there are a diffuse distribution. So sums to one and is exactly uniform."}, {"start": 391.03999999999996, "end": 396.64, "text": " And then in this case, if the label is say two, it doesn't actually matter if this if the label is two"}, {"start": 396.64, "end": 401.28, "text": " or three or one or zero because it's a uniform distribution, we're recording the exact same loss"}, {"start": 401.28, "end": 405.59999999999997, "text": " in this case 1.38. So this is the loss we would expect for a four dimensional example."}, {"start": 406.15999999999997, "end": 411.28, "text": " And I can see of course that as we start to manipulate these logits, we're going to be changing"}, {"start": 411.28, "end": 417.12, "text": " the loss here. So it could be that we lock out and by chance this could be a very high number"}, {"start": 417.12, "end": 421.44, "text": " like five or something like that. Then in that case, we'll record a very low loss because we're"}, {"start": 421.44, "end": 427.6, "text": " signing the correct probability at initialization by chance to the correct label. Much more likely,"}, {"start": 427.6, "end": 435.12, "text": " it is that some other dimension will have a high logit and then what will happen is we start to"}, {"start": 435.12, "end": 439.84, "text": " record much higher loss. And what can come what can happen is basically the logits come out"}, {"start": 440.4, "end": 446.08, "text": " like something like this, you know, and they take on extreme values and we record really high loss."}, {"start": 446.08, "end": 455.2, "text": " For example, if we have torched out random of four, so these are uniform, sorry, these are normally"}, {"start": 455.2, "end": 464.88, "text": " distributed numbers, forum. And here we can also print the logits, probabilities that come out of"}, {"start": 464.88, "end": 471.52, "text": " it and the loss. And so because these logits are near zero, for the most part, the loss that comes"}, {"start": 471.52, "end": 481.68, "text": " out is okay. But suppose this is like time 10 now. You see how because these are more extreme values,"}, {"start": 481.68, "end": 487.12, "text": " it's very unlikely that you're going to be guessing the correct bucket and then you're confidently"}, {"start": 487.12, "end": 493.28, "text": " wrong and recording very high loss. If your logits are coming up even more extreme, you might get"}, {"start": 493.28, "end": 502.0, "text": " extremely same losses like infinity even at initialization. So basically this is not good and we want"}, {"start": 502.0, "end": 509.44, "text": " the logits to be roughly zero when the work is initialized. In fact, the logits can don't have to"}, {"start": 509.44, "end": 514.56, "text": " be just zero. They just have to be equal. So for example, if all the logits are one, then because of"}, {"start": 514.56, "end": 519.6, "text": " the normalization inside the softmax, this will actually come out okay. But by symmetry, we don't"}, {"start": 519.6, "end": 524.32, "text": " want it to be any arbitrary positive or negative number. We just want it to be all zeros and record"}, {"start": 524.32, "end": 528.32, "text": " the loss that we expect at initialization. So let's not completely see where things go wrong"}, {"start": 528.32, "end": 534.32, "text": " in our example. Here we have the initialization. Let me initialize the neural ad. 
And here let me"}, {"start": 534.32, "end": 540.4, "text": " break after the very first iteration. So we only see the initial loss, which is 27. So that's"}, {"start": 540.4, "end": 545.44, "text": " way too high. And intuitively now we can expect the variables involved. And we see that the logits"}, {"start": 545.44, "end": 551.9200000000001, "text": " here, if we just print some of these, if we just print the first row, we see that the logits take"}, {"start": 551.9200000000001, "end": 558.24, "text": " on quite extreme values. And that's what's creating the fake confidence and incorrect answers. And"}, {"start": 558.24, "end": 565.84, "text": " makes the loss get very, very high. So these logits should be much, much closer to zero. So now let's"}, {"start": 565.84, "end": 571.6800000000001, "text": " think through how we can achieve logits coming out of this neural net to be more closer to zero."}, {"start": 571.68, "end": 576.9599999999999, "text": " You see here that the logits are calculated as they hit in states multiplied by W2 plus B2."}, {"start": 577.5999999999999, "end": 583.28, "text": " So first of all, currently we're initializing B2 as random values of the right size."}, {"start": 584.2399999999999, "end": 589.28, "text": " But because we want roughly zero, we don't actually want to be adding a bias of random numbers."}, {"start": 589.28, "end": 595.68, "text": " So in fact, I'm going to add a times a zero here to make sure that B2 is just basically zero"}, {"start": 595.68, "end": 602.2399999999999, "text": " at initialization. And second, this is H multiplied by W2. So if we want logits to be very, very small,"}, {"start": 603.04, "end": 609.28, "text": " then we would be multiplying W2 and making that small. So for example, if we scale down W2 by"}, {"start": 609.28, "end": 615.4399999999999, "text": " 0.1 all the elements, then if I do again just a very first iteration, you see that we are getting"}, {"start": 615.4399999999999, "end": 622.8, "text": " much closer to what we expect. So roughly what we want is about 3.29, this is 4.2. I can make this"}, {"start": 622.8, "end": 630.88, "text": " maybe even smaller, 3.32. Okay, so we're getting closer and closer. Now, you're probably wondering,"}, {"start": 630.88, "end": 637.3599999999999, "text": " can we just set this to zero? Then we get, of course, exactly what we're looking for at initialization."}, {"start": 638.16, "end": 644.16, "text": " And the reason I don't usually do this is because I'm very nervous and I'll show you in a second why"}, {"start": 644.16, "end": 649.92, "text": " you don't want to be setting W's or weights of a neural net exactly to zero. You usually want"}, {"start": 649.92, "end": 655.5999999999999, "text": " to be small numbers instead of exactly zero. For this output layer in this specific case,"}, {"start": 655.5999999999999, "end": 660.0, "text": " I think it would be fine, but I'll show you in a second where things go wrong very quickly if you"}, {"start": 660.0, "end": 666.56, "text": " do that. So let's just go with 0.01. In that case, our loss is close enough, but has some entropy."}, {"start": 666.56, "end": 672.0799999999999, "text": " It's not exactly zero. It's got some low entropy and that's used for symmetry braking as we'll"}, {"start": 672.0799999999999, "end": 677.52, "text": " see in a second. 
Logits are now coming out much closer to zero and everything is well and good."}, {"start": 677.52, "end": 686.48, "text": " So if I just erase these and I now take away the brake statement, we can run the optimization"}, {"start": 686.48, "end": 693.84, "text": " with this new initialization. And let's just see what losses we record. Okay, so I'll let it run"}, {"start": 693.84, "end": 700.24, "text": " and you see that we started off good and then we came down a bit. The plot of the loss now doesn't"}, {"start": 700.24, "end": 706.64, "text": " have this hockey shape appearance because basically what's happening in the hockey stick, the very first"}, {"start": 706.64, "end": 711.6, "text": " few iterations of the loss, what's happening during the optimization is the optimization is just"}, {"start": 711.6, "end": 717.4399999999999, "text": " squashing down the logits and then it's rearranging the logits. So basically we took away this easy part"}, {"start": 717.4399999999999, "end": 723.68, "text": " of the loss function where just the weights were just being shrunk down. And so therefore we don't"}, {"start": 723.68, "end": 727.68, "text": " get these easy gains in the beginning and we're just getting some of the hard gains of training"}, {"start": 727.68, "end": 733.4399999999999, "text": " the actual neural nut. And so there's no hockey stick appearance. So good things are happening in that"}, {"start": 733.44, "end": 739.84, "text": " both number one loss and initialization is what we expect. And the loss doesn't look like a hockey"}, {"start": 739.84, "end": 746.24, "text": " stick. And this is true for any neural nut you might train and something to look at for. And second,"}, {"start": 746.24, "end": 751.2800000000001, "text": " the loss that came out is actually quite a bit improved. Unfortunately, I erased what we had here"}, {"start": 751.2800000000001, "end": 760.32, "text": " before. I believe this was 2.12 and this was 2.16. So we get a slightly improved result. And the"}, {"start": 760.32, "end": 766.48, "text": " reason for that is because we're spending more cycles, more time optimizing the neural nut actually"}, {"start": 766.48, "end": 772.0, "text": " instead of just spending the first several thousand iterations probably just squashing down the weights"}, {"start": 773.12, "end": 778.1600000000001, "text": " because they are so way too high in the beginning and the initialization. So something to look out for"}, {"start": 778.1600000000001, "end": 782.96, "text": " and that's number one. Now let's look at the second problem. Let me re-initialize our neural"}, {"start": 782.96, "end": 789.12, "text": " nut and let me reintroduce the break statement. So we have a reasonable initial loss. So even though"}, {"start": 789.12, "end": 792.64, "text": " everything is looking good on the level of the loss and we get something that we expect,"}, {"start": 792.64, "end": 796.4, "text": " there's still a deeper problem lurking inside this neural nut and its initialization."}, {"start": 797.44, "end": 804.32, "text": " So the logits are now okay. The problem now is with the values of H, the activations of the hidden"}, {"start": 804.32, "end": 810.4, "text": " states. 
Now if we just visualize this vector, sorry, this tensor H, it's kind of hard to see but"}, {"start": 810.4, "end": 815.04, "text": " the problem here roughly speaking is you see how many of the elements are one or negative one."}, {"start": 815.04, "end": 821.76, "text": " Now recall that tortsh.10h, the 10h function is a squashing function. It takes arbitrary numbers"}, {"start": 821.76, "end": 826.64, "text": " and it squashes them into a range of negative one and one and it does so smoothly. So let's look at"}, {"start": 826.64, "end": 831.52, "text": " the histogram of H to get a better idea of the distribution of the values inside this tensor."}, {"start": 832.3199999999999, "end": 840.8, "text": " We can do this first. Well we can see that H is 32 examples and 200 activations in each example."}, {"start": 840.8, "end": 847.52, "text": " We can view it as negative one to stretch it out into one large vector and we can then call"}, {"start": 847.52, "end": 855.04, "text": " two lists to convert this into one large Python list of floats and then we can pass this into"}, {"start": 855.04, "end": 862.3199999999999, "text": " plt.hist for histogram and we say we want 50 bins and a semicolon to suppress a bunch of output we"}, {"start": 862.3199999999999, "end": 868.8, "text": " don't want. So we see this histogram and we see that most of the values by far take on value of"}, {"start": 868.8, "end": 876.88, "text": " negative one and one. So this 10h is very very active and we can also look at basically why that is"}, {"start": 877.8399999999999, "end": 884.24, "text": " we can look at the preactivations that feed into the 10h and we can see that the distribution of"}, {"start": 884.24, "end": 890.4, "text": " the preactivations are is very very broad. These take numbers between negative 15 and 15 and"}, {"start": 890.4, "end": 894.8, "text": " that's why in a tortsh.10h everything is being squashed and capped to be in the range of negative"}, {"start": 894.8, "end": 900.4799999999999, "text": " one and one and lots of numbers here take on very extreme values. Now if you are new to it in your"}, {"start": 900.4799999999999, "end": 905.3599999999999, "text": " networks you might not actually see this as an issue but if you're well burst in the dark arts"}, {"start": 905.3599999999999, "end": 909.4399999999999, "text": " of back propagation and then have an intuitive sense of how these radians float through in your"}, {"start": 909.4399999999999, "end": 914.3199999999999, "text": " own net you are looking at your distribution of 10h activations here and you are sweating."}, {"start": 914.88, "end": 919.04, "text": " So let me show you why. We have to keep in mind that during back propagation just like we saw in"}, {"start": 919.04, "end": 924.7199999999999, "text": " micrograd we are doing backward pass starting at the loss and flowing through the network backwards."}, {"start": 924.7199999999999, "end": 930.4, "text": " In particular we're going to back propagate through this tortsh.10h and this layer here is made"}, {"start": 930.4, "end": 936.9599999999999, "text": " up of 200 neurons for each one of these examples and it implements an element twice 10h. So let's"}, {"start": 936.9599999999999, "end": 942.4, "text": " look at what happens in 10h in the backward pass. We can actually go back to our previous micrograd"}, {"start": 942.4, "end": 949.1999999999999, "text": " code in the very first lecture and see how we implement the 10h. 
We saw that the input here was x"}, {"start": 949.1999999999999, "end": 954.88, "text": " and then we calculate t which is the 10h of x. So that's t and t is between negative one and one."}, {"start": 954.88, "end": 959.1999999999999, "text": " It's the output of the 10h and then in the backward pass how do we back propagate through a 10h?"}, {"start": 960.0799999999999, "end": 966.16, "text": " We take out that grad and then we multiply it this is the chain rule with the local gradient"}, {"start": 966.16, "end": 972.0799999999999, "text": " which took the form of 1 minus t squared. So what happens if the outputs of your 10h are very close"}, {"start": 972.08, "end": 978.4000000000001, "text": " to negative one or one. If you plug in t equals one here you're going to get a zero multiplying"}, {"start": 978.4000000000001, "end": 984.32, "text": " out that grad. No matter what out that grad is we are killing the gradient and we're stopping"}, {"start": 984.32, "end": 989.44, "text": " effectively the back propagation through this 10h unit. Similarly when t is negative one this will"}, {"start": 989.44, "end": 995.6800000000001, "text": " again become zero and out that grad just stops and intuitively this makes sense because this is a"}, {"start": 995.6800000000001, "end": 1001.9200000000001, "text": " 10h neuron and what's happening is if its output is very close to one then we are in the"}, {"start": 1001.92, "end": 1011.28, "text": " tail of this 10h. So changing basically the input is not going to impact the output of the 10h"}, {"start": 1011.28, "end": 1017.8399999999999, "text": " too much because it's in the flat region of the 10h and so therefore there's no impact on the loss."}, {"start": 1018.4799999999999, "end": 1025.44, "text": " And so indeed the weights and the biases along with this 10h neuron do not impact the loss"}, {"start": 1025.44, "end": 1029.6, "text": " because the output of this 10h unit is in the flat region of the 10h and there's no influence."}, {"start": 1029.6, "end": 1034.48, "text": " We can we can be changing them whatever we want however we want and the loss is not impacted."}, {"start": 1034.48, "end": 1039.6799999999998, "text": " That's another way to justify that indeed the gradient would be basically zero advantages."}, {"start": 1040.8799999999999, "end": 1049.84, "text": " Indeed when t equals zero we get one times out that grad so when the 10h takes on exactly"}, {"start": 1049.84, "end": 1056.8, "text": " value of zero then out that grad is just passed through. So basically what this is doing right is"}, {"start": 1056.8, "end": 1064.72, "text": " if t is equal to zero then this the 10h unit is sort of inactive and gradient just passes through"}, {"start": 1064.72, "end": 1070.8799999999999, "text": " but the more you are in the flat tails the more degrading is squashed. 
So in fact you'll see that the gradient flowing through a tanh can only ever decrease, and the amount it decreases depends, through the square here, on how far out you are in the flat tails of the tanh. The concern is that if all of these outputs h are in the flat regions near negative one and one, then the gradients flowing through the network will just get destroyed at this layer. Now, there is some redeeming quality here, and we can get a sense of the problem as follows. I brought in some code: we take h, take the absolute value, and see how often it is in a flat region, say greater than 0.99. What you get is a Boolean tensor, drawn white where this is true and black where it is false. So we have 32 examples by 200 hidden neurons, and we see that a lot of this is white, which tells us that many of these tanh neurons are saturated, out in the flat tail, and in all those cases the backward gradient gets destroyed. Now, we would be in a lot of trouble if, for any one of these 200 neurons, the entire column were white, because in that case we would have what's called a dead neuron: the initialization of the weights and biases could be such that no single example ever activates this tanh in its active region. If all the examples land in the tail, this neuron will never learn; it is a dead neuron. Scrutinizing this and looking for completely white columns, we see that this is not the case: I don't see a single neuron that is all white. So for every one of these tanh neurons, some examples do land in the active part of the tanh, some gradients will flow, and the neuron will learn, change, and move; it will do something.
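A small sketch of the saturation check just described, again with a stand-in `h`; the `h.abs() > 0.99` threshold and the white/black reading are the ones from the lecture:

```python
import torch
import matplotlib.pyplot as plt

# stand-in batch of tanh activations, (32 examples, 200 neurons)
h = torch.tanh(torch.randn(32, 200) * 5)

# white = saturated (|h| > 0.99), black = in the active region;
# a fully white *column* would indicate a dead neuron
plt.figure(figsize=(20, 10))
plt.imshow(h.abs() > 0.99, cmap='gray', interpolation='nearest')
plt.show()
```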
But you can sometimes get yourself into cases where you have dead neurons. The way this manifests for a tanh neuron is that, no matter what inputs you plug in from your dataset, it always fires completely one or completely negative one, and then it will just not learn, because all the gradients get zeroed out. This is true not just for tanh but for a lot of other nonlinearities that people use in neural networks. Sigmoid will have the exact same issue, because it is also a squashing nonlinearity. The same also applies to ReLU: ReLU has a completely flat region below zero, so a ReLU neuron is a pass-through when the pre-activation is positive, and when the pre-activation is negative it just shuts off. Since that region is completely flat, during backpropagation the gradient there is set exactly to zero, instead of just a very small number as with tanh. So you can get, for example, a dead ReLU neuron: a neuron with a ReLU nonlinearity that never activates; for any example you plug in from the dataset it never turns on, it is always in the flat region, and so its weights and bias will never learn, they will never get a gradient, because the neuron never activated. This can sometimes happen at initialization, because the weights and biases just happen, by chance, to make some neurons dead forever, but it can also happen during optimization. If your learning rate is too high, for example, some neurons get too large a gradient update and get knocked off the data manifold, and from then on no example ever activates them, so they remain dead forever. It's kind of like permanent brain damage in the mind of the network.
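As a hypothetical illustration of that kind of dead-neuron check (none of these tensors or shapes come from the notebook; the bias is made artificially negative on purpose so that some ReLU units really are dead):

```python
import torch

# fake "training set" and a ReLU layer with a badly chosen initialization
X = torch.randn(10000, 30)          # stand-in training inputs
W = torch.randn(30, 200) * 0.1
b = torch.randn(200) - 3.0          # strongly negative bias: some units never fire

act = torch.relu(X @ W + b)         # (10000, 200) activations over the whole set
ever_active = (act > 0).any(dim=0)  # per-neuron: did it ever turn on?
print("dead ReLU neurons:", (~ever_active).sum().item())
```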
And so sometimes what can happen is that if your learning rate is very high, for example, and you have a neural net with ReLU neurons, you train the net and you get some loss, but then if you forward the entire training set through it you can find neurons that never activate: they are dead neurons in your network and they will never turn on. Usually what happens is that during training these ReLU neurons are changing and moving around, and then, because of a high gradient somewhere, by chance they get knocked off, nothing ever activates them again, and from then on they are just dead. So that's a kind of permanent brain damage that can happen to some of these neurons. Other nonlinearities, like Leaky ReLU, do not suffer from this issue as much, because they don't have flat tails, so you almost always get gradients. ELU is also fairly frequently used; it might also suffer from this issue because it has flat parts. So that's just something to be aware of and something to be concerned about. In our case we have way too many activations h that take on extreme values, but because there's no column of white I think we will be okay, and indeed the network optimizes and gives us a pretty decent loss, it's just not optimal, and this is not something you want, especially at initialization. Basically what's happening is that the pre-activation hpreact flowing into the tanh is too extreme, too large; it creates a distribution that is too saturated on both sides of the tanh, and that's not something you want, because it means there is less training for these neurons, since they update less frequently. So how do we fix this? Well, hpreact is embcat, which comes from C, multiplied by W1 plus b1. embcat is roughly Gaussian, but after multiplying by W1 and adding b1, hpreact ends up too far off from zero, and that's what's causing the issue. So we want this pre-activation to be closer to zero, very similar to what we did with the logits. For a start, it's okay to set the biases to a very small number.
We can multiply the biases by a small number, something like 0.01, just to get a little bit of entropy. I sometimes like to do that so there's a little bit of variation and diversity in the initial values of these tanh neurons, and I find in practice that it can help optimization a little bit. And then the weights we can also just squash, so let's multiply everything by 0.1 and rerun the first batch. Now let's look: because we multiplied W1 by 0.1, we have a much better histogram, since the pre-activations are now between roughly negative 1.5 and 1.5, and we expect much, much less white. In fact, there's no white at all: no neuron saturated above 0.99 in either direction, so this is actually a pretty decent place to be. Maybe we can go up a little bit; maybe we can go to 0.2. Okay, so maybe something like this is a nice distribution, and maybe this is what our initialization should be. So let me erase these and, starting with this initialization, run the full optimization without the break and see what we get. Okay, the optimization finished; I reran the loss, and this is the result, and just as a reminder I put down all the losses we saw previously in this lecture. We see that we actually do get an improvement: we started off with a validation loss of 2.17, by fixing the softmax being confidently wrong we came down to 2.13, and by fixing the tanh layer being way too saturated we came down to 2.10. The reason this is happening, of course, is that our initialization is better, so we're spending more time doing productive training, instead of unproductive training where the gradients are near zero and we have to learn very simple things first, like undoing the overconfidence of the softmax at the beginning, spending cycles just squashing down the weight matrix. So this illustrates initialization and its impact on performance, just by being aware of the internals of these neural nets, their activations and their gradients. Now, we're working with a very small network here; this is just a one-hidden-layer multilayer perceptron.
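A minimal sketch of the scaled-down initialization discussed above, using the layer sizes from this lecture; the exact small factor on the bias (0.01 here) is my reading of the garbled transcript, and the seed is just an arbitrary choice for reproducibility:

```python
import torch

g = torch.Generator().manual_seed(2147483647)   # arbitrary fixed seed
n_embd, block_size, n_hidden = 10, 3, 200       # sizes used in this lecture

# squash the weights and give the biases only a little bit of entropy
W1 = torch.randn((n_embd * block_size, n_hidden), generator=g) * 0.2
b1 = torch.randn(n_hidden, generator=g) * 0.01
```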
So because the network is so shallow, the optimization problem is actually quite easy and very forgiving. Even though our initialization was terrible, the network still learned eventually; it just got a somewhat worse result. This is not the case in general, though. Once we start working with much deeper networks that have, say, 50 layers, things can get much more complicated, these problems stack up, and you can actually get into a place where the network is basically not training at all if your initialization is bad enough; the deeper and more complex your network is, the less forgiving it is to these errors. So it's definitely something to be aware of, something to scrutinize, something to plot, and something to be careful with. Okay, so it's great that this worked for us, but what we have now are all these magic numbers, like 0.2: where did I come up with that, and how am I supposed to set these if I have a large neural net with lots and lots of layers? Obviously no one does this by hand; there are some relatively principled ways of setting these scales that I'd like to introduce to you now. Let me paste some code here that I prepared just to motivate the discussion. What I'm doing here is the following: we have some random input x drawn from a Gaussian, 1000 examples that are 10-dimensional, and then we have a weight layer that is also initialized from a Gaussian, just like we did before; the neurons in this hidden layer look at 10 inputs, and there are 200 of them; and then we have, just like before, the multiplication x times w to get the pre-activations of these neurons. The analysis asks: suppose the inputs are Gaussian and the weights are Gaussian; if I do x times w (forgetting for now the bias and the nonlinearity), then what are the mean and the standard deviation of the output? At the start, the input is just a standard Gaussian: mean zero and standard deviation one, where the standard deviation is just a measure of the spread of the Gaussian.
But then once we multiply and look at the histogram of y, we see that the mean of course stays about zero, because this is a symmetric operation, but the standard deviation has expanded to three. The input standard deviation was one, but it has now grown to three: what you're seeing in the histogram is that this Gaussian is expanding as it passes through the layer. We don't want that; we want most of the neural net to have relatively similar activations, so roughly unit Gaussian throughout the network. So the question is: how do we scale these w's to preserve this distribution, so that it remains a unit Gaussian? Intuitively, if I multiply the elements of w by a large number, say 5, then this Gaussian grows in standard deviation (now we're at about 15), so the numbers in the output y take on more and more extreme values; but if we scale it down, say by 0.2, then conversely the Gaussian shrinks, and the standard deviation becomes about 0.6. So what do we multiply by to exactly preserve a standard deviation of one? It turns out that the correct answer, when you work through the variance of this matrix multiplication, is to divide by the square root of the fan-in. The fan-in is the number of input components, here 10, so we divide by the square root of 10 (one way to take a square root is to raise to the power 0.5). When we divide by the square root of 10, the output Gaussian has a standard deviation of exactly 1. Now, unsurprisingly, a number of papers have looked into how to best initialize neural networks. In the case of multilayer perceptrons we can have fairly deep networks with nonlinearities in between, and we want the activations to be well behaved: not expanding to infinity or shrinking all the way to zero. The question is how to initialize the weights so that the activations take on reasonable values throughout the network.
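The little experiment described above, in code; the shapes (1000 x 10 inputs, 10 x 200 weights) are the ones mentioned in the lecture:

```python
import torch

# preserving the spread of the activations by scaling the weights with 1/sqrt(fan_in)
x = torch.randn(1000, 10)            # 1000 examples, 10-dimensional inputs
w = torch.randn(10, 200) / 10**0.5   # divide by sqrt(fan_in) = sqrt(10)
y = x @ w

print(x.mean(), x.std())             # ~0, ~1
print(y.mean(), y.std())             # ~0, ~1 as well, instead of ~3 without the scaling
```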
Now, one paper that has studied this in quite a bit of detail and that is often referenced is the paper by Kaiming He et al. called "Delving Deep into Rectifiers". In their case they study convolutional neural networks, and in particular the ReLU and PReLU nonlinearities instead of tanh, but the analysis is very similar. The ReLU nonlinearity they care about is a squashing function where all the negative numbers are simply clamped to zero: positive numbers pass through, but everything negative is set to zero. Because you are basically throwing away half of the distribution, they find in their analysis of the forward activations that you have to compensate for that with a gain, and so they initialize their weights with a zero-mean Gaussian whose standard deviation is the square root of 2 over the fan-in. What we have here is an initialization with the square root of 1 over the fan-in (their n_l is the fan-in), because of the division we do; they then add this factor of 2 because of the ReLU, which discards half the distribution and clamps it at zero, and that's where the extra factor comes from. In addition, this paper studies not just the behavior of the activations in the forward pass but also the backward pass: we have to make sure the gradients are well behaved too, because ultimately they are what update the parameters. What they find, through a lot of analysis that you can read through (though it's not exactly approachable), is that if you properly initialize the forward pass, the backward pass is also approximately initialized, up to a constant factor that has to do with the number of hidden neurons in an early versus a late layer, and they find empirically that this is not a choice that matters too much. This Kaiming initialization is also implemented in PyTorch: if you go to the torch.nn.init documentation you'll find kaiming_normal_,
2046.4, "text": " and in my opinion this is probably the most common way of initializing neural networks now"}, {"start": 2047.3600000000001, "end": 2052.8, "text": " and it takes a few keyword arguments here so another one it wants to know the mode"}, {"start": 2052.8, "end": 2057.12, "text": " would you like to normalize the activations or would you like to normalize the gradients to"}, {"start": 2057.12, "end": 2063.44, "text": " to be always caution with zero mean and unit or one standard deviation and because they find"}, {"start": 2063.44, "end": 2067.12, "text": " a paper that this doesn't matter too much most of the people just leave it as the default which is"}, {"start": 2067.12, "end": 2073.52, "text": " pen in and then second pass in the nullinearity that you are using because depending on the nullinearity"}, {"start": 2073.52, "end": 2079.2799999999997, "text": " we need to calculate a slightly different gain and so if your nullinearity is just a linear"}, {"start": 2079.2799999999997, "end": 2084.72, "text": " so there's no nullinearity then the gain here will be one and we have these same kind of formula"}, {"start": 2084.72, "end": 2088.72, "text": " that we've caught here but if the nullinearity is something else we're going to get a slightly"}, {"start": 2088.72, "end": 2093.68, "text": " different gain and so if we come up here to the top we see that for example in the case of"}, {"start": 2093.68, "end": 2098.8799999999997, "text": " relu this gain is a square root of two and the reason is the square root because in this paper"}, {"start": 2103.04, "end": 2108.08, "text": " you see how the two is inside of the square root so the gain is a square root of two"}, {"start": 2108.08, "end": 2115.2799999999997, "text": " in a case of linear or identity we just get a gain of one in a case of 10h which is what we're"}, {"start": 2115.2799999999997, "end": 2121.44, "text": " using here the advised gain is a 5 over 3 and intuitively why do we need a gain on top of the"}, {"start": 2121.44, "end": 2128.08, "text": " initialization is because 10h just like relu is a contractive transformation so what that means"}, {"start": 2128.08, "end": 2132.56, "text": " is you're taking the output distribution from this matrix multiplication and then you are squashing"}, {"start": 2132.56, "end": 2137.2, "text": " it in some way now relu squashes it by taking everything below zero and clamping it to zero"}, {"start": 2137.2, "end": 2142.0, "text": " 10h also squashes it because it's a contractive operation it will take the tails and it will"}, {"start": 2142.8799999999997, "end": 2148.96, "text": " squeeze them in and so in order to fight the squeezing in we need to boost the weights a little bit"}, {"start": 2148.96, "end": 2154.72, "text": " so that we re-normalize everything back to unit standard deviation so that's why there's a"}, {"start": 2154.72, "end": 2158.72, "text": " little bit of a gain that comes out now i'm skipping through this section a little bit quickly"}, {"start": 2158.72, "end": 2164.16, "text": " and i'm doing that actually intentionally and the reason for that is because about seven years ago"}, {"start": 2164.16, "end": 2168.7999999999997, "text": " when this paper was written you had to actually be extremely careful with the activations and"}, {"start": 2168.7999999999997, "end": 2173.12, "text": " ingredients and their ranges and their histograms and you had to be very careful with the"}, {"start": 2173.12, "end": 2177.6, "text": " precise setting of gains and the 
Now, I'm skipping through this section a little quickly, and I'm doing that intentionally, because about seven years ago, when this paper was written, you had to be extremely careful with the activations and the gradients, their ranges and their histograms, and with the precise setting of gains and the scrutinizing of the nonlinearities used, and so on. Everything was very finicky and very fragile and had to be very properly arranged for the neural network to train, especially if your network was very deep. But there are a number of modern innovations that have made everything significantly more stable and better behaved, and it has become less important to initialize these networks exactly right. Some of those modern innovations are, for example, residual connections, which we will cover in the future; the use of a number of normalization layers, like batch normalization, layer normalization, group normalization, which we're going to go into as well; and, number three, much better optimizers, not just stochastic gradient descent, the simple optimizer we're basically using here, but slightly more complex optimizers like RMSProp and especially Adam. All of these modern innovations make it less important to precisely calibrate the initialization of the neural net. All that being said, what should we do in practice? In practice, when I initialize these neural nets, I basically just normalize my weights by the square root of the fan-in; so roughly what we did here is what I do. Now, if we want to be exactly accurate and go by the kaiming_normal init, this is how we would implement it: we want to set the standard deviation of the weights to be the gain over the square root of the fan-in. To see how, note that when we do torch.randn, say creating a thousand numbers, the standard deviation of those numbers is about one: that's the amount of spread of a Gaussian with zero mean and unit standard deviation. Now, if you take those numbers and multiply them by, say, 0.2, that scales down the Gaussian and makes its standard deviation 0.2; so the number you multiply by ends up being the standard deviation of the resulting Gaussian. So here, when we sampled our W1, this was a standard deviation 0.2 Gaussian, but we actually want to set the standard deviation to the gain over the square root of the fan mode, which is the fan-in. In other words, we want to multiply by the gain, which for tanh is 5/3,
and then divide by the square root of the fan-in. In the example above the fan-in was 10, but I just noticed that the fan-in for W1 is actually n_embd times block_size, which as you'll recall is 30, because each character is 10-dimensional and we have three of them concatenated. So the fan-in here is 30, and I should have used 30. We want the gain over the square root of 30: that's the standard deviation we want, and this number turns out to be about 0.3, whereas previously, just by fiddling with it and looking at the distribution, we came up with 0.2. So instead, what we want to do is make the standard deviation 5/3, which is our gain, divided by the square root of 30 (the brackets here aren't strictly necessary, but I'll put them in for clarity). This is basically the Kaiming init in our case, for a tanh nonlinearity, and this is how we would initialize the neural net: we multiply by roughly 0.3 instead of 0.2. So we can initialize this way, train the neural net, and see what we get. Okay, I trained the neural net and we end up in roughly the same spot: looking at the validation loss, we now get 2.10, and previously we also had 2.10. There's a little bit of a difference, but I suspect that's just the randomness of the process. The big deal, of course, is that we get to the same spot, but we did not have to introduce any magic numbers obtained by looking at histograms and guessing and checking: we now have something that is semi-principled, that will scale to much bigger networks, and that we can use as a guide.
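Putting the pieces together for our W1, a sketch of the semi-principled initialization described here (same layer sizes as before; the seed is arbitrary):

```python
import torch

g = torch.Generator().manual_seed(2147483647)
n_embd, block_size, n_hidden = 10, 3, 200
fan_in = n_embd * block_size   # 30: three concatenated 10-dimensional characters

# std = gain / sqrt(fan_in), with gain 5/3 for tanh; works out to ~0.3 instead of 0.2
W1 = torch.randn((fan_in, n_hidden), generator=g) * (5/3) / fan_in**0.5
print(W1.std())  # ~0.30
```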
2474.08, "text": " and what's implemented um basically we have these uh hidden states h pre-act right and we were"}, {"start": 2474.08, "end": 2480.64, "text": " talking about how we don't want these uh these um pre-activation states to be way too small because"}, {"start": 2480.64, "end": 2485.6, "text": " that then the 10h is not um doing anything but we don't want them to be too large because then the"}, {"start": 2485.6, "end": 2492.3199999999997, "text": " 10h is saturated in fact we want them to be roughly roughly Gaussian so zero mean and a units or"}, {"start": 2492.3199999999997, "end": 2499.44, "text": " one standard deviation at least at initialization so the insight from the best normalization paper is"}, {"start": 2499.44, "end": 2504.7200000000003, "text": " okay you have these hidden states and you'd like them to be roughly Gaussian then why not take the"}, {"start": 2504.7200000000003, "end": 2510.56, "text": " hidden states and uh just normalize them to be Gaussian and it sounds kind of crazy but you can"}, {"start": 2510.56, "end": 2517.44, "text": " just do that because uh standardizing hidden states so that their unit Gaussian is a perfectly"}, {"start": 2517.44, "end": 2522.16, "text": " differentiable operation as we'll soon see and so that was kind of like the big insight in this paper"}, {"start": 2522.16, "end": 2526.48, "text": " and when I first read it my mind was blown because you can just normalize these hidden states and"}, {"start": 2526.48, "end": 2532.2400000000002, "text": " if you'd like unit Gaussian states in your network uh at least initialization you can just normalize"}, {"start": 2532.2400000000002, "end": 2538.2400000000002, "text": " them to be in Gaussian so uh let's see how that works so we're going to scroll to our pre-activations"}, {"start": 2538.2400000000002, "end": 2543.6, "text": " here just before they enter into the 10h now the idea again is remember we're trying to make these"}, {"start": 2543.6, "end": 2549.2, "text": " roughly Gaussian and that's because if these are way too small numbers then the 10h here is kind"}, {"start": 2549.2, "end": 2555.6, "text": " of an active but if these are very large numbers then the 10h is way to saturated and graded in the"}, {"start": 2555.6, "end": 2561.8399999999997, "text": " flow so we'd like this to be roughly Gaussian so the insight in best romanization again is that we"}, {"start": 2561.8399999999997, "end": 2568.3199999999997, "text": " can just standardize these activations so they are exactly Gaussian so here h preact"}, {"start": 2569.8399999999997, "end": 2576.72, "text": " has a shape of 32 by 200 32 examples by 200 neurons in the hidden layer so basically what we can"}, {"start": 2576.72, "end": 2583.04, "text": " do is we can take h preact and we can just calculate the mean um and the mean we want to calculate"}, {"start": 2583.04, "end": 2590.64, "text": " across the zero dimension and we want to also keep them as true so that we can easily broadcast this"}, {"start": 2591.6, "end": 2599.2799999999997, "text": " so the shape of this is 1 by 200 in other words we are doing the mean over all the uh elements in the"}, {"start": 2599.2799999999997, "end": 2604.96, "text": " batch and similarly we can calculate the standard deviation of these activations"}, {"start": 2604.96, "end": 2614.0, "text": " and then we'll also be 1 by 200 now in this paper they have the uh sort of prescription here"}, {"start": 2614.64, "end": 2622.7200000000003, "text": " and see here we are calculating the 
Now, in the paper they give the prescription for this: here we are calculating the mean, which is just the average value of each neuron's activation over the batch, and the standard deviation is basically the measure of spread we've been using, that is, the distance of every one of these values from the mean, squared and then averaged; that's the variance, and if you want the standard deviation you take the square root of the variance. So those are the two quantities we're calculating, and now we're going to normalize, or standardize, these values by subtracting the mean and dividing by the standard deviation: we take hpreact, subtract the mean, and divide by the standard deviation. That is exactly what these two lines compute (careful: in the paper this one is the mean and this one is the variance; sigma usually denotes the standard deviation, so sigma squared, the variance, is the square of the standard deviation). This is how you standardize these values, and the result is that every single neuron's firing rate will now be exactly unit Gaussian over these 32 examples, at least for this batch; that's why it's called batch normalization, because we are normalizing over the batch. In principle we could now train this: calculating the mean and standard deviation are just mathematical formulas, everything is perfectly differentiable, and we can just train it. The problem is that you won't actually achieve a very good result this way, and the reason is that we want these pre-activations to be roughly Gaussian at initialization, but we don't want them to be forced to be Gaussian always. We'd like to allow the neural net to move this distribution around, to make it more diffuse or more sharp, to make some tanh neurons more trigger-happy or less trigger-happy; we'd like the distribution to be able to move, and we'd like backpropagation to tell us how it should move. And so, in addition to this idea of standardizing the activations at some point in the network, we also have to introduce an additional component, which the paper describes as the scale and shift.
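A minimal sketch of the standardization step, with a stand-in `hpreact` of the same 32-by-200 shape; the `mean(0, keepdim=True)` and `std(0, keepdim=True)` calls are the ones described above:

```python
import torch

# stand-in batch of pre-activations feeding into the tanh
hpreact = torch.randn(32, 200) * 3.0

bnmeani = hpreact.mean(0, keepdim=True)       # (1, 200): per-neuron mean over the batch
bnstdi = hpreact.std(0, keepdim=True)         # (1, 200): per-neuron std over the batch
hpreact_norm = (hpreact - bnmeani) / bnstdi   # each neuron is now ~unit Gaussian over the batch
```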
So we take these normalized inputs and we additionally scale them by some gain and offset them by some bias, to get our final output from this layer. What that amounts to is the following: we're going to introduce a batch normalization gain, bngain, initialized to ones, with the shape 1 by n_hidden, and we'll also have a bnbias, which will be torch.zeros, also of shape 1 by n_hidden. Then bngain multiplies the standardized pre-activations and bnbias offsets them. Because the gain is initialized to one and the bias to zero, at initialization each neuron's firing values in this batch will be exactly unit Gaussian and will have nice numbers, no matter what distribution of hpreact comes in: coming out, it will be unit Gaussian for each neuron, and that's roughly what we want, at least at initialization. Then, during optimization, we'll be able to backpropagate into bngain and bnbias and change them, so the network is given the full ability to do whatever it wants with this internally. We just have to make sure that we include both of them in the parameters of the neural net, because they will be trained with backpropagation.
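A sketch of the scale-and-shift on top of the standardization, using the bngain / bnbias names from the lecture (the stand-in `hpreact` is again made up):

```python
import torch

n_hidden = 200
# scale-and-shift parameters, initialized to the identity so that at
# initialization the pre-activations come out exactly standardized
bngain = torch.ones((1, n_hidden), requires_grad=True)
bnbias = torch.zeros((1, n_hidden), requires_grad=True)

hpreact = torch.randn(32, n_hidden) * 3.0   # stand-in batch of pre-activations
hpreact = bngain * (hpreact - hpreact.mean(0, keepdim=True)) / hpreact.std(0, keepdim=True) + bnbias
# bngain and bnbias are ordinary parameters: they receive gradients and get
# appended to the notebook's parameter list alongside C, W1, b1, W2, b2
```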
So let's initialize this, and then we should be able to train. We're also going to copy this batch normalization line, here on a single line of code, and swing down to the evaluation code to do the exact same thing at test time: just like at train time, we normalize and then scale, and that gives us our train and validation loss. We'll actually change this a little bit in a second, but for now I'm going to keep it this way and just wait for it to converge. Okay, so I let the neural net converge, and when we scroll down we see that our validation loss here is roughly 2.10, which I wrote down here, and this is actually comparable to some of the results we've achieved previously. Now, I'm not actually expecting an improvement in this case, and that's because we're dealing with a very simple neural net that has just a single hidden layer. In this very simple case of just one hidden layer, we were able to actually calculate what the scale of W should be to make the pre-activations already roughly Gaussian, so batch normalization isn't doing much here. But you might imagine that once you have a much deeper neural net with lots of different types of operations, and also, for example, residual connections, which we'll cover, and so on, it becomes very, very difficult to tune the scales of your weight matrices such that all the activations throughout the neural net are roughly Gaussian; that quickly becomes intractable. Compared to that, it's much, much easier to sprinkle batch normalization layers throughout the neural net. In particular, it's common to look at every linear layer like this one (a multiplication by a weight matrix plus a bias), or for example convolutions, which we'll cover later and which also perform basically a multiplication by a weight matrix, just in a more spatially structured format, and it's customary to take such a linear or convolutional layer and append a batch normalization layer right after it, to control the scale of the activations at every point in the neural net. So we'd be adding these batch norm layers throughout the neural net, and this controls the scale of the activations throughout. It doesn't require us to do perfect mathematics and care about the activation distributions for all the different types of Lego building blocks you might want to introduce into your neural net, and it significantly stabilizes training; that's why these layers are quite popular. Now, the stability offered by batch normalization actually comes at a terrible cost, and that cost is that, if you think about what's happening here, something terribly strange and unnatural is going on. It used to be that we had a single example feeding into the neural net, and we calculated its activations and its logits; this is a deterministic process, so you arrive at some logits for that example. Then, for efficiency of training, we started to use batches of examples, but those batches were processed independently; it was just an efficiency thing. But now, suddenly, with batch normalization,
"text": " because of the normalization through the batch we are coupling these examples mathematically"}, {"start": 3056.56, "end": 3062.16, "text": " and in the forward pass and backward pass of the neural nut so now the hidden state activations H"}, {"start": 3062.16, "end": 3067.6, "text": " pre-oct and your logits for any one input example are not just a function of that example and its"}, {"start": 3067.6, "end": 3073.04, "text": " input but they're also a function of all the other examples that happen to come for a ride in that"}, {"start": 3073.04, "end": 3078.16, "text": " batch and these examples are sampled randomly and so what's happening is for example when you look"}, {"start": 3078.16, "end": 3083.84, "text": " at H pre-oct that's going to feed into H the hidden state activations for for example for for any one"}, {"start": 3083.84, "end": 3088.96, "text": " of these input examples is going to actually change slightly depending on what other examples"}, {"start": 3088.96, "end": 3095.04, "text": " there are in the batch and depending on what other examples happen to come for a ride H is going"}, {"start": 3095.04, "end": 3099.6000000000004, "text": " to change suddenly and it's going to look jitter if you imagine sampling different examples"}, {"start": 3099.6000000000004, "end": 3103.1200000000003, "text": " because the statistics of the mean and the standard deviation are going to be impacted"}, {"start": 3104.0, "end": 3109.52, "text": " and so you'll get a jitter for H and you'll get a jitter for logits and you think that this would"}, {"start": 3109.52, "end": 3116.24, "text": " be a bug or something undesirable but in a very strange way this actually turns out to be good"}, {"start": 3116.24, "end": 3121.7599999999998, "text": " in neural network training and as a side effect and the reason for that is that you can think of this"}, {"start": 3121.7599999999998, "end": 3126.48, "text": " as kind of like a regularizer because what's happening is you have your input and you get your H"}, {"start": 3126.48, "end": 3131.04, "text": " and then depending on the other examples this is jittering a bit and so what that does is that"}, {"start": 3131.04, "end": 3135.52, "text": " it's effectively padding out any one of these input examples and it's introducing a little bit of"}, {"start": 3135.52, "end": 3141.36, "text": " entropy and because of the padding out it's actually kind of like a form of a data augmentation"}, {"start": 3141.36, "end": 3146.0, "text": " which will cover in the future and it's not kind of like augmenting the input a little bit and"}, {"start": 3146.0, "end": 3151.04, "text": " jittering it and that makes it harder for the neural nets to overfit these concrete specific"}, {"start": 3151.04, "end": 3156.8, "text": " examples so by introducing all this noise it actually like pads out the examples and it regularizes"}, {"start": 3156.8, "end": 3162.48, "text": " the neural net and that's one of the reasons why deceiving me as a second order effect this is"}, {"start": 3162.48, "end": 3167.76, "text": " actually a regularizer and that has made it harder for us to remove the use of bachelor normalization"}, {"start": 3168.72, "end": 3173.36, "text": " because basically no one likes this property that the examples in the batch are coupled"}, {"start": 3173.36, "end": 3179.2, "text": " mathematically and in the forward pass and at least all kinds of like strange results will go into"}, {"start": 3179.2, "end": 3185.84, "text": " some of that in a second as well 
and it leads to a lot of bugs and so on. So no one likes this property, and people have tried to deprecate the use of batch normalization and move to other normalization techniques that do not couple the examples of a batch: examples are layer normalization, instance normalization, group normalization, and so on, and we'll come across some of these later. But basically, long story short, batch normalization was the first normalization layer to be introduced, it worked extremely well, it happens to have this regularizing effect, and it stabilized training, and people have been trying to remove it and move to some of the other normalization techniques, but it's been hard, because it just works quite well. Some of the reason it works so well is again this regularizing effect, and the fact that it is quite effective at controlling the activations and their distributions. So that's the brief story of batch normalization, and I'd like to show you one of the other weird outcomes of this coupling. Here's one of the strange outcomes that I glossed over previously, when I was evaluating the loss on the validation set. Basically, once we've trained a neural net, we'd like to deploy it in some kind of setting and be able to feed in a single individual example and get a prediction out. But how do we do that when our neural net, in its forward pass, estimates the statistics of the mean and standard deviation of a batch? The neural net now expects batches as an input; so how do we feed in a single example and get sensible results out? The proposal in the batch normalization paper is the following: what we would like is a step, after training, that calculates and sets the batch norm mean and standard deviation a single time over the training set. So I wrote this code here in the interest of time; we're going to call it calibrating the batch norm statistics. Basically, we use torch.no_grad, telling PyTorch that we will not call .backward() on any of this, which makes it a bit more efficient; we take the training set, get the pre-activations for every single training example, and then, one single time, estimate the mean and standard deviation over the entire training set.
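A sketch of what this calibration step could look like, assuming the notebook's names (Xtr, C, W1, b1) but with stand-in tensors so the snippet runs on its own:

```python
import torch

# stand-ins for the training set, the embedding table and the first layer
Xtr = torch.randint(0, 27, (1000, 3))        # training inputs (character indices)
C = torch.randn(27, 10)                      # embedding table
W1 = torch.randn(30, 200) * 0.3
b1 = torch.randn(200) * 0.01

with torch.no_grad():                        # no graph needed, this is just measurement
    emb = C[Xtr]                             # (N, 3, 10)
    embcat = emb.view(emb.shape[0], -1)      # (N, 30)
    hpreact = embcat @ W1 + b1               # pre-activations for every training example
    bnmean = hpreact.mean(0, keepdim=True)   # measured once, over the whole training set
    bnstd = hpreact.std(0, keepdim=True)
# at inference, these fixed bnmean / bnstd replace the per-batch statistics
```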
3318.16, "end": 3322.2400000000002, "text": " and then we're going to get b and mean and b and standard deviation and now these are fixed"}, {"start": 3322.24, "end": 3328.4799999999996, "text": " numbers estimated over the entire training set and here instead of estimating it dynamically"}, {"start": 3329.8399999999997, "end": 3336.64, "text": " we are going to instead here use b and mean and here we're just going to use b and standard deviation"}, {"start": 3338.08, "end": 3343.52, "text": " and so at test time we are going to fix these clamp them and use them during inference and now"}, {"start": 3345.52, "end": 3351.2799999999997, "text": " you see that we get basically identical result but the benefit that we've gained is that we can"}, {"start": 3351.28, "end": 3355.52, "text": " now also forward a single example because the mean and standard deviation are now fixed"}, {"start": 3355.52, "end": 3361.1200000000003, "text": " all sorts of tensors that said nobody actually wants to estimate this mean and standard deviation"}, {"start": 3361.1200000000003, "end": 3366.8, "text": " as a second stage after neural network training because everyone is lazy and so this"}, {"start": 3366.8, "end": 3371.28, "text": " specialization paper actually introduced one more idea which is that we can we can estimate"}, {"start": 3371.28, "end": 3376.0, "text": " the mean and standard deviation in a running manner running manner during training of the"}, {"start": 3376.0, "end": 3381.04, "text": " neural network and then we can simply just have a single stage of training and on the side of that"}, {"start": 3381.04, "end": 3385.52, "text": " training we are estimating the running mean as a deviation so let's see what that would look like"}, {"start": 3386.56, "end": 3391.12, "text": " let me basically take the mean here that we are estimating on the batch and let me call this"}, {"start": 3391.12, "end": 3406.4, "text": " b and mean on the i-th iteration and then here this is b and sd d b and sd d at i okay"}, {"start": 3407.04, "end": 3415.2799999999997, "text": " and the mean comes here and the sd d comes here so so far I've done nothing I've just moved around"}, {"start": 3415.2799999999997, "end": 3419.68, "text": " and I created these extra variables for the mean and standard deviation and I've put them here"}, {"start": 3419.68, "end": 3424.24, "text": " so so far nothing has changed but what we're going to do now is we're going to keep a running mean"}, {"start": 3424.24, "end": 3429.9199999999996, "text": " of both of these values during training so let me swing up here and let me create a b and mean"}, {"start": 3429.9199999999996, "end": 3437.8399999999997, "text": " underscore running and I'm going to initialize it at zeros and then b and sd d running"}, {"start": 3438.64, "end": 3447.2, "text": " which I'll initialize at once because in the beginning because of the way we initialized w1"}, {"start": 3447.2, "end": 3453.4399999999996, "text": " and b1 hpx will be roughly unit Gaussian so the mean will be roughly zero and extend deviation"}, {"start": 3453.4399999999996, "end": 3458.64, "text": " roughly one so I'm going to initialize these that way but then here I'm going to update these"}, {"start": 3459.4399999999996, "end": 3466.3199999999997, "text": " and in PyTorch these mean and standard deviation that are running they're not actually part of"}, {"start": 3466.3199999999997, "end": 3470.16, "text": " the gradient based optimization we're never going to derive gradients with respect to 
them"}, {"start": 3470.16, "end": 3475.3599999999997, "text": " they're they're updated on the side of training and so what we're going to do here is we're going to"}, {"start": 3475.36, "end": 3482.2400000000002, "text": " say with torsched up no grad telling PyTorch that the update here is not supposed to be building"}, {"start": 3482.2400000000002, "end": 3489.2000000000003, "text": " out a graph because there will be no dot backward but this running mean is basically going to be 0.99"}, {"start": 3490.2400000000002, "end": 3502.56, "text": " nine times the current value plus 0.001 times the best value this new mean and in the same way bnstd"}, {"start": 3502.56, "end": 3511.68, "text": " running will be mostly what it used to be but it will receive a small update in the direction of"}, {"start": 3511.68, "end": 3518.16, "text": " what the current standard deviation is and as you're seeing here this update is outside and on"}, {"start": 3518.16, "end": 3523.6, "text": " the side of the gradient based optimization and it's simply being updated not using gradient"}, {"start": 3523.6, "end": 3534.56, "text": " sent is just being updated using a jenky like smooth sort of running mean manner and so while the"}, {"start": 3534.56, "end": 3538.96, "text": " network is training and these preactivations are sort of changing and shifting around during"}, {"start": 3538.96, "end": 3543.6, "text": " during back propagation we are keeping track of the typical mean and standard deviation"}, {"start": 3543.6, "end": 3546.64, "text": " and rest of the mean them once and when I run this"}, {"start": 3546.64, "end": 3553.68, "text": " now I'm keeping track of this in a running manner and what we're hoping for of course is that the"}, {"start": 3553.68, "end": 3560.0, "text": " mean bn mean underscore running and bn mean underscore std are going to be very similar to the ones"}, {"start": 3560.0, "end": 3565.2799999999997, "text": " that we calculated here before and that way we don't need a second stage because we've sort of"}, {"start": 3565.2799999999997, "end": 3569.3599999999997, "text": " combined the two stages and we've put them on the side of each other if you want to look at it"}, {"start": 3569.3599999999997, "end": 3575.44, "text": " that way and this is how this is also implemented in the bastion realization layer in PyTorch so"}, {"start": 3575.44, "end": 3581.6, "text": " during training the exact same thing will happen and then later when you're using inference it will"}, {"start": 3581.6, "end": 3586.7200000000003, "text": " use the estimated running mean of both the mean and standard deviation of those hidden states"}, {"start": 3587.84, "end": 3592.32, "text": " so let's wait for the optimization to converge and hopefully the running mean and standard deviation"}, {"start": 3592.32, "end": 3597.36, "text": " are roughly equal to these two and then we can simply use it here and we don't need this stage"}, {"start": 3597.36, "end": 3602.7200000000003, "text": " of explicit calibration at the end okay so the optimization finished I'll rerun the explicit"}, {"start": 3602.72, "end": 3609.68, "text": " estimation and then the bn mean from the explicit estimation is here and bn mean from the running"}, {"start": 3609.68, "end": 3617.2799999999997, "text": " estimation during the during the optimization you can see it's very very similar it's not identical"}, {"start": 3617.2799999999997, "end": 3627.2799999999997, "text": " but it's pretty close and the same way bnstd is this and bnstd 
running is this as you can see that once"}, {"start": 3627.28, "end": 3633.28, "text": " again they are fairly similar values not identical but pretty close and so then here instead of"}, {"start": 3633.28, "end": 3641.2000000000003, "text": " being mean we can use the bn mean running instead of bnstd we can use bnstd running and hopefully"}, {"start": 3641.2000000000003, "end": 3647.84, "text": " the validation loss will not be impacted too much okay so it's basically identical and this way"}, {"start": 3647.84, "end": 3653.1200000000003, "text": " we've eliminated the need for this explicit stage of calibration because we are doing it in line"}, {"start": 3653.12, "end": 3657.2799999999997, "text": " over here okay so we're almost done with batch normalization there are only two more notes"}, {"start": 3657.2799999999997, "end": 3661.44, "text": " that I'd like to make number one I've skipped a discussion over what is this plus epsilon doing"}, {"start": 3661.44, "end": 3667.2, "text": " here this epsilon is usually like some small fixed number for example one in negative 5 by default"}, {"start": 3667.2, "end": 3672.3199999999997, "text": " and what it's doing is that it's basically preventing a division by zero in the case that the"}, {"start": 3672.3199999999997, "end": 3678.96, "text": " variance over your batch is exactly zero in that case here we normally have a division by zero"}, {"start": 3678.96, "end": 3683.52, "text": " but because of the plus epsilon this is going to become a small number in the denominator instead"}, {"start": 3683.52, "end": 3688.48, "text": " and things will be more well behaved so feel free to also add a plus epsilon here of a very small"}, {"start": 3688.48, "end": 3692.56, "text": " number it doesn't actually substantially change the result I'm going to skip it in our case just"}, {"start": 3692.56, "end": 3697.2, "text": " because this is unlikely to happen in our very simple example here and the second thing I want you"}, {"start": 3697.2, "end": 3702.48, "text": " to notice is that we're being wasteful here and it's very subtle but right here where we are"}, {"start": 3702.48, "end": 3708.88, "text": " adding the bias into each preact these biases now are actually useless because we're adding them"}, {"start": 3708.88, "end": 3715.12, "text": " to the each preact but then we are calculating the mean for every one of these neurons and subtracting"}, {"start": 3715.12, "end": 3722.0, "text": " it so whatever bias you add here is going to get subtracted right here and so these biases are"}, {"start": 3722.0, "end": 3726.1600000000003, "text": " not doing anything in fact but they're being subtracted out and they don't impact the rest of"}, {"start": 3726.1600000000003, "end": 3731.36, "text": " the calculation so if you look at b1.grad it's actually going to be zero because it being subtracted"}, {"start": 3731.36, "end": 3736.08, "text": " out and doesn't actually have any effect and so whenever you're using bachelor normalization layers"}, {"start": 3736.08, "end": 3740.08, "text": " then if you have any weight layers before like a linear or a conv or something like that"}, {"start": 3740.64, "end": 3746.24, "text": " you're better off coming here and just like not using bias so you don't want to use bias"}, {"start": 3746.24, "end": 3751.84, "text": " and then here you don't want to add it because that's that's spurious instead we have this"}, {"start": 3751.84, "end": 3756.7999999999997, "text": " bachelor normalization bias here and that bachelor 
normalization bias is now in charge of the"}, {"start": 3756.7999999999997, "end": 3763.92, "text": " biasing of this distribution instead of this b1 that we had here originally and so basically"}, {"start": 3763.92, "end": 3768.48, "text": " the bachelor normalization layer has its own bias and there's no need to have a bias in the layer"}, {"start": 3768.48, "end": 3773.2000000000003, "text": " before it because that bias is going to be subtracted out anyway so that's the other small detail"}, {"start": 3773.2000000000003, "end": 3778.48, "text": " to be careful with sometimes it's not going to do anything catastrophic this b1 will just be useless"}, {"start": 3778.48, "end": 3783.04, "text": " it will never get any gradient it will not learn it will stay constant and it's just wasteful"}, {"start": 3783.04, "end": 3788.88, "text": " but it doesn't actually really impact anything otherwise okay so I rearranged the code a little bit"}, {"start": 3788.88, "end": 3792.8, "text": " with comments and I just wanted to give a very quick summary of the bachelor normalization layer"}, {"start": 3792.8, "end": 3798.88, "text": " we are using bachelor normalization to control the statistics of activations in the neural net"}, {"start": 3799.6000000000004, "end": 3804.1600000000003, "text": " it is common to sprinkle bachelor normalization layer across the neural net and usually we will"}, {"start": 3804.1600000000003, "end": 3810.32, "text": " place it after layers that have multiplications like for example a linear layer or a convolutional"}, {"start": 3810.32, "end": 3817.6800000000003, "text": " layer which we may cover in the future now the bachelor normalization internally has parameters"}, {"start": 3817.68, "end": 3823.6, "text": " for the gain and the bias and these are trained using back propagation it also has two buffers"}, {"start": 3824.3999999999996, "end": 3829.2, "text": " the buffers are the mean and the standard deviation the running mean and the running mean of"}, {"start": 3829.2, "end": 3833.9199999999996, "text": " the standard deviation and these are not trained using back propagation these are trained using"}, {"start": 3833.9199999999996, "end": 3842.8799999999997, "text": " this janky update of kind of like a running mean update so these are sort of the parameters"}, {"start": 3842.88, "end": 3848.1600000000003, "text": " and the buffers of bachelor layer and then really what is doing is it's calculating the mean and"}, {"start": 3848.1600000000003, "end": 3853.6800000000003, "text": " standard deviation of the activations that are feeding into the bachelor layer over that batch"}, {"start": 3854.96, "end": 3860.8, "text": " then it's centering that batch to be unit Gaussian and then it's offsetting and scaling it by the"}, {"start": 3860.8, "end": 3867.36, "text": " learned bias and gain and then on top of that it's keeping track of the mean and standard deviation"}, {"start": 3867.36, "end": 3873.52, "text": " of the inputs and it's maintaining this running mean and standard deviation and this will later"}, {"start": 3873.52, "end": 3878.0, "text": " be used at inference so that we don't have to re-estimate the mean and standard deviation all the time"}, {"start": 3878.96, "end": 3883.28, "text": " and in addition that allows us to basically forward individual examples at test time"}, {"start": 3884.2400000000002, "end": 3887.1200000000003, "text": " so that's the bachelor normalization layer it's a fairly complicated layer"}, {"start": 3888.4, "end": 3892.56, "text": " 
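The "no bias before batch norm" point in PyTorch terms, as a small illustrative sketch (layer sizes are arbitrary):

```python
import torch
import torch.nn as nn

# whatever constant a preceding bias adds is removed by BatchNorm's mean subtraction,
# and BatchNorm has its own learnable shift (beta), so the linear bias is dead weight
hidden = nn.Sequential(
    nn.Linear(30, 200, bias=False),   # no bias: it would just be subtracted out
    nn.BatchNorm1d(200),              # supplies its own bias via beta
    nn.Tanh(),
)

x = torch.randn(32, 30)
print(hidden(x).shape)  # torch.Size([32, 200])
```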
but this is what it's doing internally now I wanted to show you a little bit of a real example"}, {"start": 3892.56, "end": 3899.44, "text": " so you can search ResNet which is a residual neural network and these are contact of neural"}, {"start": 3899.44, "end": 3905.68, "text": " arcs used for image classification and of course we haven't common dress nets in detail so I'm"}, {"start": 3905.68, "end": 3910.72, "text": " not going to explain all the pieces of it but for now just note that the image feeds into a"}, {"start": 3910.72, "end": 3915.68, "text": " ResNet on the top here and there's many many layers with repeating structure all the way to"}, {"start": 3915.68, "end": 3920.96, "text": " predictions of what's inside that image this repeating structure is made up of these blocks and"}, {"start": 3920.96, "end": 3927.12, "text": " these blocks are just sequentially stacked up in this deep neural network now the code for this"}, {"start": 3928.0, "end": 3935.28, "text": " the block basically that's used and repeated sequentially in series is called this bottleneck block"}, {"start": 3936.16, "end": 3940.64, "text": " and there's a lot here this is all PyTorch and of course we haven't covered all of it but I want"}, {"start": 3940.64, "end": 3946.16, "text": " to point out some small pieces of it here in the init is where we initialize the neural net so this"}, {"start": 3946.16, "end": 3951.3599999999997, "text": " code of block here is basically the kind of stuff we're doing here we're initializing all the layers and"}, {"start": 3951.3599999999997, "end": 3956.16, "text": " in the forward we are specifying how the neural net acts once you actually have the input so this"}, {"start": 3956.16, "end": 3964.48, "text": " code here is a long lines of what we're doing here and now these blocks are replicated and stacked"}, {"start": 3964.48, "end": 3971.52, "text": " up serially and that's what a residual network would be and so notice what's happening here come one"}, {"start": 3971.52, "end": 3978.64, "text": " these are convolutional layers and these convolutional layers basically they're the same thing as a linear"}, {"start": 3978.64, "end": 3985.2, "text": " layer except convolutional layers don't apply convolutional layers are used for images and so they"}, {"start": 3985.2, "end": 3990.96, "text": " have spatial structure and basically this linear multiplication and bias offset are done on patches"}, {"start": 3991.84, "end": 3997.92, "text": " instead of a map instead of the full input so because these images have structure spatial structure"}, {"start": 3997.92, "end": 4004.08, "text": " convolutional is just basically do wx plus b but they do it on overlapping patches of the input but"}, {"start": 4004.08, "end": 4009.92, "text": " otherwise is wx plus b then we have the normal layer which by default here is initialize to be a"}, {"start": 4009.92, "end": 4016.88, "text": " bashed norm in 2d so 2-dimensional bashed normalization layer and then we have a nonlinearity like relu so"}, {"start": 4016.88, "end": 4024.64, "text": " instead of here they use relu we are using 10h in this case but both both are just nonlinearities and"}, {"start": 4024.64, "end": 4029.92, "text": " you can just use them relatively interchangeably for very deep networks relu typically empirically"}, {"start": 4029.92, "end": 4036.0, "text": " work a bit better so see the motif that's being repeated here we have convolution, bashed normalization"}, {"start": 4036.0, "end": 4041.2799999999997, "text": " 
relu convolution, bashed normalization relu etc and then here this is residual connection that we"}, {"start": 4041.2799999999997, "end": 4047.68, "text": " haven't covered yet but basically that's the exact same pattern we have here we have a weight layer"}, {"start": 4047.68, "end": 4055.52, "text": " like a convolution or like a linear layer, bashed normalization and then 10h which is nonlinearity"}, {"start": 4055.52, "end": 4060.48, "text": " but basically a weight layer, a normalization layer and nonlinearity and that's the motif that"}, {"start": 4060.48, "end": 4064.96, "text": " you would be stacking up when you create these deep neural networks exactly as it's done here"}, {"start": 4065.6, "end": 4069.2799999999997, "text": " and one more thing I'd like you to notice is that here when they are initializing the"}, {"start": 4069.2799999999997, "end": 4076.7999999999997, "text": " comp layers like comp one by one the depth for that is right here and so it's initializing an nn.comp2d"}, {"start": 4076.8, "end": 4080.88, "text": " which is a convolutional layer in PyTorch and there's a bunch of keyword arguments here that I"}, {"start": 4080.88, "end": 4086.1600000000003, "text": " am not going to explain yet but you see how there's bias equals false the bias equals false is exactly"}, {"start": 4086.1600000000003, "end": 4091.52, "text": " for the same reason as bias is not used in our case you see how I raised the use of bias"}, {"start": 4092.1600000000003, "end": 4096.72, "text": " and the use of bias is spurious because after this weight layer there's the bashed normalization"}, {"start": 4096.72, "end": 4101.360000000001, "text": " and the bashed normalization subtracts that bias and that has its own bias so there's no need to"}, {"start": 4101.360000000001, "end": 4106.64, "text": " introduce these spurious parameters it wouldn't hurt performance it's just useless and so because"}, {"start": 4106.64, "end": 4111.92, "text": " they have this motif of calm bashed normalization they don't need a bias here because there's a bias"}, {"start": 4111.92, "end": 4118.08, "text": " inside here so by the way this example here is very easy to find just do a resonant PyTorch"}, {"start": 4119.360000000001, "end": 4124.88, "text": " and it's this example here so this is kind of like the stock implementation of a residual neural"}, {"start": 4124.88, "end": 4129.92, "text": " network in PyTorch and you can find that here but of course I haven't covered many of these parts yet"}, {"start": 4130.64, "end": 4135.52, "text": " and I would also like to briefly descend into the definitions of these PyTorch layers and the"}, {"start": 4135.52, "end": 4140.0, "text": " parameters that they take now instead of a convolutional layer we're going to look at a linear layer"}, {"start": 4141.280000000001, "end": 4144.88, "text": " because that's the one that we're using here this is a linear layer and I haven't covered"}, {"start": 4144.88, "end": 4149.76, "text": " cover the convolutions yet but as I mentioned convolutions are basically linear layers except on patches"}, {"start": 4151.200000000001, "end": 4156.72, "text": " so a linear layer performs a wx plus b except here they're calling the w a transpose"}, {"start": 4158.72, "end": 4163.040000000001, "text": " so the call is wx plus b very much like we did here to initialize this layer you need to know"}, {"start": 4163.04, "end": 4170.96, "text": " the fan in the fan out and that's so that they can initialize this w this is the fan in and the"}, 
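A simplified sketch of the repeating weight layer → normalization layer → nonlinearity motif discussed here; this is not the torchvision Bottleneck code (the residual connection and 1x1 convolutions are omitted), just the pattern:

```python
import torch
import torch.nn as nn

def conv_bn_relu(c_in, c_out):
    # the repeating motif: weight layer -> normalization layer -> nonlinearity;
    # the conv carries no bias because the BatchNorm that follows would subtract it anyway
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(c_out),
        nn.ReLU(),
    )

stack = nn.Sequential(conv_bn_relu(3, 16), conv_bn_relu(16, 16), conv_bn_relu(16, 32))
x = torch.randn(8, 3, 32, 32)    # a small batch of toy RGB images
print(stack(x).shape)            # torch.Size([8, 32, 32, 32])
```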
{"start": 4170.96, "end": 4176.88, "text": " fan out so they know how big the weight matrix should be you need to also pass in whether you"}, {"start": 4176.88, "end": 4183.2, "text": " whether or not you want a bias and if you set it to false the no bias will be inside this layer"}, {"start": 4184.32, "end": 4189.2, "text": " and you may want to do that exactly like in our case if your layer is followed by a normalization"}, {"start": 4189.2, "end": 4195.679999999999, "text": " layer such as bach norm so this allows you to basically disable bias in terms of the initialization"}, {"start": 4195.679999999999, "end": 4202.08, "text": " if we swing down here this is reporting the variables used inside this linear layer and our linear"}, {"start": 4202.08, "end": 4207.679999999999, "text": " layer here has two parameters the weight and the bias in the same way they have a weight and a bias"}, {"start": 4208.5599999999995, "end": 4213.36, "text": " and they're talking about how they initialize it by default so by default PyTorch will initialize"}, {"start": 4213.36, "end": 4222.24, "text": " your weights by taking the fan in and then doing one over fan in square root and then instead of a"}, {"start": 4222.24, "end": 4228.4, "text": " normal distribution they are using a uniform distribution so it's very much the same thing but they"}, {"start": 4228.4, "end": 4232.96, "text": " are using a one instead of five over three so there's no gain being calculated here the gain is"}, {"start": 4232.96, "end": 4238.88, "text": " just one but otherwise is exactly one over the square root of fan in exactly as we have here"}, {"start": 4238.88, "end": 4246.32, "text": " so one over the square root of k is the is this scale of the weights but when they are drawing the"}, {"start": 4246.32, "end": 4251.4400000000005, "text": " numbers they're not using a Gaussian by default they're using a uniform distribution by default"}, {"start": 4251.4400000000005, "end": 4256.88, "text": " and so they draw uniformly from negative square root of k to square root of k but it's the"}, {"start": 4256.88, "end": 4263.04, "text": " exact same thing and the same motivation from for with respect to what we've seen in this lecture"}, {"start": 4263.04, "end": 4268.4800000000005, "text": " and the reason they're doing this is if you have a roughly Gaussian input this will ensure that out"}, {"start": 4268.48, "end": 4274.32, "text": " of this layer you will have a roughly Gaussian output and you you basically achieve that by"}, {"start": 4274.32, "end": 4281.36, "text": " scaling the weights by one over the square root of fan in so that's what this is doing and then the"}, {"start": 4281.36, "end": 4285.12, "text": " second thing is the bachelor normalization layer so let's look at what that looks like in PyTorch"}, {"start": 4286.16, "end": 4290.08, "text": " so here we have a one-dimensional bachelor normalization layer exactly as we are using here"}, {"start": 4290.879999999999, "end": 4294.639999999999, "text": " and there are a number of keyword arguments going into it as well so we need to know the number of"}, {"start": 4294.64, "end": 4300.8, "text": " features for us that is 200 and that is needed so that we can initialize these parameters here"}, {"start": 4300.8, "end": 4307.92, "text": " the gain the bias and the buffers for the running mean and serredivation then they need to know the"}, {"start": 4307.92, "end": 4312.8, "text": " value of epsilon here and by default this is one negative five you don't typically 
change this"}, {"start": 4312.8, "end": 4319.360000000001, "text": " too much then they need to know the momentum and the momentum here as they explain is basically used"}, {"start": 4319.36, "end": 4325.04, "text": " for these running mean and running standard deviation so by default the momentum here is point one"}, {"start": 4325.04, "end": 4332.719999999999, "text": " the momentum we are using here in this example is 0.001 and basically you may want to change this"}, {"start": 4332.719999999999, "end": 4338.88, "text": " sometimes and roughly speaking if you have a very large batch size then typically what you'll see"}, {"start": 4338.88, "end": 4342.96, "text": " is that when you estimate the mean and the same deviation for every single batch size if it's"}, {"start": 4342.96, "end": 4349.04, "text": " large enough you're going to get roughly the same result and so therefore you can use slightly higher"}, {"start": 4349.04, "end": 4356.08, "text": " momentum like point one but for a batch size s small is 32 the mean and understanding deviation here"}, {"start": 4356.08, "end": 4360.48, "text": " might take on slightly different numbers because there's only 32 examples we are using to estimate"}, {"start": 4360.48, "end": 4365.36, "text": " the mean and standard deviation so the value is changing around a lot and if your momentum is"}, {"start": 4365.36, "end": 4372.32, "text": " point one that that might not be good enough for this value to settle and converge to the actual mean"}, {"start": 4372.32, "end": 4377.5199999999995, "text": " and standard deviation over the entire training set and so basically if your batch size is very small"}, {"start": 4377.52, "end": 4382.4800000000005, "text": " momentum of point one is potentially dangerous and it might make it so that the running mean and"}, {"start": 4382.4800000000005, "end": 4387.280000000001, "text": " standard deviation is thrashing too much during training and it's not actually converging properly"}, {"start": 4389.4400000000005, "end": 4394.400000000001, "text": " affine equals true determines whether this batch normalization layer has these learnable affine"}, {"start": 4394.400000000001, "end": 4401.6, "text": " parameters the gain and the bias and this is almost always kept the true I'm not actually sure"}, {"start": 4401.6, "end": 4409.280000000001, "text": " why you would want to change this to false then track running stats is determining whether or not"}, {"start": 4409.280000000001, "end": 4415.360000000001, "text": " batch normalization layer of bite or chip will be doing this and one reason you may you may want"}, {"start": 4415.360000000001, "end": 4420.96, "text": " to skip the running stats is because you may want to for example estimate them at the end as a"}, {"start": 4420.96, "end": 4425.76, "text": " stage two like this and in that case you don't want the batch normalization layer to be doing all"}, {"start": 4425.76, "end": 4431.360000000001, "text": " this extra compute that you're not going to use and finally we need to know which device we're"}, {"start": 4431.36, "end": 4437.12, "text": " going to run this batch normalization on a CPU or a GPU and what the data type should be a half"}, {"start": 4437.12, "end": 4442.48, "text": " precision single precision double precision and so on so that's the batch normalization layer"}, {"start": 4442.48, "end": 4447.44, "text": " otherwise they linked to the paper is the same formula we've implemented and everything is the"}, {"start": 4447.44, "end": 4453.12, 
"text": " same exactly as we've done here okay so that's everything that I wanted to cover for this lecture"}, {"start": 4453.759999999999, "end": 4457.44, "text": " really what I wanted to talk about is the importance of understanding the activations and"}, {"start": 4457.44, "end": 4462.0, "text": " the gradients and their statistics in neural networks and this becomes increasingly important"}, {"start": 4462.0, "end": 4466.96, "text": " especially as you make your neural networks bigger larger and deeper we looked at the distributions"}, {"start": 4466.96, "end": 4472.16, "text": " basically at the output layer and we saw that if you have two confident misperdictions because"}, {"start": 4472.16, "end": 4476.96, "text": " the activations are too messed up at the last layer you can end up with these hockey stick losses"}, {"start": 4477.5199999999995, "end": 4481.919999999999, "text": " and if you fix this you get a better loss at the end of training because your training is not"}, {"start": 4481.919999999999, "end": 4486.719999999999, "text": " doing wasteful work then we also saw that we need to control the activations we don't want them to"}, {"start": 4486.72, "end": 4493.12, "text": " you know squash to zero or explore to infinity and because that you can run into a lot of trouble with"}, {"start": 4493.12, "end": 4498.08, "text": " all of these nonlinearities in these neural nets and basically you want everything to be fairly homogeneous"}, {"start": 4498.08, "end": 4502.88, "text": " throughout the neural net you want roughly Gaussian activations throughout the neural net let me"}, {"start": 4502.88, "end": 4508.64, "text": " talk about okay if we want roughly Gaussian activations how do we scale these weight matrices"}, {"start": 4508.64, "end": 4514.0, "text": " and biases during initialization of the neural net so that we don't get you know so everything is"}, {"start": 4514.0, "end": 4521.36, "text": " as controlled as possible so that give us a large boost in improvement and then I talked about how"}, {"start": 4521.36, "end": 4529.2, "text": " that strategy is not actually possible for much much deeper neural nets because when you have"}, {"start": 4529.2, "end": 4534.4, "text": " much deeper neural nets with lots of different types of layers it becomes really really hard to"}, {"start": 4534.4, "end": 4539.76, "text": " precisely set the weights and the biases in such a way that the activations are roughly uniform"}, {"start": 4539.76, "end": 4544.88, "text": " throughout the neural net so then I introduced the notion of the normalization layer now there are"}, {"start": 4544.88, "end": 4550.24, "text": " many normalization layers that that people use in practice, Beshaw normalization layer normalization"}, {"start": 4550.24, "end": 4554.64, "text": " this is normalization group normalization we haven't covered most of them but I've introduced"}, {"start": 4554.64, "end": 4559.76, "text": " the first one and also the one that I believe came out first and that's called Beshaw normalization"}, {"start": 4560.64, "end": 4565.12, "text": " and we saw how Beshaw normalization works this is a layer that you can sprinkle throughout your"}, {"start": 4565.12, "end": 4570.96, "text": " deep neural net and the basic idea is if you want roughly Gaussian activations well then take"}, {"start": 4570.96, "end": 4577.2, "text": " your activations and take the mean and the standard deviation and center your data and you can do"}, {"start": 4577.2, "end": 4583.76, "text": " that because the 
centering operation is differentiable but on top of that we actually had to add a lot"}, {"start": 4583.76, "end": 4588.5599999999995, "text": " of bells and whistles and that gave you a sense of the complexities of the Beshaw normalization layer"}, {"start": 4588.5599999999995, "end": 4593.36, "text": " because now we're centering the data that's great but suddenly we need the gain and the bias"}, {"start": 4593.36, "end": 4599.04, "text": " and now those are trainable and then because we are coupling all the training examples now suddenly"}, {"start": 4599.04, "end": 4604.48, "text": " the question is how do you do the inference or to do the inference we need to now estimate"}, {"start": 4604.48, "end": 4611.04, "text": " these mean and standard deviation once or the entire training set and then use those at inference"}, {"start": 4611.759999999999, "end": 4616.719999999999, "text": " but then no one likes to do stage two so instead we fold everything into the Beshaw normalization"}, {"start": 4616.719999999999, "end": 4621.28, "text": " layer during training and try to estimate these in the running manner so that everything is a bit"}, {"start": 4621.28, "end": 4628.8, "text": " simpler and that gives us the Beshaw normalization layer and as I mentioned no one likes this layer"}, {"start": 4629.44, "end": 4635.84, "text": " it causes a huge amount of bugs and intuitively it's because it is coupling examples"}, {"start": 4636.96, "end": 4642.5599999999995, "text": " in the form of passive and neural net and I've shocked myself in the foot with this layer"}, {"start": 4643.2, "end": 4647.2, "text": " over and over again in my life and I don't want you to suffer the same"}, {"start": 4647.2, "end": 4653.92, "text": " so basically try to avoid it as much as possible some of the other alternatives to these layers are"}, {"start": 4653.92, "end": 4659.2, "text": " for example group normalization or layer normalization and those have become more common in more"}, {"start": 4659.2, "end": 4665.2, "text": " recent deep learning but we haven't covered those yet but definitely Beshaw normalization was very"}, {"start": 4665.2, "end": 4670.32, "text": " influential at the time when it came out in roughly 2015 because it was kind of the first time"}, {"start": 4670.32, "end": 4676.72, "text": " that you could train reliably much deeper neural nets and fundamentally the reason for that is"}, {"start": 4676.72, "end": 4682.08, "text": " because this layer was very effective at controlling the statistics of the activations in the neural net"}, {"start": 4683.12, "end": 4689.12, "text": " so that's the story so far and that's all I wanted to cover and in the future lecture so"}, {"start": 4689.12, "end": 4694.16, "text": " hopefully we can start going into recurring neural nets and recurring neural nets as we'll see"}, {"start": 4694.16, "end": 4700.240000000001, "text": " are just very very deep networks because you you unrolled the loop and when you actually optimize"}, {"start": 4700.24, "end": 4707.36, "text": " these neural nets and that's where a lot of this analysis around the activations statistics"}, {"start": 4707.36, "end": 4712.96, "text": " and all these normalization layers will become very very important for good performance so we'll"}, {"start": 4712.96, "end": 4719.5199999999995, "text": " see that next time bye okay so I lied I would like us to do one more summary here as a bonus and"}, {"start": 4719.5199999999995, "end": 4724.08, "text": " I think it's useful as to have one more 
summary of everything I've presented in this lecture but"}, {"start": 4724.08, "end": 4728.32, "text": " also I would like us to start by torturing our code a little bit so it looks much more like what"}, {"start": 4728.32, "end": 4734.08, "text": " you would encounter in PyTorch so you'll see that I will structure our code into these modules"}, {"start": 4734.08, "end": 4741.679999999999, "text": " like a linear module and a bachelor module and I'm putting the code inside these modules so that we"}, {"start": 4741.679999999999, "end": 4745.36, "text": " can construct neural networks very much like we would construct the in PyTorch and I will go through"}, {"start": 4745.36, "end": 4751.84, "text": " this in detail so we'll create our neural net then we will do the optimization loop as we did before"}, {"start": 4752.5599999999995, "end": 4756.16, "text": " and then the one more thing that I want to do here is I want to look at the activations statistics"}, {"start": 4756.16, "end": 4761.44, "text": " both in the forward pass and in the backward pass and then here we have the evaluation and sampling"}, {"start": 4761.44, "end": 4767.84, "text": " just like before so let me rewind all the way up here and go a little bit slower so here I'm creating"}, {"start": 4767.84, "end": 4773.28, "text": " a linear layer you'll notice that Torch.nn has lots of different types of layers and one of those"}, {"start": 4773.28, "end": 4778.48, "text": " layers is the linear layer Torch.nn.nl it takes a number of input features output features whether"}, {"start": 4778.48, "end": 4783.28, "text": " or not we should have bias and then the device that we want to place this layer on and the data type"}, {"start": 4783.28, "end": 4789.679999999999, "text": " so I will omit these two but otherwise we have the exact same thing we have the fan in which is"}, {"start": 4789.679999999999, "end": 4795.44, "text": " the number of inputs, fan out the number of outputs and whether or not we want to use a bias and"}, {"start": 4795.44, "end": 4801.679999999999, "text": " internally inside this layer there's a weight and a bias if you like it it is typical to initialize"}, {"start": 4801.679999999999, "end": 4807.599999999999, "text": " the weight using say random numbers drawn from a Gaussian and then here's the coming initialization"}, {"start": 4807.6, "end": 4813.04, "text": " that we discussed already in this lecture and that's a good default and also the default that I"}, {"start": 4813.04, "end": 4819.120000000001, "text": " believe PyTorch chooses and by default the bias is usually initialized to zeros. Now when you call"}, {"start": 4819.120000000001, "end": 4825.6, "text": " this module this will basically calculate W times x plus B if you have NB and then when you also"}, {"start": 4825.6, "end": 4831.360000000001, "text": " call that parameters on this module it will return the tensors that are the parameters of this layer."}, {"start": 4831.36, "end": 4838.5599999999995, "text": " Now next we have the Bachelormalization layer so I've written that here and this is very"}, {"start": 4838.5599999999995, "end": 4846.639999999999, "text": " similar to PyTorch's and then that Bachelorm 1D layer as shown here. 
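A reconstruction of the minimal Linear module along the lines described here (Kaiming-style weight scaling, optional bias, a __call__ that computes Wx + b, and a parameters() method); not a verbatim copy of the notebook:

```python
import torch

class Linear:
    def __init__(self, fan_in, fan_out, bias=True):
        # Kaiming-style scaling: roughly unit-Gaussian inputs stay roughly unit Gaussian
        self.weight = torch.randn(fan_in, fan_out) / fan_in**0.5
        self.bias = torch.zeros(fan_out) if bias else None

    def __call__(self, x):
        self.out = x @ self.weight            # W x
        if self.bias is not None:
            self.out += self.bias             # ... + b
        return self.out

    def parameters(self):
        return [self.weight] + ([] if self.bias is None else [self.bias])

layer = Linear(30, 200, bias=False)           # bias=False when a BatchNorm layer follows
print(layer(torch.randn(32, 30)).shape)       # torch.Size([32, 200])
```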
So I'm kind of taking these"}, {"start": 4846.639999999999, "end": 4851.839999999999, "text": " three parameters here the dimensionality the epsilon that we'll use in the division and the"}, {"start": 4851.839999999999, "end": 4856.08, "text": " momentum that we will use in keeping track of these running stats the running mean and the running"}, {"start": 4856.08, "end": 4862.24, "text": " variance. Now PyTorch actually takes quite a few more things but I'm assuming some of their settings"}, {"start": 4862.24, "end": 4867.12, "text": " so for us I'll find will be true that means that we will be using a gamma beta after denormalization."}, {"start": 4867.92, "end": 4871.6, "text": " The track running stats will be true so we will be keeping track of the running mean and the"}, {"start": 4871.6, "end": 4878.0, "text": " running variance in the in the classroom. Our device by default is the CPU and the data type by"}, {"start": 4878.0, "end": 4886.08, "text": " default is float float 32. So those are the defaults otherwise we are taking all the same parameters"}, {"start": 4886.08, "end": 4891.6, "text": " in this Bachelorm layer so first I'm just saving them. Now here's something new there's that"}, {"start": 4891.6, "end": 4896.24, "text": " training which by default is true and PyTorch and then modules also have this attribute that"}, {"start": 4896.24, "end": 4902.08, "text": " training and that's because many modules and Bachelorm is included in that have a different"}, {"start": 4902.08, "end": 4906.96, "text": " behavior whether you are training your own or not or whether you are running it in an evaluation"}, {"start": 4906.96, "end": 4911.92, "text": " mode and calculating your evaluation laws or using it for inference on some test examples."}, {"start": 4912.8, "end": 4917.04, "text": " And Bachelorm is an example of this because when we are training we are going to be using the"}, {"start": 4917.04, "end": 4922.08, "text": " mean and the variance estimated from the current batch but during inference we are using the running"}, {"start": 4922.08, "end": 4928.08, "text": " mean and running variance and so also if we are training we are updating mean and variance but if"}, {"start": 4928.08, "end": 4933.36, "text": " we are testing then these are not being updated they're kept fixed and so this flag is necessary"}, {"start": 4933.36, "end": 4938.88, "text": " and by default true just like in PyTorch. Now the parameters of Bachelorm 1D are the gamma and the"}, {"start": 4938.88, "end": 4946.799999999999, "text": " beta here and then the running mean and running variance are called buffers in PyTorch nomenclature"}, {"start": 4947.5199999999995, "end": 4954.08, "text": " and these buffers are trained using exponential moving average here explicitly and they are not"}, {"start": 4954.08, "end": 4958.5599999999995, "text": " part of the back propagation and stochastic gradient descent so they are not sort of like parameters"}, {"start": 4958.56, "end": 4964.56, "text": " of this layer and that's why when we have a parameters here we only return gamma and beta"}, {"start": 4964.56, "end": 4969.04, "text": " we do not return the mean and the variance this is trained sort of like internally here"}, {"start": 4970.160000000001, "end": 4975.6, "text": " every forward pass using exponential moving average. 
So that's the initialization."}, {"start": 4976.8, "end": 4981.84, "text": " Now in a forward pass if we are training then we use the mean and the variance estimated"}, {"start": 4981.84, "end": 4990.400000000001, "text": " by the batch only a block of paper here. We calculate the mean and the variance. Now up above I was"}, {"start": 4990.400000000001, "end": 4995.6, "text": " estimating the standard deviation and keeping track of the standard deviation here in the running"}, {"start": 4995.6, "end": 5000.56, "text": " standard deviation instead of running variance but let's follow the paper exactly here they"}, {"start": 5000.56, "end": 5005.2, "text": " calculate the variance which is the standard deviation squared and that's what's kept track of"}, {"start": 5005.2, "end": 5010.96, "text": " in the running variance instead of a running standard deviation but those two would be very"}, {"start": 5010.96, "end": 5017.52, "text": " very similar I believe. If we are not training then we use running mean and variance we normalize"}, {"start": 5019.04, "end": 5023.52, "text": " and then here I am calculating the output of this layer and I'm also assigning it to an"}, {"start": 5023.52, "end": 5030.16, "text": " attribute called dot out. Now dot out is something that I'm using in our modules here. This is not"}, {"start": 5030.16, "end": 5034.72, "text": " what you would find in PyTorch we are slightly deviating from it. I'm creating a dot out because I"}, {"start": 5034.72, "end": 5040.32, "text": " would like to very easily maintain all those variables so that we can create statistics of them"}, {"start": 5040.32, "end": 5046.24, "text": " and plot them but PyTorch and modules will not have a dot out attribute. And finally here we are"}, {"start": 5046.24, "end": 5052.88, "text": " updating the buffers using again as I mentioned exponential moving average given the provided momentum"}, {"start": 5052.88, "end": 5058.16, "text": " and importantly you'll notice that I'm using the torshtap no-grat context manager and I'm doing this"}, {"start": 5058.16, "end": 5063.36, "text": " because if we don't use this then PyTorch will start building out an entire computational graph"}, {"start": 5063.36, "end": 5068.32, "text": " out of these tensors because it is expecting that we will eventually call a dot backward but we"}, {"start": 5068.32, "end": 5071.679999999999, "text": " are never going to be calling that backward on anything that includes running mean and running"}, {"start": 5071.679999999999, "end": 5077.36, "text": " variance. So that's why we need to use this context manager so that we are not sort of maintaining"}, {"start": 5077.36, "end": 5082.5599999999995, "text": " them using all this additional memory. So this will make it more efficient and it's just telling"}, {"start": 5082.5599999999995, "end": 5086.4, "text": " PyTorch that they will need no backward. We just have a bunch of tensors we want to update them"}, {"start": 5086.4, "end": 5093.44, "text": " that's it and then we return. Okay now scrolling down we have the 10H layer. 
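A reconstruction of the BatchNorm1d module as described here: gamma/beta trained by backprop, running buffers updated with an exponential moving average under torch.no_grad, a training flag, and a .out attribute kept for plotting. Again a sketch along the described lines, not the exact notebook code:

```python
import torch

class BatchNorm1d:
    def __init__(self, dim, eps=1e-5, momentum=0.1):
        self.eps, self.momentum = eps, momentum
        self.training = True                     # batch stats in training, buffers at eval time
        # parameters, trained with backpropagation
        self.gamma = torch.ones(dim)
        self.beta = torch.zeros(dim)
        # buffers, trained with the running exponential-moving-average update
        self.running_mean = torch.zeros(dim)
        self.running_var = torch.ones(dim)

    def __call__(self, x):
        if self.training:
            xmean = x.mean(0, keepdim=True)      # batch mean
            xvar = x.var(0, keepdim=True)        # batch variance (the paper tracks variance, not std)
        else:
            xmean, xvar = self.running_mean, self.running_var
        xhat = (x - xmean) / torch.sqrt(xvar + self.eps)   # normalize to unit variance
        self.out = self.gamma * xhat + self.beta           # scale and shift
        if self.training:
            with torch.no_grad():                # buffers are not part of the computational graph
                self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * xmean
                self.running_var = (1 - self.momentum) * self.running_var + self.momentum * xvar
        return self.out

    def parameters(self):
        return [self.gamma, self.beta]           # the buffers are deliberately excluded
```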
This is very very"}, {"start": 5093.44, "end": 5100.16, "text": " similar to torshtap 10H and it doesn't do too much it just calculates 10H as you might expect."}, {"start": 5100.48, "end": 5106.5599999999995, "text": " So that's torshtap 10H and there's no parameters in this layer but because these are layers"}, {"start": 5107.44, "end": 5114.96, "text": " it now becomes very easy to sort of like stack them up into basically just a list and we can do all"}, {"start": 5114.96, "end": 5119.759999999999, "text": " the initializations that we're used to. So we have the initial sort of embedding matrix we have"}, {"start": 5119.76, "end": 5124.8, "text": " our layers and we can call them sequentially and then again with torshtap no grad there's some"}, {"start": 5124.8, "end": 5129.76, "text": " initializations here. So we want to make the output softmax a bit less confident like we saw"}, {"start": 5130.320000000001, "end": 5135.280000000001, "text": " and in addition to that because we are using a six layer multi layer perceptron here so you see how"}, {"start": 5135.280000000001, "end": 5142.0, "text": " I'm stacking linear 10H linear 10H etc. I'm going to be using the gain here and I'm going to play"}, {"start": 5142.0, "end": 5146.08, "text": " with this in a second so you'll see how when we change this what happens to the statistics."}, {"start": 5146.08, "end": 5151.92, "text": " Finally the primers are basically the embedding matrix and all the parameters in all the layers"}, {"start": 5151.92, "end": 5157.36, "text": " and notice here I'm using a double list comprehension if you want to call it that but for every layer"}, {"start": 5157.36, "end": 5162.48, "text": " in layers and for every parameter in each of those layers we are just stacking up all those"}, {"start": 5162.48, "end": 5170.8, "text": " piece all those parameters. Now in total we have 46,000 parameters and I'm telling PyTorch that all"}, {"start": 5170.8, "end": 5179.92, "text": " of them require gradient. Then here we have everything here we are actually mostly used to."}, {"start": 5180.64, "end": 5185.2, "text": " We are sampling batch we are doing a forward pass the forward pass now is just the linear"}, {"start": 5185.2, "end": 5190.400000000001, "text": " application of all the layers in order followed by the cross entropy and then in the backward pass"}, {"start": 5190.400000000001, "end": 5194.64, "text": " you'll notice that for every single layer I now iterate over all the outputs and I'm telling"}, {"start": 5194.64, "end": 5200.16, "text": " PyTorch to retain the gradient of them and then here we are already used to all the all the"}, {"start": 5200.16, "end": 5205.44, "text": " gradients set to none do the backward to fill in the gradients do an update using the cast gradient"}, {"start": 5205.44, "end": 5212.0, "text": " send and then track some statistics and then I am going to break after a single iteration."}, {"start": 5212.0, "end": 5216.96, "text": " Now here in this cell in this diagram I am visualizing the histogram the histograms of the"}, {"start": 5216.96, "end": 5223.36, "text": " forward pass activations and I'm specifically doing it at the 10 each layers. So iterating over all"}, {"start": 5223.36, "end": 5231.12, "text": " the layers except for the very last one which is basically just the softmax layer. 
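A sketch of the Tanh module plus the layer stacking and parameter gathering described here, reusing the Linear class from the earlier sketch; the layer sizes are illustrative, not necessarily the lecture's exact configuration:

```python
import torch

class Tanh:
    def __call__(self, x):
        self.out = torch.tanh(x)    # no parameters, just the squashing nonlinearity
        return self.out
    def parameters(self):
        return []

# illustrative sizes
n_embd, block_size, n_hidden, vocab_size = 10, 3, 100, 27
C = torch.randn((vocab_size, n_embd))            # embedding table

layers = [                                       # uses the Linear class sketched earlier
    Linear(n_embd * block_size, n_hidden), Tanh(),
    Linear(n_hidden, n_hidden), Tanh(),
    Linear(n_hidden, vocab_size),
]
with torch.no_grad():
    layers[-1].weight *= 0.1                     # make the output softmax less confident
    for layer in layers[:-1]:
        if isinstance(layer, Linear):
            layer.weight *= 5/3                  # tanh gain, discussed below

# double list comprehension: every parameter of every layer, flattened into one list
parameters = [C] + [p for layer in layers for p in layer.parameters()]
for p in parameters:
    p.requires_grad = True
print(sum(p.nelement() for p in parameters))     # total parameter count
```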
If it is a 10"}, {"start": 5231.12, "end": 5235.44, "text": " each layer and I'm using a 10 each layer just because they have a finite output negative 1 to 1"}, {"start": 5235.44, "end": 5240.0, "text": " and so it's very easy to visualize here so you see negative 1 to 1 and it's a finite range and"}, {"start": 5240.0, "end": 5246.88, "text": " easy to work with. I take the out tensor from that layer into t and then I'm calculating the mean"}, {"start": 5246.88, "end": 5252.48, "text": " this 10 deviation and the percent saturation of t and the way I define the percent saturation is"}, {"start": 5252.48, "end": 5258.08, "text": " that t dot absolute value is greater than 0.97 so that means we are here at the tails of the 10"}, {"start": 5258.08, "end": 5262.799999999999, "text": " each and remember that when we are in the tails of the 10 each that will actually stop gradients"}, {"start": 5262.799999999999, "end": 5269.919999999999, "text": " so we don't want this to be too high. Now here I'm calling torshot histogram and then I am plotting"}, {"start": 5269.919999999999, "end": 5274.16, "text": " this histogram. So basically what this is doing is that every different type of layer and they"}, {"start": 5274.16, "end": 5281.04, "text": " all have a different color we are looking at how many values in these tensors take on any of the"}, {"start": 5281.04, "end": 5288.24, "text": " values below on this axis here. So the first layer is fairly saturated here at 20 percent so you"}, {"start": 5288.24, "end": 5293.6, "text": " can see that it's got tails here but then everything sort of stabilizes and if we had more layers"}, {"start": 5293.6, "end": 5298.72, "text": " here it would actually just stabilize at around the 10 deviation of about 0.65 and the saturation"}, {"start": 5298.72, "end": 5304.24, "text": " will be roughly 5 percent and the reason that this stabilizes and gives us a nice distribution here"}, {"start": 5304.24, "end": 5313.04, "text": " is because gain is set to 5 over 3. Now here this gain you see that by default we initialize"}, {"start": 5313.04, "end": 5317.92, "text": " with 1 over square root of fennel but then here during initialization I come in and I iterate"}, {"start": 5317.92, "end": 5323.92, "text": " our all the layers and if it's a linear layer I boost that by the gain. Now we saw that one"}, {"start": 5324.5599999999995, "end": 5331.2, "text": " so basically if we just do not use a gain then what happens? 
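A sketch of the forward-pass diagnostic described here (mean, std, and percent saturation per tanh layer, plus a histogram); it assumes the `layers` list from the previous sketch and that a forward pass has already filled each layer's .out, and it uses matplotlib for the plot:

```python
import torch
import matplotlib.pyplot as plt

plt.figure(figsize=(20, 4))
legends = []
for i, layer in enumerate(layers[:-1]):              # skip the final (output) layer
    if isinstance(layer, Tanh):
        t = layer.out.detach()
        saturated = (t.abs() > 0.97).float().mean().item() * 100   # % stuck in the flat tails
        print(f'layer {i} ({layer.__class__.__name__}): mean {t.mean().item():+.2f}, '
              f'std {t.std().item():.2f}, saturated: {saturated:.2f}%')
        hy, hx = torch.histogram(t, density=True)
        plt.plot(hx[:-1], hy)
        legends.append(f'layer {i} ({layer.__class__.__name__})')
plt.legend(legends)
plt.title('activation distribution')
plt.show()
```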
If I retraw this you will see that"}, {"start": 5331.2, "end": 5338.0, "text": " the standard deviation is shrinking and the saturation is coming to 0 and basically what's happening"}, {"start": 5338.0, "end": 5343.679999999999, "text": " is the first layer is you know pretty decent but then further layers are just kind of like shrinking"}, {"start": 5343.679999999999, "end": 5349.92, "text": " down to 0 and it's happening slowly but it's shrinking to 0 and the reason for that is when you"}, {"start": 5349.92, "end": 5357.84, "text": " just have a sandwich of linear layers alone then a then initializing our weights in this manner"}, {"start": 5357.84, "end": 5363.12, "text": " we saw previously would have conserved the standard deviation of 1 but because we have this"}, {"start": 5363.12, "end": 5370.4800000000005, "text": " interspersed 10H layers in there these 10H layers are squashing functions and so they take your"}, {"start": 5370.4800000000005, "end": 5376.64, "text": " distribution and they slightly squash it and so some gain is necessary to keep expanding it"}, {"start": 5377.2, "end": 5384.32, "text": " to fight the squashing so it just turns out that 5 over 3 is a good value so if we have something"}, {"start": 5384.32, "end": 5390.88, "text": " too small like 1 we saw that things will come towards 0 but if it's something too high let's do 2"}, {"start": 5392.4, "end": 5400.32, "text": " then here we see that well let me do something a bit more extreme because so it's a bit more visible"}, {"start": 5400.32, "end": 5407.679999999999, "text": " let's try 3 okay so we see here that the saturation is not going to be too large okay so 3 would"}, {"start": 5407.68, "end": 5416.16, "text": " create weight-o-saturated activations so 5 over 3 is a good setting for a sandwich of linear layers"}, {"start": 5416.16, "end": 5421.84, "text": " with 10H activations and it roughly stabilizes the standard deviation at a reasonable point"}, {"start": 5422.56, "end": 5428.08, "text": " now honestly I have no idea where 5 over 3 came from in PyTorch when we were looking at the"}, {"start": 5428.08, "end": 5434.320000000001, "text": " counting initialization I see empirically that it stabilizes the sandwich of linear and 10H"}, {"start": 5434.32, "end": 5438.719999999999, "text": " and that the saturation is in a good range but I don't actually know if this came out of some"}, {"start": 5438.719999999999, "end": 5444.16, "text": " math formula I tried searching briefly for where this comes from but I wasn't able to find anything"}, {"start": 5444.719999999999, "end": 5449.36, "text": " but certainly we see that empirically these are very nice ranges our saturation is roughly 5%"}, {"start": 5449.36, "end": 5456.0, "text": " which is a pretty good number and this is a good setting of the gain in this context similarly"}, {"start": 5456.0, "end": 5461.36, "text": " we can do the exact same thing with the gradients so here is a very same loop if it's a 10H"}, {"start": 5461.36, "end": 5465.5199999999995, "text": " but instead of taking the layer that out I'm taking the grad and then I'm also showing the mean"}, {"start": 5465.5199999999995, "end": 5470.48, "text": " and the standard deviation and I'm plotting the histogram of these values and so you'll see"}, {"start": 5470.48, "end": 5475.2, "text": " that the gradient distribution is fairly reasonable and in particular what we're looking for is that"}, {"start": 5475.2, "end": 5480.5599999999995, "text": " all the different layers in this sandwich 
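For reference, the 5/3 tanh gain is also what PyTorch's initialization helper reports; a quick check:

```python
import torch.nn.init as init

print(init.calculate_gain('linear'))  # 1.0        -> variance-preserving for a pure linear stack
print(init.calculate_gain('tanh'))    # 1.6666...  -> 5/3, to fight tanh's squashing
print(init.calculate_gain('relu'))    # 1.4142...  -> sqrt(2)
```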
has roughly the same gradient things are not shrinking"}, {"start": 5480.5599999999995, "end": 5486.0, "text": " or exploding so we can for example come here and we can take a look at what happens if this gain"}, {"start": 5486.0, "end": 5494.24, "text": " was way too small so this was 0.5 then you see the first of all the activations are shrinking to zero"}, {"start": 5494.24, "end": 5498.88, "text": " but also the gradients are doing something weird the gradients started out here and then now they're"}, {"start": 5498.88, "end": 5507.04, "text": " like expanding out and similarly if we for example have a two-hymogain select three then we see"}, {"start": 5507.04, "end": 5511.36, "text": " that also the gradients have there's some asymmetry going on where as you go into deeper and deeper"}, {"start": 5511.36, "end": 5516.4, "text": " layers the activations are also changing and so that's not what we want and in this case we saw"}, {"start": 5516.4, "end": 5522.4, "text": " that without use of besterm as we are going through right now we have to very carefully set those"}, {"start": 5522.4, "end": 5528.4, "text": " gains to get nice activations in both the forward pass and the backward pass now before we move on"}, {"start": 5528.4, "end": 5533.92, "text": " to bestermalization I would also like to take a look at what happens when we have no 10H units here"}, {"start": 5533.92, "end": 5540.799999999999, "text": " so erasing all the 10H nonlinearities but keeping the gain at 5 over 3 we now have just a giant"}, {"start": 5540.8, "end": 5546.400000000001, "text": " linear sandwich so let's see what happens to the activations as we saw before the correct gain"}, {"start": 5546.400000000001, "end": 5554.08, "text": " here is one that is the standard deviation preserving gain so 1.667 is too high and so what's"}, {"start": 5554.08, "end": 5560.72, "text": " going to happen now is the following I have to change this to be linear so we are because there's"}, {"start": 5560.72, "end": 5569.52, "text": " no more 10H layers and let me change this to linear as well so what we're seeing is the activations"}, {"start": 5569.52, "end": 5575.84, "text": " started out on the blue and have by layer 4 become very diffuse so what's happening to the"}, {"start": 5575.84, "end": 5583.120000000001, "text": " activations is this and with the gradients on the top layer the activation the gradient statistics"}, {"start": 5583.120000000001, "end": 5588.240000000001, "text": " are the purple and then they diminish as you go down deeper in the layers and so basically"}, {"start": 5588.240000000001, "end": 5592.400000000001, "text": " you have an asymmetry like in the neural net and you might imagine that if you have very deep"}, {"start": 5592.400000000001, "end": 5598.0, "text": " neural networks say like 50 layers or something like that this just this is not a good place to be"}, {"start": 5598.0, "end": 5604.8, "text": " also that's why before best normalization this was an incredibly tricky to to set in particular"}, {"start": 5604.8, "end": 5610.8, "text": " if this is too large of a gain this happens and if it's too little of a gain then this happens"}, {"start": 5610.8, "end": 5620.0, "text": " also the opposite of that basically happens here we have a shrinking and a diffusion depending on"}, {"start": 5620.0, "end": 5624.96, "text": " which direction you look at it from and so certainly this is not what you want and in this case the"}, {"start": 5624.96, "end": 5630.8, "text": " correct setting of the gain 
is exactly one just like we're doing at initialization and then we see"}, {"start": 5630.8, "end": 5638.0, "text": " that the statistics for the forward and the backward pass are well behaved and so the reason I want"}, {"start": 5638.0, "end": 5644.24, "text": " to show you this is the basically like getting neuralness to train before these normalization layers"}, {"start": 5644.24, "end": 5648.96, "text": " and before the use of advanced optimizers like Adam which we still have to cover and residual"}, {"start": 5648.96, "end": 5654.88, "text": " connections and so on training neuralness basically look like this it's like a total balancing act"}, {"start": 5654.88, "end": 5659.04, "text": " you have to make sure that everything is precisely orchestrated and you have to care about the"}, {"start": 5659.04, "end": 5663.84, "text": " activations and the gradients and their statistics and then maybe you can train something but it was"}, {"start": 5663.84, "end": 5668.0, "text": " basically impossible to train very deep networks and this is fundamentally the reason for that"}, {"start": 5668.0, "end": 5673.6, "text": " you'd have to be very very careful with your initialization the other point here is"}, {"start": 5673.6, "end": 5678.4800000000005, "text": " you might be asking yourself by the way I'm not sure if I covered this why do we need these 10H"}, {"start": 5678.96, "end": 5684.56, "text": " layers at all why do we include them and then have to worry about the gain and the reason for"}, {"start": 5684.56, "end": 5689.280000000001, "text": " that of course is that if you just have a stack of linear layers then certainly we're getting very"}, {"start": 5689.280000000001, "end": 5695.120000000001, "text": " easily nice activations and so on but this is just a massive linear sandwich and it turns out"}, {"start": 5695.120000000001, "end": 5700.8, "text": " that it collapses to a single linear layer in terms of its representation power so if you were to"}, {"start": 5700.8, "end": 5705.04, "text": " plot the output as a function of the input you're just getting a linear function no matter how many"}, {"start": 5705.04, "end": 5710.72, "text": " linear layers you stack up you still just end up with a linear transformation all the WX plus B's"}, {"start": 5710.72, "end": 5715.84, "text": " just collapse into a large WX plus B with slightly different W's as slightly different B"}, {"start": 5717.52, "end": 5721.92, "text": " but interestingly even though the forward pass collapses to just a linear layer because of"}, {"start": 5721.92, "end": 5728.56, "text": " backpapigation and the dynamics of the backward pass the optimization is really is not identical"}, {"start": 5728.56, "end": 5735.52, "text": " you actually end up with all kinds of interesting dynamics in the backward pass because of the"}, {"start": 5735.52, "end": 5741.68, "text": " the way the chain rule is calculating it and so optimizing a linear layer by itself and optimizing"}, {"start": 5741.68, "end": 5746.160000000001, "text": " a sandwich of 10 linear layers in both cases those are just a linear transformation in the forward"}, {"start": 5746.160000000001, "end": 5751.120000000001, "text": " pass but the training dynamics would be different and there's entire papers that analyze in fact"}, {"start": 5751.120000000001, "end": 5756.72, "text": " like infinitely layered linear layers and so on and so there's a lot of things to that you can play"}, {"start": 5756.72, "end": 5766.88, "text": " with there but basically the 
tenational linearities allow us to turn this sandwich from just a linear"}, {"start": 5768.4800000000005, "end": 5774.4800000000005, "text": " function into a neural network that can in principle approximate any arbitrary function"}, {"start": 5775.52, "end": 5781.280000000001, "text": " okay so now I've reset the code to use the linear ten each sandwich like before and I've reset"}, {"start": 5781.28, "end": 5786.8, "text": " everything so the gains five over three we can run a single step of optimization and we can look"}, {"start": 5786.8, "end": 5791.44, "text": " at the activations statistics of the forward pass and the backward pass but I've added one more"}, {"start": 5791.44, "end": 5794.96, "text": " plot here that I think is really important to look at when you're training your neural nets and"}, {"start": 5794.96, "end": 5800.08, "text": " to consider and ultimately what we're doing is we're updating the parameters of the neural net"}, {"start": 5800.08, "end": 5805.44, "text": " so we care about the parameters and their values and their gradients so here what I'm doing is"}, {"start": 5805.44, "end": 5808.639999999999, "text": " I'm actually iterating over all the parameters available and that I'm only"}, {"start": 5808.64, "end": 5814.64, "text": " restricting it to the two-dimensional parameters which are basically the weights of these linear layers"}, {"start": 5814.64, "end": 5820.96, "text": " and I'm skipping the biases and I'm skipping the gammas and the betas and the best room just for"}, {"start": 5820.96, "end": 5825.76, "text": " simplicity but you can also take a look at those as well but what's happening with the weights is"}, {"start": 5826.64, "end": 5833.84, "text": " instructive by itself so here we have all the different weights their shapes so this is the embedding"}, {"start": 5833.84, "end": 5838.160000000001, "text": " layer the first linear layer all the way to the very last linear layer and then we have the"}, {"start": 5838.16, "end": 5843.84, "text": " mean the standard deviation of all these primers the histogram and you can see that actually it"}, {"start": 5843.84, "end": 5848.72, "text": " doesn't look that amazing so there's some trouble in paradise even though these gradients looked okay"}, {"start": 5848.72, "end": 5853.68, "text": " there's something weird going on here I'll get to that in a second and the last thing here is the"}, {"start": 5853.68, "end": 5859.04, "text": " gradient to data ratio so sometimes I'll have to visualize this as well because what this could"}, {"start": 5859.04, "end": 5864.88, "text": " see a sense of is what is the scale of the gradient compared to the scale of the actual values"}, {"start": 5864.88, "end": 5871.36, "text": " and this is important because we're going to end up taking a step update that is the learning"}, {"start": 5871.36, "end": 5877.2, "text": " rate times the gradient onto the data and so the gradient has two larger magnitudes if the numbers"}, {"start": 5877.2, "end": 5882.400000000001, "text": " and there are two large compared to the numbers in data then you'd be in trouble but in this case"}, {"start": 5882.400000000001, "end": 5889.84, "text": " the gradient to data is our low numbers so the values inside grad are 1000 times smaller than the values"}, {"start": 5889.84, "end": 5897.12, "text": " inside data in these weights most of them now notably that is not true about the last layer"}, {"start": 5897.12, "end": 5901.2, "text": " and so the last layer actually here the output layer is a 
bit of a trouble maker in the way that"}, {"start": 5901.2, "end": 5909.28, "text": " this is currently arranged because you can see that the last layer here in pink takes on values"}, {"start": 5909.28, "end": 5916.8, "text": " there are much larger than some of the values inside inside the neural net so the standard deviations"}, {"start": 5916.8, "end": 5923.76, "text": " are roughly 1-3 throughout except for the last but last new layer which actually has roughly 1-2"}, {"start": 5923.76, "end": 5930.0, "text": " standard deviation of gradients and so the gradients on the last layer are currently about 100 times"}, {"start": 5930.0, "end": 5936.24, "text": " greater sorry 10 times greater than all the other weights inside the neural net and so this"}, {"start": 5936.24, "end": 5941.360000000001, "text": " problematic because in the simple stochastic gradient in the sense setup you would be training"}, {"start": 5941.36, "end": 5947.44, "text": " this last layer about 10 times faster than you would be training the other layers at initialization now"}, {"start": 5947.44, "end": 5951.92, "text": " this actually like kind of fixes itself a little bit if you train for a bit longer so for example"}, {"start": 5951.92, "end": 5959.36, "text": " if I agree then 1000 only then do a break let me initialize and then let me do it 1000 steps"}, {"start": 5960.0, "end": 5966.24, "text": " and after 1000 steps we can look at the for a pass okay so you see how the neurons are a bit"}, {"start": 5966.24, "end": 5971.44, "text": " are saturating a bit and we can also look at the backward pass but otherwise they look good they're"}, {"start": 5971.44, "end": 5977.5199999999995, "text": " about equal and there's no shrinking to zero or exploding to infinities and you can see that here"}, {"start": 5977.5199999999995, "end": 5982.8, "text": " in the weights things are also stabilizing a little bit so the tails of the last pink layer"}, {"start": 5982.8, "end": 5988.8, "text": " are actually coming in during the optimization but certainly this is like a little bit of troubling"}, {"start": 5988.8, "end": 5992.96, "text": " especially if you are using a very simple update rule like stochastic gradient descent instead of"}, {"start": 5992.96, "end": 5997.6, "text": " a modern optimizer like atom now I'd like to show you one more plot that I usually look at when"}, {"start": 5997.6, "end": 6003.28, "text": " I train your own works and basically the gradient to data ratio is not actually that informative"}, {"start": 6003.28, "end": 6008.4, "text": " because what matters at the end is not the gradient to data ratio but the update to the data ratio"}, {"start": 6008.4, "end": 6013.52, "text": " because that is the amount by which we will actually change the data in these tensors so coming up"}, {"start": 6013.52, "end": 6020.32, "text": " here what I'd like to do is I'd like to introduce a new update to data ratio it's going to be less"}, {"start": 6020.32, "end": 6025.04, "text": " than we're going to build it out every single iteration and here I'd like to keep track of basically"}, {"start": 6025.84, "end": 6035.36, "text": " the ratio every single iteration so without any ingredients I'm comparing the update which is"}, {"start": 6035.36, "end": 6040.88, "text": " learning rate times the time is the gradient that is the update that we're going to apply to every"}, {"start": 6040.88, "end": 6045.92, "text": " parameter associated with random world of parameters and then I'm taking the basically standard"}, 
{"start": 6045.92, "end": 6053.52, "text": " deviation of the update we're going to apply and divided by the actual content the data of that"}, {"start": 6053.52, "end": 6060.0, "text": " parameter and its standard deviation so this is the ratio of basically how great are the updates"}, {"start": 6060.0, "end": 6064.56, "text": " to the values in these tensors then we're going to take a log of it and actually I'd like to take a"}, {"start": 6064.56, "end": 6071.6, "text": " log 10 just so it's a nice serviceualization so we're going to be basically looking at the"}, {"start": 6071.6, "end": 6080.0, "text": " exponents of this division here and then that item to pop out the float and we're going to be"}, {"start": 6080.0, "end": 6084.8, "text": " keeping track of this for all the parameters and adding it to this UD tensor so now let me"}, {"start": 6084.8, "end": 6091.280000000001, "text": " re-initialize and run a thousand iterations we can look at the activations the gradients"}, {"start": 6091.92, "end": 6096.0, "text": " and the parameter gradients as we did before but now I have one more plot here to introduce"}, {"start": 6096.0, "end": 6101.76, "text": " now what's happening here is we're every interval of parameters and I'm constraining it again"}, {"start": 6101.76, "end": 6108.16, "text": " like I did here to just to weights so the number of dimensions in these sensors is two and then I'm"}, {"start": 6108.16, "end": 6117.52, "text": " basically plotting all of these update ratios over time so when I plot this I plot those ratios"}, {"start": 6117.52, "end": 6122.16, "text": " and you can see that they evolve over time during initialization that they concern values and then"}, {"start": 6122.16, "end": 6126.72, "text": " these updates are like start stabilizing usually during training then the other thing that I'm"}, {"start": 6126.72, "end": 6131.76, "text": " plotting here is I'm plotting here like an approximate value that is a rough guide for what it"}, {"start": 6131.76, "end": 6136.24, "text": " roughly should be and it should be like roughly one in negative three and so that means that"}, {"start": 6136.24, "end": 6142.639999999999, "text": " basically there's some values in this tensor and they take on certain values and the updates"}, {"start": 6142.639999999999, "end": 6149.04, "text": " to them at every single iteration are no more than roughly one thousand of the actual magnitude"}, {"start": 6149.04, "end": 6156.64, "text": " in those tensors if this was much larger like for example if this was if the log of this was like"}, {"start": 6156.64, "end": 6161.04, "text": " same negative one this is actually updating those values quite a lot they're undergoing a lot of"}, {"start": 6161.04, "end": 6167.68, "text": " change but the reason that the final rate the final layer here is an outlier is because this layer"}, {"start": 6167.68, "end": 6177.12, "text": " was artificially shrugged down to keep the softmax income unconfident so here you see how we multiply"}, {"start": 6177.12, "end": 6183.2, "text": " the weight by point one in the initialization to make the last layer prediction less confident"}, {"start": 6184.16, "end": 6189.84, "text": " that made that artificially made the values inside that tensor way too low and that's why we're"}, {"start": 6189.84, "end": 6195.76, "text": " getting temporarily a very high ratio but you see that that stabilizes over time once that weight"}, {"start": 6195.76, "end": 6201.36, "text": " starts to learn starts to learn but 
basically I like to look at the evolution of this update ratio"}, {"start": 6201.36, "end": 6208.08, "text": " for all my parameters usually and I like to make sure that it's not too much above one negative three"}, {"start": 6208.08, "end": 6214.799999999999, "text": " roughly so around negative three on this log plot if it's below negative three usually that means"}, {"start": 6214.799999999999, "end": 6219.599999999999, "text": " that parameters are not training fast enough so if our learning rate was very low let's do that"}, {"start": 6219.599999999999, "end": 6226.24, "text": " experiment let's initialize and then let's actually do a learning rate of say one in negative three"}, {"start": 6226.24, "end": 6236.88, "text": " here so 0.001 if your learning rate is way too low this plot will typically reveal it so you see"}, {"start": 6236.88, "end": 6244.719999999999, "text": " how all of these updates are way too small so the size of the update is basically 10,000 times"}, {"start": 6246.4, "end": 6251.84, "text": " in magnitude to the size of the numbers in that tensor in the first place so this is a symptom"}, {"start": 6251.84, "end": 6257.12, "text": " of training way too slow so this is another way to sometimes start to learning rate and to get"}, {"start": 6257.12, "end": 6261.52, "text": " a sense of what that learning rate should be and ultimately this is something that you would keep track of"}, {"start": 6265.04, "end": 6270.64, "text": " if anything the learning rate here is a little bit on the higher side because you see that"}, {"start": 6271.76, "end": 6276.56, "text": " we're above the black line of negative three we're somewhere around negative 2.5 it's like okay"}, {"start": 6277.04, "end": 6281.68, "text": " and but everything is like somewhat stabilizing and so this looks like a pretty decent setting of"}, {"start": 6281.68, "end": 6286.96, "text": " of learning rates and so on but this is something to look at and when things are miscalibrated you"}, {"start": 6286.96, "end": 6292.8, "text": " will you will see very quickly so for example everything looks pretty well behaved right but just"}, {"start": 6292.8, "end": 6297.280000000001, "text": " as a comparison when things are not properly calibrated what does that look like let me come up here"}, {"start": 6297.84, "end": 6304.0, "text": " and let's say that for example what do we do let's say that we forgot to apply this"}, {"start": 6304.0, "end": 6308.8, "text": " a fan in normalization so the weights inside the linear layers are just sample from a Gaussian"}, {"start": 6308.8, "end": 6315.4400000000005, "text": " in all the stages what happens to our how do we notice that something's off well the activation"}, {"start": 6315.4400000000005, "end": 6320.320000000001, "text": " plot will tell you whoa your neurons are way too saturated the gradients are going to be all messed up"}, {"start": 6321.360000000001, "end": 6325.92, "text": " the histogram for these weights are going to be all messed up as well and there's a lot of"}, {"start": 6325.92, "end": 6332.0, "text": " asymmetry and then if we look here I suspect it's all going to be also pretty messed up so you see"}, {"start": 6332.0, "end": 6337.360000000001, "text": " there's a lot of discrepancy in how fast these layers are learning and some of them are learning"}, {"start": 6337.36, "end": 6344.48, "text": " way too fast so negative 1 negative 1.5 those are very large numbers in terms of this ratio again"}, {"start": 6344.48, "end": 6349.679999999999, 
"text": " you should be somewhere around negative three and not much more about that so this is how"}, {"start": 6349.679999999999, "end": 6355.04, "text": " miscalibration so if your neurons are going to manifest and these kinds of plots here are a good way of"}, {"start": 6356.08, "end": 6363.839999999999, "text": " sort of bringing those miscalibration sort of to your attention and so you can address them"}, {"start": 6363.84, "end": 6369.2, "text": " okay so so far we've seen that when we have this linear 10-H sandwich we can actually precisely"}, {"start": 6369.2, "end": 6374.32, "text": " calibrate the gains and make the activations the gradients and the parameters and the updates all"}, {"start": 6374.32, "end": 6380.56, "text": " look pretty decent but it definitely feels a little bit like balancing of a pencil on your finger"}, {"start": 6381.12, "end": 6386.64, "text": " and that's because this gain has to be very precisely calibrated so now let's introduce"}, {"start": 6386.64, "end": 6394.08, "text": " bashfulization layers into the fix into the mix and let's let's see how that helps fix the problem so"}, {"start": 6394.08, "end": 6401.4400000000005, "text": " here I'm going to take the bashful monday class and I'm going to start placing it inside and as I"}, {"start": 6401.4400000000005, "end": 6407.360000000001, "text": " mentioned before the standard typical placey would place it is between the linear layer so right"}, {"start": 6407.360000000001, "end": 6412.72, "text": " after it before the non-linearity but people have definitely played with that and in fact you can"}, {"start": 6412.72, "end": 6418.64, "text": " get very similar results even if you place it after the non-linearity and the other thing that I"}, {"start": 6418.64, "end": 6423.04, "text": " wanted to mention is it's totally fine to also place it at the end after the last linear layer"}, {"start": 6423.04, "end": 6430.4800000000005, "text": " and before the last function so this is potentially fine as well and in this case this would be"}, {"start": 6430.4800000000005, "end": 6438.64, "text": " output would be woke up size now because the last layer is bashful we would not be changing to wait"}, {"start": 6438.64, "end": 6444.64, "text": " to make the softmax less confident we'd be changing the gamma because gamma remember in the"}, {"start": 6444.64, "end": 6449.92, "text": " bash room is the variable that multiplicatively interacts with the output of that normalization"}, {"start": 6452.64, "end": 6458.64, "text": " so we can initialize this sandwich now we can train and we can see that the activations"}, {"start": 6459.360000000001, "end": 6464.8, "text": " are going to of course look very good and they are going to necessarily look good because now before"}, {"start": 6464.8, "end": 6471.4400000000005, "text": " every single 10-H layer there is a normalization in the bash room so this is unsurprisingly all"}, {"start": 6472.16, "end": 6477.4400000000005, "text": " looks pretty good it's going to be standard deviation of roughly 0.65 2% and roughly equals"}, {"start": 6477.4400000000005, "end": 6483.52, "text": " standard deviation throughout the entire layers so everything looks very homogeneous the gradients"}, {"start": 6483.52, "end": 6492.72, "text": " look good the weights look good and their distributions and then the updates also look pretty"}, {"start": 6492.72, "end": 6499.280000000001, "text": " reasonable we're going above negative 3 a little bit but not by too much so all the parameters"}, 
{"start": 6499.280000000001, "end": 6508.0, "text": " are training in roughly the same rate here but now what we gained is we are going to be slightly less"}, {"start": 6511.4400000000005, "end": 6516.64, "text": " brittle with respect to the gain of these so for example I can make the gain B say 0.2 here"}, {"start": 6516.64, "end": 6524.400000000001, "text": " which is much more personal over than what we had with the 10-H but as we'll see the activations"}, {"start": 6524.400000000001, "end": 6530.240000000001, "text": " will actually be exactly unaffected and that's because again this explicit normalization the gradients"}, {"start": 6530.240000000001, "end": 6535.4400000000005, "text": " are going to look okay the weight gradients are going to look okay but actually the updates will"}, {"start": 6535.4400000000005, "end": 6541.92, "text": " change and so even though the forward and backward paths to a very large extent look okay because"}, {"start": 6541.92, "end": 6546.56, "text": " of the backward paths of the bash room and how the scale of the incoming activations interact"}, {"start": 6546.56, "end": 6554.8, "text": " in the bash room and its backward paths this is actually changing the scale of the updates on"}, {"start": 6554.8, "end": 6561.4400000000005, "text": " these parameters so the gradients of these weights are affected so we still don't get a completely"}, {"start": 6561.4400000000005, "end": 6568.8, "text": " free pass to pass in arbitrary weights here but everything else is significantly more robust in"}, {"start": 6568.8, "end": 6574.4800000000005, "text": " terms of the forward backward and the weight gradients it's just that you may have to retune your"}, {"start": 6574.48, "end": 6580.16, "text": " learning rate if you are changing sufficiently the scale of the activations that are coming into"}, {"start": 6580.16, "end": 6586.5599999999995, "text": " the bash rooms so here for example this we changed the gains of these linear layers to be"}, {"start": 6586.5599999999995, "end": 6592.5599999999995, "text": " greater and we're seeing that the updates are coming out lower as a result and then finally we can"}, {"start": 6592.5599999999995, "end": 6597.759999999999, "text": " also if we are using bash rooms we don't actually need to necessarily let me reset this to one"}, {"start": 6597.759999999999, "end": 6604.16, "text": " so there's no gain we don't necessarily even have to normalize back then in sometimes so if I take"}, {"start": 6604.16, "end": 6609.599999999999, "text": " out the fan in so these are just now random Gaussian we'll see that because of bash rooms this"}, {"start": 6609.599999999999, "end": 6617.28, "text": " will actually be relatively well behaved so this this is look of course in the forward pass look good"}, {"start": 6617.28, "end": 6625.12, "text": " the gradients look good the backward the weight updates look okay a little bit of fat tails and some"}, {"start": 6625.12, "end": 6633.36, "text": " delayers and this looks okay as well but as you as you can see we're significantly below negative"}, {"start": 6633.36, "end": 6638.32, "text": " three so we'd have to bump up the learning rate of this bachelor so that we are training more"}, {"start": 6638.32, "end": 6643.2, "text": " properly and in particular looking at this roughly looks like we have to 10x the learning rate"}, {"start": 6643.2, "end": 6650.5599999999995, "text": " to get to about 1 e negative 3 so we'd come here and we would change this to be update of 1.0"}, {"start": 
6651.36, "end": 6652.719999999999, "text": " and if I initialize"}, {"start": 6652.72, "end": 6664.72, "text": " then we'll see that everything still of course looks good and now we are roughly here and we expect"}, {"start": 6664.72, "end": 6670.400000000001, "text": " this to be an okay training run so long story short we are significantly more robust to the gain"}, {"start": 6670.400000000001, "end": 6675.52, "text": " of these linear layers whether or not we have to apply the fan in and then we can change the gain"}, {"start": 6676.320000000001, "end": 6682.0, "text": " but we actually do have to worry a little bit about the update scales and making sure that the"}, {"start": 6682.0, "end": 6686.96, "text": " learning rate is properly calibrated here but the activations of the forward backward pass and the"}, {"start": 6686.96, "end": 6692.24, "text": " updates are all are looking significantly more well behaved except for the global scale that is"}, {"start": 6692.24, "end": 6698.08, "text": " potentially being adjusted here okay so now let me summarize there are three things I was hoping to"}, {"start": 6698.08, "end": 6702.48, "text": " achieve with this section number one I wanted to introduce you to bachelor normalization which is"}, {"start": 6702.48, "end": 6707.92, "text": " one of the first modern innovations that we're looking into that helped stabilize very deep neural"}, {"start": 6707.92, "end": 6713.76, "text": " networks and their training and I hope you understand how the bachelor normalization works and how"}, {"start": 6713.76, "end": 6719.76, "text": " it would be used in neural network number two I was hoping to pie torchify some wire code and wrap"}, {"start": 6719.76, "end": 6727.28, "text": " it up into these modules so like linear bachelor mondi 10h etc these are layers or modules and they"}, {"start": 6727.28, "end": 6733.92, "text": " can be stacked up into neural nets like Lego building blocks and these layers actually exist in"}, {"start": 6733.92, "end": 6739.4400000000005, "text": " pie torch and if you import torch and then then you can actually the way I've constructed it you"}, {"start": 6739.4400000000005, "end": 6746.56, "text": " can simply just use pie torch by pre-pending nn dot to all these different layers and actually"}, {"start": 6746.56, "end": 6751.28, "text": " everything will just work because the API that I have developed here is identical to the API that"}, {"start": 6751.28, "end": 6756.96, "text": " pie torch uses and the implementation also is basically as far as I'm aware identical to the one"}, {"start": 6756.96, "end": 6762.64, "text": " in pie torch and number three I try to introduce you to the diagnostic tools that you would use to"}, {"start": 6762.64, "end": 6767.52, "text": " understand whether your neural network is in a good state dynamically so we are looking at the"}, {"start": 6767.52, "end": 6772.72, "text": " statistics and histograms and activation of the forward pass activation activations the backward"}, {"start": 6772.72, "end": 6777.52, "text": " pass gradients and then also we're looking at the weights that are going to be updated as part of"}, {"start": 6777.52, "end": 6782.8, "text": " stochasticity in the send and we're looking at their means standard deviations and also the ratio"}, {"start": 6782.8, "end": 6789.84, "text": " of gradients to data or even better the updates to data and we saw that typically we don't actually"}, {"start": 6789.84, "end": 6794.56, "text": " look at it as a single snapshot frozen 
in time at some particular iteration typically people look"}, {"start": 6794.56, "end": 6799.84, "text": " at this as a over time just like I've done here and they look at these update to data ratios and they"}, {"start": 6799.84, "end": 6806.08, "text": " make sure everything looks okay and in particular I said that one in negative three or basically"}, {"start": 6806.08, "end": 6812.0, "text": " negative three on the log scale is a good rough heuristic for what you want this ratio to be and if"}, {"start": 6812.0, "end": 6817.12, "text": " it's way too high then probably the learning rate or the updates are too big and if it's way too"}, {"start": 6817.12, "end": 6821.04, "text": " small that the learning rate is probably too small so that's just some of the things that you"}, {"start": 6821.04, "end": 6827.2, "text": " may want to play with when you try to get your neural network to work well very well now there's"}, {"start": 6827.2, "end": 6831.5199999999995, "text": " a number of things I did not try to achieve I did not try to beat our previous performance as an"}, {"start": 6831.5199999999995, "end": 6837.36, "text": " example by introducing the bachelor layer actually I did try and I found that you I used the"}, {"start": 6837.36, "end": 6841.76, "text": " learning rate finding mechanism that I've described before I tried to train the bachelor layer"}, {"start": 6841.76, "end": 6846.64, "text": " a bachelor neural nut and I actually ended up with results that are very very similar to what we"}, {"start": 6846.64, "end": 6852.96, "text": " obtained before and that's because our performance now is not bottlenecked by the optimization"}, {"start": 6852.96, "end": 6857.6, "text": " which is what bachelor is helping with the performance at the stage is bottlenecked by what I"}, {"start": 6857.6, "end": 6863.68, "text": " suspect is the context length of our context so currently we are taking three characters to"}, {"start": 6863.68, "end": 6867.6, "text": " predict the fourth one and I think we need to go beyond that and we need to look at more powerful"}, {"start": 6867.6, "end": 6873.200000000001, "text": " architectures like recurrent neural networks and transformers in order to further push the"}, {"start": 6873.2, "end": 6879.5199999999995, "text": " block probabilities that we're achieving on this day as it and I also did not try to have a full"}, {"start": 6879.5199999999995, "end": 6884.32, "text": " explanation of all of these activations, the gradients and the backward pass and the statistics of"}, {"start": 6884.32, "end": 6888.4, "text": " all these gradients and so you may have found some of the parts here on intuitive and maybe you're"}, {"start": 6888.4, "end": 6893.36, "text": " slightly confused about okay if I change the gain here how come that we need a different"}, {"start": 6893.36, "end": 6896.88, "text": " learning rate and I didn't go into the full detail because you'd have to actually look at the"}, {"start": 6896.88, "end": 6901.28, "text": " backward pass of all these different layers and get an intuitive understanding of how that works"}, {"start": 6901.28, "end": 6906.16, "text": " and I did not go into that in this lecture the purpose really was just to introduce you to the"}, {"start": 6906.16, "end": 6910.48, "text": " diagnostic tools and what they look like but there's still a lot of work remaining on the intuitive"}, {"start": 6910.48, "end": 6916.08, "text": " level to understand the initialization the backward pass and how all that interacts but 
you"}, {"start": 6916.08, "end": 6922.88, "text": " shouldn't feel too bad because honestly we are getting to the cutting edge of where the field is"}, {"start": 6922.88, "end": 6928.16, "text": " we certainly haven't I would say solved initialization and we haven't solved back propagation"}, {"start": 6928.16, "end": 6931.92, "text": " and these are still very much an active area of research people are still trying to figure out"}, {"start": 6931.92, "end": 6935.28, "text": " where's the best way to initialize these networks what is the best update rule to use"}, {"start": 6936.5599999999995, "end": 6940.72, "text": " and so on so none of this is really solved and we don't really have all the answers to all the"}, {"start": 6942.0, "end": 6946.96, "text": " to you know all these cases but at least you know we're making progress and at least we have some"}, {"start": 6946.96, "end": 6953.599999999999, "text": " tools to tell us whether or not things are on the right track for now so I think we've made"}, {"start": 6953.6, "end": 6961.04, "text": " positive progress in this lecture and I hope you enjoyed that and I will see you next time"}]
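For reference, the update-to-data ratio diagnostic described in the transcript segments above can be sketched roughly as follows. This is a sketch, not the lecture's exact code: it assumes a list `parameters` of tensors whose `.grad` fields have been filled in by `loss.backward()`, and a learning rate `lr`.

```python
import torch

lr = 0.1   # assumed learning rate, for illustration only
ud = []    # one list of log10(update/data) ratios per training iteration

# ... inside the training loop, after loss.backward() and the SGD step ...
with torch.no_grad():
    ratios = []
    for p in parameters:
        if p.ndim != 2:                      # look only at 2D weight matrices,
            continue                         # skipping biases and batchnorm gains/shifts
        update = lr * p.grad                 # the step that plain SGD actually applies
        # log10 of std(update) / std(data); roughly -3 is a healthy ballpark,
        # much lower suggests the parameters are training too slowly
        ratios.append((update.std() / p.data.std()).log10().item())
    ud.append(ratios)
```

Plotting these per-parameter ratios over iterations, with a horizontal guide line at -3, gives the kind of diagnostic plot discussed in the segments above.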
Neural Networks: Zero to Hero
https://www.youtube.com/watch?v=VMj-3S1tku0
The spelled-out intro to neural networks and backpropagation: building micrograd
This is the most step-by-step spelled-out explanation of backpropagation and training of neural networks. It only assumes basic knowledge of Python and a vague recollection of calculus from high school. Links: - micrograd on github: https://github.com/karpathy/micrograd - jupyter notebooks I built in this video: https://github.com/karpathy/nn-zero-to-hero/tree/master/lectures/micrograd - my website: https://karpathy.ai - my twitter: https://twitter.com/karpathy - "discussion forum": nvm, use youtube comments below for now :) - (new) Neural Networks: Zero to Hero series Discord channel: https://discord.gg/Hp2m3kheJn , for people who'd like to chat more and go beyond youtube comments Exercises: you should now be able to complete the following google collab, good luck!: https://colab.research.google.com/drive/1FPTx1RXtBfc4MaTkf7viZZD4U2F9gtKN?usp=sharing Chapters: 00:00:00 intro 00:00:25 micrograd overview 00:08:08 derivative of a simple function with one input 00:14:12 derivative of a function with multiple inputs 00:19:09 starting the core Value object of micrograd and its visualization 00:32:10 manual backpropagation example #1: simple expression 00:51:10 preview of a single optimization step 00:52:52 manual backpropagation example #2: a neuron 01:09:02 implementing the backward function for each operation 01:17:32 implementing the backward function for a whole expression graph 01:22:28 fixing a backprop bug when one node is used multiple times 01:27:05 breaking up a tanh, exercising with more operations 01:39:31 doing the same thing but in PyTorch: comparison 01:43:55 building out a neural net library (multi-layer perceptron) in micrograd 01:51:04 creating a tiny dataset, writing the loss function 01:57:56 collecting all of the parameters of the neural net 02:01:12 doing gradient descent optimization manually, training the network 02:14:03 summary of what we learned, how to go towards modern neural nets 02:16:46 walkthrough of the full code of micrograd on github 02:21:10 real stuff: diving into PyTorch, finding their backward pass for tanh 02:24:39 conclusion 02:25:20 outtakes :)
Hello, my name is Andre and I've been training deep neural networks for a bit more than a decade and in this lecture I'd like to show you what neural network training looks like under the hood So in particular we are going to start with a blank super notebook and by the end of this lecture We will define and train in neural net and you'll get to see everything that goes on under the hood and exactly Sort of how that works and then to it in a little Now specifically what I would like to do is I would like to take you through Building of micrograd now micrograd is this library that I released on GitHub about two years ago But at the time I only uploaded this source code and you'd have to go in by yourself and really Figure out how it works So in this lecture I will take you through it step-by-step and kind of comment on all the pieces of it So what's micrograd and why is it interesting? cute micrograd is basically an auto-grad engine Auto-grad is short for automatic gradient and really what it does is it implements back propagation Now back propagation is this algorithm that allows you to efficiently evaluate the gradient of Some kind of a loss function with respect to the weights of a neural network And what that allows us to do then is we can editorively tune the weights of that neural network to minimize the loss function And therefore improve the accuracy of the network So back propagation would be at the mathematical core of any modern deep neural network library like say PyTorch or Jax So the functionality of micrograd is I think best illustrated by an example. So if we just scroll down here You'll see that micrograd basically allows you to build out mathematical expressions and Here what we are doing is we have an expression that we're building out where you have two inputs a and b And you'll see that a and b are negative 4 and 2 but we are wrapping those Values into this value object that we are going to build out as part of micrograd So this value object will wrap the numbers themselves and then we are going to build out a mathematical expression here where a and b are Transformed into cd and eventually e f and g And I'm showing some of the function some of the functionality of micrograd and the operations that it supports So you can add two value objects. You can multiply them. 
You can raise them to a constant power You can also by one the gate squash at zero Square divide by constant divide by it etc And so we're building out an expression graph with with these two inputs a and b and we're creating out the value of g and micrograd will in the background Build out this entire mathematical expression So it will for example know that c is also a value c was a result of an addition operation and the Child nodes of c are a and b because the and all maintain pointers to a and b value objects So we'll basically know exactly how all of this is laid out and Then not only can we do what we call the forward pass where we actually look at the value of g Of course, that's pretty straightforward We will access that using the dot data attribute And so the output of the forward pass the value of g is 24.7 it turns out But the big deal is that we can also take this g value object and we can call dot backward And this will basically initialize back propagation at the node g And what back propagation is going to do is it's going to start at g and it's going to go backwards through that expression graph and it's going to recursively apply the chain rule from calculus and And what that allows us to do then is we're going to evaluate basically the derivative of g with respect to all the internal nodes Like ed and c, but also with respect to the inputs a and b And then we can actually query this derivative of g with respect to a for example That's a dot grad in this case it happens to be 138 and a derivative of g with respect to b Which also happens to be here 645 And this derivative we'll see soon is very important information because it's telling us how a and b are affecting g Through this mathematical expression. So in particular a dot grad is 138 So if we slightly nudge a and make it slightly larger 138 is telling us that g will grow in the slope of that growth is going to be 138 And the slope of growth of b is going to be 645 So that's going to tell us about how g will respond if a and b get tweaked a tiny amount in a positive direction Okay Now you might be confused about what this expression is that we built out here and this expression by the way is completely meaningless I just made it up. I'm just flexing about the kinds of operations that are supported by micrograd What we actually really care about are neural networks But it turns out that neural networks are just mathematical expressions just like this one But actually a slightly bit less crazy even um Neural networks are just a mathematical expression They take the input data as an input and they take the weights of a neural network as an input and some mathematical expression And the output are your predictions of your neural net or the loss function. We'll see this in a bit But basically neural networks teaches us happen to be a certain class of mathematical expressions But back propagation is actually significantly more general It doesn't actually care about neural networks at all. 
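The kind of usage just described might look roughly like the sketch below. The expression here is made up for illustration (much like the one in the README), so the printed numbers will differ from the 24.7 / 138 / 645 mentioned above; the `Value` API itself (`.data`, `.grad`, `.backward()`) is micrograd's.

```python
from micrograd.engine import Value

a = Value(-4.0)
b = Value(2.0)
c = a + b              # each operation returns a new Value that remembers its children
d = a * b + c**2
g = d.relu() + d * b   # relu squashes negative values at zero

print(g.data)          # forward pass: the numeric value of g
g.backward()           # backpropagation: chain rule applied backwards from g
print(a.grad)          # dg/da -- how g responds to a small nudge of a
print(b.grad)          # dg/db
```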
It only tells us about arbitrary mathematical expressions And then we happen to use that machinery for training of neural networks Now one more note I would like to make at the stage is that as you see here micrograd is a scalar valued auto-grad engine So it's working on the you know level of individual scalers like negative four and two And we're taking neural nets and we're breaking them down all the way to these atoms of individual scalers And all the little pluses and times and it's just excessive And so obviously you would never be doing any of this in production It's really just for them for pedagogical reasons because it allows us to not have to deal with these and dimensional tensors that you would use in modern deep neural network library So this is really uh done so that you understand and refactor out back propagation and chain rule and understanding of your training And then if you actually want to train bigger networks You have to be using these tensors, but none of the math changes. This is done purely for efficiency We are basically taking scale value All the scale values we're packaging them up into tensors Which are just arrays of these scalers and then because we have these large arrays We're making operations on those large arrays that allows us to take advantage of the parallelism in a computer and All those operations can be done in parallel and then the whole thing runs faster But really none of the math changes and that done purely for efficiency So I don't think that it's pedagogically useful to be dealing with tensors from scratch Uh, and I think and that's why I fundamentally wrote micrograd because you can understand how things work Uh, at the fundamental level and then you can speed it up later Okay, so here's the fun part my claim is that micrograd is what you need to train your networks and everything else It's just efficiency So you'd think that micrograd would be a very complex piece of code and that turns out to not be the case So if we just go to micrograd and you will see that there's only two files here in micrograd This is the actual engine. It doesn't know anything about neural nets And this is the entire neural nets library on top of micrograd. So engine and nn.pi so the actual back propagation quadrgrad engine That gives you the power of neural networks is literally 100 lines of code of like very simple python Which we'll understand by the end of this lecture and then nn.pi This neural network library built on top of the autograd engine Um, is like a joke. 
It's like We have to define what is in neuron and then we have to define what is the layer of neurons and then we define what is a multilateral perceptron Which is just a sequence of layers of neurons and so it's just a total joke So basically um, there's a lot of power that comes from only 115 lines of code and that's only need to understand to understand You know, or training and everything else is just efficiency and of course there's a lot too efficiency But fundamentally that's all that's happening Okay, so now let's dive right in and implement micrograd step by step the first thing I'd like to do is I'd like to make sure that you have a very good understanding Intuitively of what a derivative is and exactly what information it gives you So let's start with some basic imports that I copy-based in every Jupyter notebook always And let's define the function scalar valid function f of x as follows So I just make this up randomly I just want to scale a valid function that takes a single scalar x and returns a single scalar y And we can call this function of course so we can pass it say 3.0 and get 20 back Now we can also plot this function to get a sense of its shape You can tell from the mathematical expression that this is probably a parabola it's quadratic And so if we just create a set of um um Skip skill the values that we can feed in using for example a range from negative 5 to 5 and steps up 0.25 So this is so x is just from negative 5 to 5 not including 5 in steps of 0.25 And we can actually call this function on this non-py array as well So we get a set of y's if we call f on x's and These y's are basically also applying function on every one of these elements independently and we can plot this using math plotlib So plt.plot x's and y's and we get nice parabola So previously here we fed in 3.0 somewhere here and we received 20 back which is here the y coordinate So now I'd like to think through what is the derivative of this function at any single input point x Right, so what is the derivative at different points x of this function Now if you remember back to your calculus class you've probably derived derivatives So we take this mathematical expression 3x square minus 4x plus 5 and you would write out on a piece of paper And you would you know apply the product rule and all the other rules and derive the mathematical expression of the great derivative of the original function And then you could plug in different taxes and see what the derivative is We're not going to actually do that because no one in neural networks actually writes out the expression for the neural net It would be a massive expression um it would be you know thousands since thousands of terms no one actually Derives that derivative of course And so we're not going to take this kind of like symbolic approach Instead what I'd like to do is I'd like to look at the Definition of derivative and just make sure that we really understand what derivative is measuring What is telling you about the function? And so if we just look up derivative We see that um Okay, so this is not a very good definition of derivative. 
This is a definition of what it means to be differentiable. But if you remember from your calculus, it is the limit as h goes to 0 of f of x plus h minus f of x, over h. So basically what it's saying is: if you slightly bump up the point x that you're interested in, if you slightly increase it by a small number h, how does the function respond? With what sensitivity does it respond? What is the slope at that point, does the function go up or down, and by how much? That's the slope of the function, the slope of that response at that point. And so we can evaluate the derivative here numerically by taking a very small h. Of course the definition would ask us to take h to zero; we're just going to pick a very small h, 0.001. And let's say we're interested in the point x = 3.0. So we can look at f of x, which of course is 20, and now f of x plus h. So if we slightly nudge x in a positive direction, how is the function going to respond? Just looking at this, do you expect f of x plus h to be slightly greater than 20, or slightly lower than 20? Since 3 is here and this is 20, if we slightly go in the positive direction the function will respond positively, so you'd expect this to be slightly greater than 20. And by how much is telling you the strength of that slope, the size of the slope. So f of x plus h minus f of x is how much the function responded in the positive direction, and we have to normalize by the run, so we have rise over run to get the slope. This of course is just a numerical approximation of the slope, because we have to make h very, very small to converge to the exact amount. Now if I put in too many zeros, at some point I'm going to get an incorrect answer, because we're using floating point arithmetic, and the representations of all these numbers in computer memory are finite, so at some point we get into trouble. But we can converge towards the right answer with this approach, and basically at 3 the slope is 14. You can see that by taking 3x squared minus 4x plus 5 and differentiating it in our head: the derivative of 3x squared minus 4x is 6x minus 4, and then we plug in x equals 3, so that's 18 minus 4, which is 14, so this is correct. That's at 3. Now how about the slope at, say, negative 3? Telling the exact value is really hard, but what is the sign of that slope?
So at negative 3 If we slightly go in the positive direction at x the function would actually go down And so that tells you that the slope would be negative So we'll get a slight number below Below 20 and so if we take the slope we expect something negative negative 22 Okay And at some point here of course the slope would be 0 Now for this specific function I looked it up previously and it's at point 2 over 3 So at roughly 2 over 3 But somewhere here This derivative would be 0 So basically at that precise point Yeah At that precise point if we nudge in a positive direction the function doesn't respond This stays the same almost and so that's why the slope is 0 Okay now let's look at a bit more complex case So we're going to start you know complexifying a bit So now we have a function Here With output variable d That is a function of 3 scalar inputs a b and c So a b and c are some specific values 3 inputs into our expression graph And a single output d And so if we just print d we get 4 And now what I like to do is I'd like to again look at the derivative of d with respect to a b and c And think through Again just the intuition of what this derivative is telling us So in rooted evaluates derivative We're going to get a bit hacky here. We're going to again have a very small value of h And then we're gonna fix the inputs at some values that we're interested in So these are the this is the point a bc at which we're going to be evaluating the derivative of d with respect to all a b and c at that point So there's the inputs and now we have d1 is that expression And then we're going to for example look at the derivative of d with respect to a So we'll take a and we'll bump it by h and then we'll get d2 to be the exact same function And now we're going to print You know fun I want d1 is d1 d2 is d2 and print slope So the derivative or slope here will be of course d2 minus d1 divided h So d2 minus d1 is how much the function increased when we bumped the The specific input that we're interested in by a tiny amount and This is the normalized by h to get the slope So Yeah, so this so I just run this we're going to print d1 Which we know is for Now d2 will be bumped a will be bumped by h So let's just think through a little bit What d2 will be printed out here in particular d1 will be for Will d2 be a number slightly greater than 4 or slightly lower than 4 and it's going to tell us the the sign of the derivative So We're bumping a by h b is minus 3 c is 10 So you can just in total think through this derivative and what is doing a will be slightly more positive And but b is a negative number So if a is slightly more positive Because b is negative 3 We're actually going to be adding less to d So you'd actually expect that the value of the function will go down So let's just see this Yeah, and so we went from 4 to 3.9996 And that tells you that the slope will be negative and then Uh will be a negative number because we went down and then the exact number of slope will be exact amount of slope is negative 3 And you can also convince yourself that negative 3 is the right answer mathematically and analytically Because if you have 8 times b plus c and you are you know you have calculus Then differentiating 8 times b plus c with respect to a gives you just b And indeed the value of b is negative 3 which is the derivative that we have so you can tell that that's correct So now if we do this with b So if we bump b by a little bit in a positive direction We'd get different slopes. 
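A minimal sketch of the numerical slope estimates being described; `h`, `f`, and the values of a, b, c are taken from the walkthrough above.

```python
import numpy as np
import matplotlib.pyplot as plt

def f(x):
    return 3*x**2 - 4*x + 5

# the parabola from above
xs = np.arange(-5, 5, 0.25)
plt.plot(xs, f(xs))

# numerical slope of f at x = 3: (f(x+h) - f(x)) / h, i.e. rise over run
h = 0.001
x = 3.0
print((f(x + h) - f(x)) / h)   # ~14, matching 6x - 4 at x = 3

# same idea for d = a*b + c: bump one input and watch how the output responds
a, b, c = 2.0, -3.0, 10.0
d1 = a*b + c
d2 = (a + h)*b + c             # nudge a upward by h
print((d2 - d1) / h)           # ~ -3.0, the value of b, so d goes down
# bumping b or c the same way gives the other slopes discussed next
```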
So what is the influence of b on the output d? If we bump b by a tiny amount in the positive direction, then because a is positive we'll be adding more to d. And what is the sensitivity, what is the slope of that addition? It might not surprise you that this should be 2. Why is it 2? Because dd by db, differentiating with respect to b, gives us a, and the value of a is 2, so that's also working out. And then if c gets bumped by a tiny amount h, then of course a times b is unaffected and c becomes slightly higher. What does that do to the function? It makes it slightly higher, because we're simply adding c, and it makes it higher by the exact same amount that we added to c. So that tells you that the slope is 1; that will be the rate at which d will increase as we scale c. Okay, so we now have some intuitive sense of what this derivative is telling you about the function, and we'd like to move to neural networks. Now, as I mentioned, neural networks will be pretty massive mathematical expressions, so we need some data structures that maintain these expressions, and that's what we're going to start to build out now. We're going to build out this Value object that I showed you in the readme page of micrograd. So let me copy-paste a skeleton of the first, very simple Value object. The class Value takes a single scalar value that it wraps and keeps track of, and that's it. So we can, for example, do Value of 2.0, and then we can look at its content, and Python will internally use the __repr__ function to return this string, like that. So this is a Value object with data equal to 2.0, which is what we created here. Now what we'd like is to be able to have not just two values, but to do a plus b; we'd like to add them. Currently you would get an error, because Python doesn't know how to add two Value objects, so we have to tell it. So here's an addition: you basically use these special double-underscore methods in Python to define these operators for these objects. If we use this plus operator, Python will internally call a.__add__(b); that's what will happen internally, and so b will be other and self will be a. And what we're going to return is a new Value object, and it's just going to wrap the plus of their data. But remember that data is the actual raw Python number, so this plus here is just the typical floating point addition; it's not an addition of Value objects. And we return a new Value. So now a plus b should work, and it should print Value of negative one, because that's two plus minus three. There we go. Okay, let's now implement multiply, just so we can recreate this expression here. Multiply, I think it won't surprise you, will be fairly similar: instead of add we're going to be using mul, and then here of course we want to do times. And so now we can create a c Value object, which will be 10.0, and now we should be able to do a times b plus c. Well, let's just do a times b first: that's Value of negative six. And by the way, I skipped over this a little bit: suppose that I didn't have the __repr__ function here, then you'll get some kind of an ugly expression. So what __repr__ is doing is providing us a way to print out a nicer-looking expression in Python, so we don't just have something cryptic; we actually see, you know, it's Value of negative six. So this gives us a times b, and then we should now be able to add c to it, because we've defined and told Python how to do mul and add. And so this will basically be equivalent to a.__mul__(b), and then this new Value object will be .__add__(c). And let's see if that worked: yep, that worked, that gave us four, which is what we expect from before, and I believe you can also call these manually as well. There we go. Okay, so now what we are missing is the connective tissue of this expression. As I mentioned, we want to keep these expression graphs, so we need to know and keep pointers about what values produce what other values. So here, for example, we are going to introduce a new variable which we'll call _children, and by default it will be an empty tuple. And then we're actually going to keep a slightly different variable in the class, which we'll call _prev, which will be the set of children. This is how I did it in the original micrograd; looking at my code here, I can't remember exactly the reason, I believe it was efficiency, but this _children will be a tuple for convenience, and then when we actually maintain it in the class it will be just this set, I believe for efficiency. So now when we are creating a value like this, with the constructor, _children will be empty and _prev will be the empty set, but when we're creating a value through addition or multiplication, we're going to feed in the children of this value, which in this case are self and other; those are the children here. So now we can do d._prev, and we'll see that the children of d, we now know, are this Value of negative six and the Value of ten; and this of course is the value resulting from a times b, and the c value, which is ten. Now there's one last piece of information we don't have: we now know the children of every single value, but we don't know what operation created this value. So we need one more element here; let's call it _op, and by default this is the empty string for leaves, and then we'll just maintain it here. And the operation will be just a simple string: in the case of addition it's plus, in the case of multiplication it's times. So now we not only have d._prev, we also have d._op, and we know that d was produced by an addition of those two values. And so now we have the full mathematical expression, and we're building out this data structure, and we know exactly how each value came to be, by what expression and from what other values. Now, because these expressions are about to get quite a bit larger, we'd like a way to nicely visualize the expressions that we're building out. So for that I'm going to copy-paste a bunch of slightly scary code that's going to visualize these expression graphs for us. Here's the code, and I'll explain it in a bit, but first let me just show you what this code does. Basically, it creates a new function draw_dot that we can call on some root node, and then it's going to visualize it. So if we call draw_dot on d, which is this final value here, that is a times b plus c, it creates something like this.
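Before looking at the rendered graph, here is a consolidated sketch of the Value object as built up to this point. The `label` field, used purely for display in the plots, is the one added a moment later in the walkthrough.

```python
class Value:
    """Sketch of the Value object as assembled so far."""
    def __init__(self, data, _children=(), _op='', label=''):
        self.data = data
        self._prev = set(_children)  # the Values that produced this one
        self._op = _op               # the operation that produced it ('' for leaves)
        self.label = label           # display-only name, added shortly after

    def __repr__(self):
        return f"Value(data={self.data})"

    def __add__(self, other):
        return Value(self.data + other.data, (self, other), '+')

    def __mul__(self, other):
        return Value(self.data * other.data, (self, other), '*')

a = Value(2.0, label='a')
b = Value(-3.0, label='b')
c = Value(10.0, label='c')
d = a * b + c
print(d)        # Value(data=4.0)
print(d._prev)  # the two children: Value(data=-6.0) and Value(data=10.0)
print(d._op)    # '+'
```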
So this is d and you see that this is a times b Create a value plus c gives us this output node d So that's draw dot of d and I'm not going to go through this in complete detail You can take a look at graph is and it's api A graph is an open source graph visualization software And what we're doing here is we're building out this graph in the graph is api And you can basically see that trace is this helper function that enumerates all the nodes and edges in the graph So that's just built a set of all the nodes and edges And then we iterate through all the nodes and we create special node objects for them in using dot node And then we also create edges using dot dot edge And the only thing that's like slightly tricky here is you notice that I basically add these fake nodes Which are these operation nodes. So for example this node here is just like a plus node and I create these These special op nodes here and I connect them accordingly So these nodes of course are not actual nodes in the original graph They're not actually a value object. The only value objects here are the things in squares Those are actual value objects or representations thereof And these op nodes are just created in this draw dot routine so that it looks nice Let's also add labels to these graphs just so we know what variables are where so let's create a special underscore label Um, or let's just do label equals ft by default and save it to each node And then here we're going to do label is a label is the label is c um And then Let's create a special um E equals a times b And dot label will be It's kind of notty And E will be E plus C And a d dot label will be B Okay, so nothing really changes. I just added this new E function A new E variable And then here when we are printing this I'm going to print the label here. So this will be a percent s bar and this will be end dot label And so now We have the label on the left here. So it says a b creating e and then E plus C creates d Just like we have it here And finally, let's make this expression just one layer deeper So d will not be the final output node Instead after d we are going to create a new value object Called f we're going to start running out of variable soon f will be negative 2.0 And it's label will of course just the f And then l will capital L will be the output of our graph And l will be p times f Okay, so l will be negative eight is the output Uh, so Now we don't just draw a d draw L Okay And somehow the label of L is undefined loops. I'll that label as to be explicitly So given to it There we go. So l is the output So let's quickly recap what we've done so far We are able to build out mathematical expressions using only plus and times so far Uh, they are scalar valued along the way and we can do this forward pass And build out a mathematical expression So we have multiple inputs here a b c and f Going into a mathematical expression that produces a single output L And this here is vis-visualizing the forward pass So the output of the forward pass is negative eight. 
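The "slightly scary" visualization code being described is roughly the following sketch, built on the graphviz Python package and assuming the Value class sketched above (with `_prev`, `_op`, and `label`).

```python
from graphviz import Digraph

def trace(root):
    # enumerate all nodes and edges reachable from the root of the expression graph
    nodes, edges = set(), set()
    def build(v):
        if v not in nodes:
            nodes.add(v)
            for child in v._prev:
                edges.add((child, v))
                build(child)
    build(root)
    return nodes, edges

def draw_dot(root):
    dot = Digraph(format='svg', graph_attr={'rankdir': 'LR'})  # draw left to right
    nodes, edges = trace(root)
    for n in nodes:
        uid = str(id(n))
        # a rectangular record node for every Value, showing its label and data
        dot.node(name=uid, label=f"{n.label} | data {n.data:.4f}", shape='record')
        if n._op:
            # a fake op node ('+' or '*') so the picture shows what produced the Value
            dot.node(name=uid + n._op, label=n._op)
            dot.edge(uid + n._op, uid)
    for n1, n2 in edges:
        # connect each child to the op node of the Value it feeds into
        dot.edge(str(id(n1)), str(id(n2)) + n2._op)
    return dot

# e.g. draw_dot(d) for the d = a*b + c graph built earlier, or draw_dot(L) once L = d*f exists
```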
That's the value Now what we'd like to do next is we'd like to run back propagation And in back propagation we are going to start here at the end And we're going to reverse And calculate the gradient along along all these intermediate values And really what we're computing for every single value here Um, we're going to compute the derivative of that node with respect to L So the derivative of l with respect to l is just uh one And then we're going to derive what is the derivative of l with respect to f with respect to d With respect to c with respect to e With respect to b and with respect to a And in neural network setting It'd be very interested in the derivative of basically this loss function l With respect to the weights of a neural network And here of course we have just these variables a b c and f But some of these will eventually represent the weights of a neural net And so we'll need to know how those weights are impacting The loss function So we'll be interested basically in the derivative of the output With respect to some of its leaf nodes And those leaf nodes will be the weights of the neural net And the other leaf nodes of course will be the data itself But usually we will not want or use the derivative of the loss function with respect to data Because the data is fixed But the weights will be iterated on Using the gradient information So next we are going to create a variable inside the value class That maintains the derivative of l with respect to that value And we will call this variable grad So there is a dot data and there's a self-adgrad And initially it will be zero And remember that zero is basically means no effect So at initialization we're assuming that every value does not impact Does not affect the output Right because if the gradient is zero That means that changing this variable is not changing the loss function So by default we assume that the gradient is zero And then now that we have grad and it's 0.0 We are going to be able to visualize it here after data So here grad is 0.4f And this will be in that grad And now we are going to be showing both the data and the grad And initialize that zero And we are just about getting ready to calculate the back propagation And of course this grad again as I mentioned Is representing the derivative of the output In this case l with respect to this value So with respect to So this is the derivative of l with respect to f Respect to d and so on So let's now fill in those gradients And actually do back propagation manually So let's start filling in these gradients And start all the way at the end as I mentioned here First we are interested to fill in this gradient here So what is the derivative of l with respect to l In other words if I change l by a tiny amount of h How much does l change? 
It changes by h So it's proportional and therefore the derivative will be 1 We can of course measure these or estimate these numerical gradients Numerically just like we've seen before So if I take this expression And I create a def lol function here And put this here Now the reason I'm creating a gating function lol here Is because I don't want to pollute or mess up the global scope here This is just kind of like a little staging area And as you know in Python all of these will be local variables to this function So I'm not changing any of the global scope here So here l1 will be l And then copy based on this expression We're going to add a small amount h In for example a Right and this will be measuring the derivative of l with respect to a So here this will be l2 And then we want to print that derivative So print l2 minus l1 Which is how much l changed And then normalize it by h So this is the rise over run And we have to be careful because l is a valid node So we actually want its data Um So that these are floats dividing by h And this should print the derivative of l with respect to a Because a is the one that we bumped a little bit by h So what is the derivative of l with respect to a It's 6 Okay And obviously If we change l by h Then that would be Here effectively Um This looks really awkward but changing l by h You see the derivative here is one Um That's kind of like the base case of what we are doing here So basically we come out comp here And we can manually set l dot grad to one This is our manual back propagation l dot grad is one And let's redraw And we'll see that we filled in Grad is one for l We're now going to continue the back propagation So let's here look at the derivatives of l with respect to d and f Uh let's do a d first So what we are interested in if I create a mark down on here Is we'd like to know Basically we have that l is d times f And we'd like to know what is uh dl by dd What is that? And if you know you're a calculus uh l is d times f So what is dl by dd? It would be f And if you don't believe me We can also just derive it because the proof would be fairly straightforward Uh we go to the definition of the A derivative Which is f of x plus h minus f of x divide h As a limit Limit of h goes to zero of this kind of expression So when we have l is d times f Then increasing d by h would give us the output of d plus h times f That's basically a full of x plus h, right minus d times f And then divide h And symbolically expanding out here We would have basically d times f plus h times f minus d times f divide h And then you see how the df minus df cancels So you're left with h times f divide h Which is f So in the limit as h goes to zero of You know derivative um definition We just get f in a case of d times f So symmetrically dl by d f will just be d So what we have is that f dot grad We see now is just the value of d Which is four And we see that d dot grad is just uh the value of f And so the value of f is negative two So we'll set those manually Let me erase this markdown node and then let's redraw what we have Okay, and let's just make sure that these were correct So we seem to think that dl by dd is negative two. So let's double check Um, let me erase this plus h from before and now we want the derivative with respect to f So let's just come here when I create f and let's do a plus h here And they should print a derivative of l with respect to f. 
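Concretely, the little staging function being described might look like this — written here for dL/da; bumping f instead of a gives the dL/df check discussed next:

```python
def lol():
    # numerical estimate of dL/da: run the forward pass twice,
    # the second time with a bumped by a small h, and measure rise over run
    h = 0.0001

    a = Value(2.0); b = Value(-3.0); c = Value(10.0); f = Value(-2.0)
    L1 = ((a * b + c) * f).data

    a = Value(2.0 + h); b = Value(-3.0); c = Value(10.0); f = Value(-2.0)
    L2 = ((a * b + c) * f).data

    print((L2 - L1) / h)   # numerical dL/da, roughly 6.0

lol()
```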
So we expect to see four Yeah, and this is four up to floating point funquiness And then dl by dd should be f Which is negative two grad is negative two So if we again come here and we change d d dot d dot plus equals h right here So we expect so we've added a little h and then we see how l changed and we expect to print Uh negative two There we go So we've numerically verified what we're doing here is what kind of like an inline gradient check gradient check is when we are deriving this like back propagation and getting the derivative with respect to all the intermediate results and then numerical gradient is just you know Um, estimating it using small step size Now we're going to the crux of back propagation So this will be the most important node to understand because if you understand the gradient for this node You understand all of back propagation and all of training on neural nets basically So we need to derive dl by dc in other words the derivative of l with respect to c Because we've computed all these other gradients already Now we're coming here and we're continuing the back propagation manually So we want dl by dc and then we'll also derive dl by dE Now here's the problem How do we derive dl by dc? We actually know the derivative l with respect to d so we know how l is sensitive to d But how is l sensitive to c? So if we wiggle c how does that impact l through d? So we know dl by dc And we also here know how c impacts d and so just very intuitively if you know the impact that c is having on d And the impact that d is having on l then you should be able to somehow put that information together to figure out how c impacts l And indeed this is what we can actually do So in particular we know just concentrating on d first Let's look at how what is the derivative basically of d with respect to c? So in other words what is dd by dc So here we know that d is c times c plus ee That's what we know and our interesting dd by dc If you would just know your calculus again and you remember then differentiating c plus e with respect to c You know that that gives you 1.0 And we can also go back to the basics and derive this because again we can go to our f of x plus h minus f of x derogh divide by h That's the definition of a derivative as h goes to 0 And so here Focusing on c and its effect on d we can basically do the f of x plus h will be c is incremented by h plus e That's the first evaluation of our function minus c plus e And then divide h and so what is this? Just expanding the sound this will be c plus h plus e minus c minus e And then you see here how c minus c cancels e minus e cancels were left with h over h which is 1.0 And so by symmetry also dd by dd will be 1.0 as well So basically the derivative of a some expression is very simple and this is the local derivative So I call this the local derivative because we have the final output value all the way at the end of this graph And we're now like a small node here and this is a little plus node And it the little plus node doesn't know anything about the rest of the graph that it's embedded in All it knows is that it did it plus it took a c and a e added them and created a d And this plus node also knows the local influence of c on d Or rather were rather the derivative of d with respect to c And it also knows the the derivative of d with respect to e But that's not what we want. 
That's just a local derivative What we actually want is dl by dc and l could l is here just one step away But in a general case this little plus node is could be embedded in like a massive graph So Again, we know how l impacts d and now we know how c and e impact d How do we put that information together to write dl by dc and the answer of course is the chain rule in calculus And so I pulled up a chain rule here from capidia And I'm going to go through this very briefly. So chain rule We capidia sometimes can be very confusing and calculus can can be very confusing like this is the way I learned Chain rule and was very confusing like what is happening? It's just complicated. So I like this expression much better If a variable z depends on a variable y which itself depends on a variable x Then z depends on x as well obviously through the intermediate variable y And in this case the chain rule is expressed as if you want dz by dx Then you take the dz by dy and you multiply it by dy by dx So the chain rule fundamentally is telling you how We chain these Uh derivatives together correctly So to differentiate through a function composition We have to apply a multiplication of those derivatives So that's really what chain rule is telling us and there's a nice little intuitive explanation here Which I also think is kind of cute The chain rule says that knowing the instantaneous rate of change of z with respect to y and y relative to x allows one to calculate the instantaneous rate of change of z relative to x As a product of those two rates of change simply the product of those two So here's a good one If a car travels twice as fast as a bicycle and the bicycle is four times as fast as a walking man Then the car travels two times four eight times as fast as a man And so this makes it very clear that the correct thing to do sort of is to multiply So cars twice as fast as a bicycle and bicycle is four times as fast as man So the car will be eight times as fast as the man And so we can take these intermediate rates of change if you will and Multiply them together and that justifies the Chain rule intuitively. So I have a look at chain rule about here. Really what it means for us is there's a very simple recipe for deriving what we want Which is dfl by dc And what we have so far is we know one And we know What is the impact of d on l so we know dl by dd the derivative of l respect to dd We know that that's negative two and now because of this local Reason that we've done here we know dd by dc So how does c impact d and in particular this is a plus node So the local derivative is simply 1.0 is very simple And so the chain rule tells us that dl by dc going through this intermediate variable We'll just be simply dl by dd times dd by dc That's chain rule So this is identical to what's happening here except Z is rl y is our d and x is our c So we literally just have to multiply these and because These local derivatives like dd by dc are just one We basically just copy over dl by dd because this is just times one So what is it did so because dl by dd is negative two what is dl by dc? 
Well, it's the local gradient 1.0 times dl by dd which is negative two So literally what a plus node does you can look at it that way is it literally just routes the gradient Because the plus nodes local derivatives are just one and so in the chain rule one times dl by dd is Is just dl by dd and so that derivative just gets routed to both c and to e in the skates So basically We have that e dot grad Or what's our good c since that's the one we built that is negative two times one negative two And in the same way by symmetry e dot grad will be negative two that's the claim So we can set those We can redraw And you see how we just assign negative two negative two So there's back propagating signal which is carrying the information of like what is the derivative of l with respect to all the intermediate nodes We can imagine it almost like flowing backwards through the graph and a plus node will simply distribute the derivative to all the leaf nodes Assuring to all the children nodes of it So this is the claim and now let's verify it So let me remove the plus h here from before And now instead what we're going to do is we want to increment c so c dot data will be incremented by h And when I run this we expect to see negative two Negative two and then of course for e So e dot data plus equals h and we expect to see negative two Simple So those are the derivatives of these internal nodes And now we're going to recurse our way backwards again And we're again going to apply the chain rule So here we go our second application of chain rule and we will apply it all the way through the graph Which just happened to only have one more node remaining We have that d l by d e As we have just calculated is negative two so we know that So we know the derivative of l with respect to e And now we want d l by d a Right and the chain rule is telling us that that's just d l by d e Negative two times the local gradient So what is the local gradient basically d e by d a We have to look at that So I'm a little times node Inside a massive graph and I only know that I did a times b and I produced an e So now what is d e by d a and d e by d b that's the only thing that I sort of know about that's my local gradient So because we have that e is a times b We're asking what is d e by d a And of course we just did that here we have a times so I'm not going to redrive it But if you want to differentiate this with respect to a you'll just get b right the value of b Which in this case is negative 3.0 So basically we have that d l by d a Well, let me just do it right here We have that a dot grad and we are applying chain rule here Is d l by d e which we see here is negative two times what is d e by d a It's the value of b which is negative three That's it And then we have b dot grad is again d l by d e which is negative two Just the same way times What is d e by d um d b is the value of a which is 2.0 That's the value of a So these are our claimed derivatives Let's read draw and we see here that a dot grad turns out to be six because that is negative two times negative three And b dot grad is negative four times sorry is negative two times two which is negative four So those are our claims let's delete this and let's verify them We have a here a dot data plus equals h So the claim is that a dot grad is six. Let's verify six and we have b dot data plus equals h So nudging b by h and looking at what happens We claim it's negative four And indeed it's negative four plus minus again float oddness um And uh That's it. 
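Collected in one place, the manual backpropagation just performed looks like this (assuming the grad field added to Value earlier):

```python
# manual backprop through L = d * f, with d = e + c and e = a * b
L.grad = 1.0              # base case: dL/dL = 1
f.grad = d.data           # dL/df = d =  4.0
d.grad = f.data           # dL/dd = f = -2.0
c.grad = d.grad * 1.0     # '+' just routes the gradient: dd/dc = 1
e.grad = d.grad * 1.0     # dd/de = 1
a.grad = e.grad * b.data  # chain rule through '*': de/da = b, so  6.0
b.grad = e.grad * a.data  # de/db = a, so -4.0
```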
This that was the manual backpropagation All the way from here to all the leaf notes and I've done it piece by piece And really all we've done is as you saw we iterated through all the nodes one by one And locally applied the chain rule We always know what is the derivative of l with respect to this little output And then we look at how this output was produced This output was produced through some operation And we have the pointers to the children nodes of this operation And so in this little operation we know what the local derivatives are And we just multiply them onto the derivative always So we just go through and recursively multiply on the local derivatives And that's what backpropagation is is just a recursive application of chain rule Backwards through the computation graph Let's see this power in action just very briefly What we're good to do is we're going to Uh, nudge our inputs to try to make l go up So in particular what we're doing is we want a dot data. We're going to change it And if we want l to go up that means we just have to go in the direction of the gradient So a Should increase in the direction of gradient by like some small step amount. This is the step size And we don't just want this for b, but also for b Also for c Also for f Those are leaf nodes which we usually have control over And if we nudge in direction of the gradient we expect a positive influence on l So we expect l to go up positively Uh, so it should become less negative it should go up to say negative, you know, six or something like that Uh, it's hard to tell exactly and we have to reroute the forward path. So let me just um Do that here um This would be the forward pass f would be unchanged. This is effectively the forward pass And now if we print l dot data We expect because we nudge all the values or the inputs in the rational gradient We expect it less negative l. We expect it to go up So maybe it's negative six or so. Let's see what happens Okay negative seven And uh, this is basically one step of an optimization that will end up running And really this gradient just give us some power because we know how to influence the final outcome And this will be extremely useful for training. You know, that's as we'll see So now I would like to do one more uh example of manual back propagation using a bit more complex and uh useful example We are going to back propagate through a neuron so We want to eventually build out neuron that works in an as simplest cases are multi-layer perceptrons as they're called So this is a two layer neuron that And it's got these hidden layers made up of neurons and these neurons are fully connected to each other Now biologically neurons are very complicated devices But we have very simple mathematical models of them And so this is a very simple mathematical model of a neuron. You have some inputs xs And then you have these synapses that have weights on them So um, the w's are weights Um, and then the synapse interacts with the input to this neuron multiplicatively So what flows to the cell body Of this neuron is w times x But there's multiple inputs. 
There are many w times x terms flowing to the cell body. The cell body then also has some bias — this is kind of like the innate trigger-happiness of this neuron, so the bias can make it a bit more trigger-happy or a little less trigger-happy regardless of the input. But basically we're taking all the w times x of all the inputs, adding the bias, and then we take it through an activation function, and this activation function is usually some kind of a squashing function, like a sigmoid or a tanh or something like that. As an example, we're going to use tanh. Numpy has an np.tanh, so we can call it on a range and then we can plot it. This is the tanh function, and you see that the inputs, as they come in, get squashed on the y coordinate here. Right at zero we get exactly zero, and then as you go more positive in the input you'll see that the function only goes up to one and then plateaus out; so if you pass in very positive inputs, we're going to cap them smoothly at one, and on the negative side we're going to cap them smoothly at negative one. So that's tanh, and that's the squashing function, or an activation function, and what comes out of this neuron is just the activation function applied to the dot product of the weights and the inputs. So let's write one out. I'm going to copy-paste because I don't want to type too much. Okay, so here we have the inputs x1 and x2 — so this is a two-dimensional neuron; two inputs are going to come in. Then we have the weights of the neuron, w1 and w2, and these weights again are the synaptic strengths for each input. And this is the bias of the neuron, b. Now what we want to do, according to this model, is multiply x1 times w1 and x2 times w2, and then add the bias on top of it. It gets a little messy here, but all we are trying to do is x1*w1 + x2*w2 + b — except I'm doing it in small steps so that we actually have pointers to all of these intermediate nodes. So we have an x1w1 variable and an x2w2 variable, and I'm also labeling them. So n is now the cell body's raw activation, without the activation function for now, and this should be enough to plot it: draw_dot of n gives us x1 times w1 and x2 times w2 being added, then the bias gets added on top of this, and this n is the sum. So we're now going to take it through an activation function, and let's say we use tanh, so that we produce the output. So what we'd like to do here is o = n.tanh() — but we haven't yet written tanh. Now, the reason that we need to implement another tanh function here is that tanh is a hyperbolic function, and so far we've only implemented plus and times, and you can't make a tanh out of just pluses and times — you also need exponentiation. So tanh is this kind of formula here; you can use either one of these, and you see that there is exponentiation involved, which we have not implemented yet for our little value node, so we're not going to be able to produce tanh yet, and we have to go back up and implement something like it. Now, one option here is that we could actually implement exponentiation, and we could return the exp of a value instead of the tanh of a value — because if we had exp, then we'd have everything else that we need: we know how to add and we know how to multiply, so we'd be able to create a tanh if we knew how to exp. But for the purposes of this example, I specifically wanted to show you that we don't necessarily need to have the most atomic pieces in this value object. We can actually create functions at arbitrary points of abstraction: they can be complicated functions, but they can also be very, very simple functions like a plus, and it's totally up to us. The only thing that matters is that we know how to differentiate through any one function — we take some inputs and we make an output; that function can be arbitrarily complex, as long as we know how to create the local derivative. If you know the local derivative of how the inputs impact the output, then that's all you need. So we're going to cluster up all of this expression, and we're not going to break it down to its atomic pieces; we're just going to directly implement tanh. So let's do that: def tanh, and then out will be a Value of — and we need this expression here, so let me actually copy-paste. We grab n, which is self.data, and then this, I believe, is the tanh: (math.exp(2*n) - 1) / (math.exp(2*n) + 1). Maybe I can call this x, just so that it matches exactly. Okay, and now this will be t, and the children of this node — there's just one child, and I'm wrapping it in a tuple, so this is a tuple of one object, just self. And here the name of this operation will be 'tanh', and we're going to return that.
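As a sketch, the tanh method being added to the Value class (same fields as before):

```python
import math

# method added to the Value class
def tanh(self):
    x = self.data
    # tanh expressed via exponentials: (e^(2x) - 1) / (e^(2x) + 1)
    t = (math.exp(2 * x) - 1) / (math.exp(2 * x) + 1)
    out = Value(t, (self,), 'tanh')  # one child (self), op name 'tanh'
    return out
```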
Okay, so now Value should be implementing tanh, and we scroll all the way down here, and we can actually do n.tanh(), and that's going to return the tanh output of n. And now we should be able to draw_dot of o, not of n. So let's see how that worked. There we go — n went through tanh to produce this output. So now tanh is a sort of micrograd-supported node here as an operation, and as long as we know the derivative of tanh, then we'll be able to backpropagate through it. Now let's see this tanh in action. Currently it's not squashing too much, because the input to it is pretty low. So if the bias were increased to, say, eight, then we'll see that what's flowing into the tanh now is two, and tanh is squashing it to 0.96, so we're already hitting the tail of this tanh, and it will smoothly go up to one and then plateau out over there. Okay, so now I'm going to do something slightly strange: I'm going to change this bias from eight to this number 6.88 etc., and I'm doing this for specific reasons, because we're about to start backpropagation and I want to make sure that our numbers come out nice — not crazy numbers, but nice numbers that we can sort of understand in our head. Let me also add o's label — o is short for output here. So that's o: 0.88 flows into the tanh and comes out as 0.7. So now we're going to do backpropagation, and we're going to fill in all of the gradients. What is the derivative of o with respect to all the inputs here? And of course, in a typical neural network setting, what we really care about the most is the derivative of this neuron's output with respect to the weights — specifically w2 and w1 — because those are the weights that we'll be changing as part of the optimization. The other thing we have to remember is that here we have only a single neuron, but in a neural net you typically have many neurons and they're connected; so this is only one small neuron, a piece of a much bigger puzzle, and eventually there's a loss function that measures the accuracy of the neural net, and we're backpropagating with respect to that accuracy and trying to increase it. So let's start off the backpropagation here.
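For reference, here is a sketch of the forward pass we are about to backpropagate through. The inputs and weights are the ones used in the video (x1 = 2, x2 = 0, w1 = -3, w2 = 1); the bias is truncated here (the video picks a longer hand-chosen constant so the numbers come out nice), and the label argument is assumed to be on the constructor as added earlier:

```python
x1 = Value(2.0, label='x1'); x2 = Value(0.0, label='x2')      # inputs
w1 = Value(-3.0, label='w1'); w2 = Value(1.0, label='w2')     # weights (synaptic strengths)
b = Value(6.88, label='b')   # bias; the video uses a longer hand-picked constant here

x1w1 = x1 * w1; x1w1.label = 'x1*w1'
x2w2 = x2 * w2; x2w2.label = 'x2*w2'
x1w1x2w2 = x1w1 + x2w2; x1w1x2w2.label = 'x1*w1 + x2*w2'
n = x1w1x2w2 + b; n.label = 'n'   # raw cell-body activation, roughly 0.88
o = n.tanh(); o.label = 'o'       # squashed output, roughly 0.7
```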
Oh with respect to oh the base case sort of we know always is that the gradient is just one point there So let me fill it in and then let me split out the drawing function um here And then here sell Clear this output here, okay So now when we draw oh we'll see that oh that grad is one So now we're going to back propagate through the 10 H So to back propagate through 10 H we need to know the local derivative of 10 H So if we have that oh is 10 H of n Then what is d oh by d n? Now what you could do is you could come here and you could take this expression and you could do your calculus derivative taking Um, and that would work But we can also just scroll down with the pd i here Into a section that hopefully tells us that derivative uh d by dx of 10 H of x is Any of these I like this one one minus 10 H square of x So this is one minus 10 H of x squared So basically what this is saying is that d oh by d n is one minus 10 H often squared And we already have 10 H of n. It's just oh So it's one minus oh squared. So it was the output here. So the output is this number Odadega is this number and then What this is saying is that d oh by d n is one minus this squared. So one minus Odadega squared It's point five conveniently So the local derivative of this 10 H operation here is point five and uh, so that would be d oh by d n so we can fill in that n dot grad Is point five. We'll just fill in So this is exactly point five one half So now we're going to continue the back propagation This is point five and this is a plus node So how is back prop going to what is back prop going to do here And if you remember our previous example a plus is just a distributor of gradient So this gradient will simply flow to both of these equally And that's because the local derivative of this operation is one for every one of its nodes So one times point five is point five So therefore we know that this node here which we called this It's grad is just point five and we know that b dot grad is also point five So let's set those and let's draw So those are point five continuing. We have another plus point five again. We'll just distribute you So point five will flow to both of these so we can set theirs x2w2 as well that grad is point five And let's read wrong. 
Pluses are my favorite operations to back propagate through because it's very simple So now it's flowing into these expressions is point five And so really again keep in mind what the derivative is telling us at every point in time along here This is saying that if we want the output of this neuron to increase Then the influence on these expressions is positive on the output both of them are positive Contribution to the output So now back propagating to x2 and w2 first This is a times node so we know that the local derivative is no the other term So if we want to calculate x2 dot grad Then can you think through what it's going to be So x2 dot grad will be w2 dot data times This x2w2 dot grad right And w2 dot grad will be x2 dot data times x2w2 dot grad Right, so that's the little local piece of chain rule Let's set them and let's redraw So here we see that the gradient on our weight two is zero because x2's data was zero Right, but x2 will have the gradient point five because data here was one And so what's interesting here, right is because the input x2 was zero And because of the way the times works Um, of course this gradient will be zero and think about intuitively why that is Derbit it always tells us the influence of This on the final output if I will w2 how is the output changing? It's not changing because we're multiplying by zero So because it's not changing there is no derivative and zero is the correct answer Because we're multiplying or swashing with that zero And let's do it here point five should come here and flow through this times And so we'll have that x1 dot grad is Can you think through a little bit what what this should be The local derivative of times with respect to x1 is going to be w1 So w1's data times x1 w1 dot grad And w1 dot grad will be x1 dot data times x1 w2 w1 dot grad Let's see what those came out to be So this is point five. So this would be negative 1.5 and this would be one And we backpropagated through this expression these are the actual final derivatives So if we want this neurons output to increase We know that what's necessary is that uh W2 we have no gradient W2 doesn't actually matter to this neuron right now But this neuron this weight should uh go up So if this weight goes up then this neuron's output would have gone up And proportionally because the gradient is one Okay, so during the backpropagation manual is obviously ridiculous So we are now going to put an end to this suffering And we're going to see how we can implement uh the backward pass a bit more automatically. We're not going to be doing all of it manually out here It's now pretty obvious to us by example how these pluses and times are backpropagated ingredients So let's go up to the value object and we're going to start codifying what we've seen uh in the examples below So we're going to do this by storing a special self-doubt backward And uh underscore backward and this will be a function Which is going to do that little piece of chain rule at each little node that complete that took inputs and produced output Uh we're going to store How we are going to chain the the outputs gradient into the inputs gradients So by default This will be a function that uh doesn't do anything Uh so um And you can also see that here in the value in micrograd So with this backward function And by default doesn't do anything This is a function And that would be sort of the case for example for leaf node for leaf node. 
There's nothing to do But now if when we're creating these out values These out values are an addition of self and other And so we're going to want to self set Out's backward to be the function that propagates the gradient So So Let's define what should happen And we're going to store it in a closure. Let's define what should happen when we call Out's grad for an addition Our job is to take Out's grad and propagate it into self-scrad and other dot grad So basically we want to self-self grad to something And We want to set others that grad to something Okay And the way we saw below how chain rule works We want to take the local derivative times the um sort of global derivative I should call it which is the derivative of the final output of the expression with respect to out's data Respect to out so The local derivative of self in an addition is 1.0 So it's just 1.0 times out's grad That's the chain rule And others that grad will be 1.0 times out grad And what you basically what you're seeing here is that out's grad Will simply be copied onto self-scrad and others grad as we saw happens for an addition operation So we're going to later call this function to propagate the gradient having done an addition Let's not do multiplication we're going to also define a dot backward And we're going to set its backward to be backward And we want to chain out grad into self-scrad And others that grad And this will be a little piece of chain rule for multiplication So we'll have so what should it be? Can you think through So what is the local derivative? Here the local derivative was others that data And then There's other stuff data and then times out that grad that's chain rule And here we have self-that data times out that grad That's what we've been doing And finally here for 10h that backward And then we want to set out backwards to be just backward And here we need to Back-propagate we have out that grad and we want to chain it into self-that grad And self-that grad will be The local derivative of this operation that we've done here which is 10h And so we saw that the local gradient is 1 minus the 10h of x squared which here is t That's the local derivative because that's t is the output of this 10h So 1 minus t squared is the local derivative And then gradient Has to be multiplied because of the chain rule So out grad is chained through the local gradient into self-that grad And that should be basically it So we're going to redefine our value node We're going to swing all the way down here And we're going to redefine our expression Make sure that all the grads are zero Okay, but now we don't have to do this manually anymore We are going to basically be calling the dot backward in the right order So first we want to call oaths dot backward So o was the outcome of 10h Right so column oaths that back those those backward Will be this function. This is what it will do Now we have to be careful Because there's times out that grad and out that grad remember is initialized to zero So here we see grad zero. So as a base case We need to set oath dot grad to 1.0 To initialize this with one And then once this is one We can call o dot backward and what that should do is it should propagate this grad through 10h So the local derivative times the global derivative which is initialize at one. 
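Put together, the three _backward closures described above look roughly like this. They are written here with the accumulating += that the video switches to a little later (the overwrite issue is discussed below), so the sketch stays correct when a value is reused; the default no-op self._backward on Value is assumed as just described:

```python
import math

def __add__(self, other):
    out = Value(self.data + other.data, (self, other), '+')
    def _backward():
        # local derivative of '+' is 1 for both inputs,
        # so the output's grad is routed straight to each of them
        self.grad += 1.0 * out.grad
        other.grad += 1.0 * out.grad
    out._backward = _backward
    return out

def __mul__(self, other):
    out = Value(self.data * other.data, (self, other), '*')
    def _backward():
        # local derivative of a*b w.r.t. a is b (and vice versa), times the chain
        self.grad += other.data * out.grad
        other.grad += self.data * out.grad
    out._backward = _backward
    return out

def tanh(self):
    t = (math.exp(2 * self.data) - 1) / (math.exp(2 * self.data) + 1)
    out = Value(t, (self,), 'tanh')
    def _backward():
        # d/dx tanh(x) = 1 - tanh(x)^2, and tanh(x) is exactly the output t
        self.grad += (1 - t ** 2) * out.grad
    out._backward = _backward
    return out
```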
So this should Um Uh, no So I thought about redoing it but I figured I should just leave the error in here because it's pretty funny Why is not I object not collable Uh, it's because I screwed up we're trying to save these functions. So this is correct this here We don't want to call the function because that returns none these functions return none Which is want to store the function So let me redefine the value object And then we're going to come back and redefine the expression draw dot Everything is great. O dot grad is one O dot grad is one and now Now this should work of course Okay, so all that backward should have This grad should now be point five if we withdraw and everything was correctly point five yay Okay, so now we need to call ns dot grad ns dot backward sorry ns backward So that seems to have worked So ns dot backward Rapted the gradient to both of these so this is looking great Now we can of course call b dot grad Be the backwards or What's going to happen? Well b doesn't have it backward Bees backward because b is a leaf node Bees backward is by initialization the empty function So nothing would happen but we can call call it on it But when we call This one backwards Then we expect this point five to get further routed Right, so there we go point five point five And then finally We want to call it here on x2w2 And on x1w1 Let's do both of those and there we go So we get 0.5 negative 1.5 and 1 exactly as we did before But now we've done it through calling that backward Sir manually So we have one last piece to get rid of which is us calling underscore backward manually So let's think through what we are actually doing Um We've laid out a mathematical expression and now we're trying to go backwards through that expression Um, so going backwards through the expression just means that we never want to call a dot backward for any node Before we've done sort of um Everything after it So we have to do everything after it before ever going to call dot backward on any one node We have to get all of its full dependencies everything that it depends on has to Propagate to it before we can continue that propagation So this ordering of graphs can be achieved using something called topological sort So topological sort Is basically a laying out of a graph Such that all the edges go only from left to right basically So here we have a graph So direction as such like a graph a dag And this is two different topological orders of it, I believe Where basically you'll see that it's a laying out of the nodes such that all the edges go only one way from left to right And implementing topological sort you can look in Wikipedia and so on. 
I'm not going to go through it in detail But basically this is what builds a topological graph Um, we maintain a set of visited nodes and then we are um Going through starting at some root node which for us is oh, that's what we want to start the top logical sort And starting at oh we go through all of its children and we need to lay them out from left to right And basically this starts at oh if it's not visited then it marks it as visited and then it iterates through all of its children And calls build topological on them And then uh after it's gone through all the children it adds itself So basically This node that we're going to call it on like say oh is only going to add itself to the topical list After all of the children have been processed and that's how this function is guaranteeing That you're only going to be in the list once all your children are in the list and that's the invariant that is being maintained So if we built up on oh and then inspect this list We're going to see that it ordered our value objects And the last one is the value of 0.7 which is the output So this is oh and then this is n and then all the other nodes get laid out before it So that built the topological graph and really what we're doing now is we're just calling that underscore backward on all of the nodes in a topological order So if we just reset the gradients they're all zero So what did we do? We started by setting o.grad to be one That's that base case Then we built the topological order And then we went for node in reversed octopo Now in the reverse order because this list goes from you know we need to go through it in reverse order So starting at o node dot backward and this should be it There we go Those are the correct derivatives finally we are going to hide this functionality So I'm going to copy this and we're going to hide it inside the value class because we don't want to have all that code lying around So instead of an underscore backward we're now going to define an actual backward so that backward without the underscore And that's going to do all the stuff that we just derived So let me just clean this up a little bit. So We're first going to Build the topological graph Starting at self So build topo of self Will populate the topological order into the topo list which is a local variable Then we set self-adgrad to be one And then for each node in the reversed list so starting at us and going to all the children Uh, underscore backward And um, that should be it. So save Come down here we define Okay, all the grand are zero And now what we can do is oh, down backward without the underscore and There we go and that's uh, that's back propagation Please for one euro now we shouldn't be too happy with ourselves actually because we have a bad bug Um, and we have not surfaced the bug because of some specific conditions that we are have we have to think about right now So here's the simplest case that shows the bug Say I create a single node a And then I create a b that is e plus a And then I call it backward So what's gonna happen is a is three and then a is b is a plus a so there's two arrows on top of each other here Then we can see that b is of course the forward pass works b is just a plus a which is six But the gradient here is not actually correct That we calculate it automatically And that's because um You Of course, uh, just doing calculus in your head the derivative of b with respect to a should be uh two One plus one it's not one And totally what's happening here, right? 
So b is the result of a plus a and then we call backward on it So let's go up and see what that does um, b is a result of addition so out as b And then when we call backward what happened is self that grad was set to one And then other that grad was set to one But because we're doing a plus a self and other are actually these as a object So we are overriding the gradient we are setting it to one and then we are setting it again to one and that's why it stays at one So that's a problem There's another way to see this in a little bit more complicated expression So here we have a and b and then uh, d will be the multiplication of the two and he will be the addition of the two and um Then we multiply times d to get f and then we call it f that backward And these gradients if you check will be incorrect So fundamentally what's happening here again is um Basically, we're going to see an issue anytime we use a variable more than once Until now in these expressions above every variable is used exactly once so we didn't see the issue But here if a variable is used more than once what's going to happen during backward pass We're backpropagating from f to e to d so far so good But now e calls it backward and it deposits its gradients to a and b But then we come back to d and call backward and it overrides those gradients at a and b So that's obviously a problem And the solution here if you look at the multi-variate case of the chain rule and its generalization there The solution there is basically that we have to accumulate these gradients these gradients add And so instead of setting those gradients We can simply do plus equals we need to accumulate those gradients plus equals plus equals plus equals plus equals And this will be okay remember because we are initializing them at zero. 
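Putting the pieces together — a sketch of the backward method built on the topological sort described above, with the a + a case just discussed as a quick check; this assumes Value initializes self.grad = 0.0 and self._backward to a no-op, and that the _backward closures accumulate with +=:

```python
def backward(self):
    # topological order of all nodes in the graph ending at self
    topo, visited = [], set()
    def build_topo(v):
        if v not in visited:
            visited.add(v)
            for child in v._prev:
                build_topo(child)
            topo.append(v)   # a node goes in only after all of its children
    build_topo(self)

    # base case, then apply each node's little piece of the chain rule in reverse order
    self.grad = 1.0
    for node in reversed(topo):
        node._backward()

# the case that exposes the overwrite bug, handled correctly by += accumulation
a = Value(3.0)
b = a + a
b.backward()
print(a.grad)   # 2.0 (one contribution from each use of a), not 1.0
```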
So they started zero and then any contribution that flows backwards Will simply add So now if we redefine this one Because the plus equals this now works because a dot grad started at zero and we called b dot backward We deposit one and then we deposit one again and now this is two which is correct And here this will also work and we'll get correct gradients Because when we call e dot backward we will deposit the gradients from this branch And then we get to back to d dot backward it will deposit its own gradients And then those gradients simply add on top of each other And so we just accumulate those gradients and that fixes the issue Okay, now before we move on let me actually do a bit of cleanup here and delete some of these some of the intermediate work So I'm not going to need any of this now that we've derived all of it um We are going to keep this because I want to come back to it Delete the 10h delete arm when you can example delete the step delete this keep the code that draws And then delete this example and leave behind only the definition of value And now let's come back to this non-linearity here that we implemented the 10h Now I told you that we could have broken down 10h into its explicit atoms In terms of other expressions if we had the x function So if you remember 10h is defined like this And we chose to develop 10h as a single function And we can do that because we know it's derivative and we can back propagate through it But we can also break down 10h into an expressive function of x And I would like to do that now because I want to prove to you that you can all the same results and all the same gradients Um, but also because it forces us to implement a few more expressions It forces us to do Accumentation, addition, subtraction, division and things like that And I think it's a good exercise to go through a few more of these Okay, so let's scroll up To the definition of value And here one thing that we currently can't do is we can do like a value of say 2.0 But we can't do you know here for example we want to add a constant one and we can't do something like this And we can't do it because it's just into object has no attribute data That's because a plus one comes right here to add And then other is the integer one and then here Python is trying to access one dot data And that's not a thing And that's because basically one is not a value object and we only have addition from value objects So as a matter of convenience so that we can create expressions like this and make them make sense We can simply do something like this Basically we let other alone if other is an instance of value But if it's not an instance of value we're going to assume that it's a number like an integer or float And we're going to simply wrap it in in value And then other will just become value of other and then other will have a data attribute and this should work So if I just say this read the farm value then this should work There we go Okay, now let's do the exact same thing for multiply because we can't do something like this Again for the exact same reason so we just have to go to mall and if other is Not a value then let's wrap it in value Let's redefine value and now this works Now here's a kind of unfortunate and not obvious part A times two works we saw that but two times a is that going to work You'd expect it to write but actually it will not And the reason it won't is because Python doesn't know Like when when you do a times two Basically um, so a times two Python will go and it will 
basically do something like a.__mul__(2) — that's basically what will get called. But 2 times a is the same as 2.__mul__(a), and 2 can't multiply a Value, so Python is really confused about that. So instead, the way this works in Python is that you are free to define something called __rmul__, and __rmul__ is kind of like a fallback: if Python can't do 2 times a, it will check whether, by any chance, a knows how to multiply 2, and that will be called into __rmul__. So because Python can't do 2 times a, it will check whether there is an __rmul__ in Value, and because there is, it will now call that; and what we'll do there is swap the order of the operands. So basically 2 times a will redirect to __rmul__, and __rmul__ will basically call a times 2, and that's how that will work. So, redefining that with __rmul__, 2 times a becomes four. Okay, now looking at the other elements that we still need: we need to know how to exponentiate and how to divide. So let's do the exponentiation part first. We're going to introduce a single function, exp, here, and exp is going to mirror tanh in the sense that it's a single function that transforms a single scalar value and outputs a single scalar value. So we pop out the Python number, we use math.exp to exponentiate it, and we create a new Value object — everything that we've seen before. The tricky part, of course, is how do you backpropagate through e to the x? And so here you can potentially pause the video and think about what should go here. Okay, so basically we need to know what the local derivative of e to the x is: d by dx of e to the x is famously just e to the x, and we've already calculated e to the x — it's inside out.data. So we can do out.data times out.grad — that's the chain rule; we're just chaining onto the incoming gradient. And this is what the expression looks like: it looks a little confusing, but this is what it is, and that's the exponentiation. So, redefining, we should now be able to call a.exp(), and hopefully the backward pass works as well. Okay, and the last thing we'd like to do, of course, is to be able to divide. Now, I will actually implement something slightly more powerful than division, because division is just a special case of something a bit more powerful. In particular, just by rearranging: if we have, say, b = Value(4.0) here, we'd like to basically be able to do a divided by b, and we'd like this to give us 0.5. Now, division can actually be reshuffled as follows: if we have a divided by b, that's actually the same as a multiplying one over b, and that's the same as a multiplying b to the power of negative one. And so what I'd like to do instead is implement the operation x to the k for some constant k — an int or a float — and we'd like to be able to differentiate that; and then, as a special case, negative one will be division. I'm doing it that way just because it's more general, and, yeah, you might as well do it that way. So basically what I'm saying is we can redefine division — which we will put here somewhere — so self divided by other can actually be rewritten as self times other to the power of negative one. And now Value raised to the power of negative one — we have to now define that. So we need to implement the pow function. Where am I going to put the pow function? Maybe here somewhere; this is __pow__.
So this function will be called when we try to raise a value to some power, and other will be that power. Now, I'd like to make sure that other is only an int or a float. Usually other is some kind of a different value object, but here other will be forced to be an int or a float; otherwise the math won't work for what we're trying to achieve in this specific case — it would be a different derivative expression if we wanted other to be a Value. So here we create the output value, which is just this data raised to the power of other, and other here could be, for example, negative one — that's what we're hoping to achieve. And then this is the backward stub, and this is the fun part, which is: what is the chain rule expression here for backpropagating through the power function, where the power is some kind of a constant? So this is the exercise — maybe pause the video here and see if you can figure out yourself what we should put here. Okay, so you can actually go here and look at the derivative rules as an example, and we see lots of the derivatives that you hopefully know from calculus. In particular, what we're looking for is the power rule, because that's telling us that if we're trying to take d by dx of x to the n, which is what we're doing here, then that is just n times x to the n minus one. Okay, so that's telling us the local derivative of this power operation. So all we want here — basically, n is now other, and self.data is x — so this now becomes other, which is n, times self.data — which is now a Python int or float, not a Value object; we're accessing the data attribute — raised to the power of other minus one, or n minus one. I could put brackets around this, but it doesn't matter, because power takes precedence over multiply in Python, so that would have been okay. And that's the local derivative only; now we have to chain it, and we chain it simply by multiplying by out.grad — that's the chain rule. And this should technically work, and we're going to find out soon. But now, if we do this, this should now work, and we get 0.5 — so the forward pass works. But does the backward pass work? And I realize that we actually also have to know how to subtract, so right now a minus b will not work yet.
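Before adding subtraction, here is a sketch of the convenience pieces just added to the Value class — plain numbers are wrapped inside __add__ and __mul__ with a line like `other = other if isinstance(other, Value) else Value(other)`, and then __rmul__, exp, __pow__ and the division built on top of it look roughly like this:

```python
import math

# additions to the Value class, roughly as described

def __rmul__(self, other):
    # fallback for 2 * a: Python can't do (2).__mul__(a), so it tries a.__rmul__(2);
    # we just swap the operands (the constant then gets wrapped inside __mul__)
    return self * other

def exp(self):
    out = Value(math.exp(self.data), (self,), 'exp')
    def _backward():
        # d/dx e^x = e^x, which is exactly out.data, chained with out.grad
        self.grad += out.data * out.grad
    out._backward = _backward
    return out

def __pow__(self, other):
    assert isinstance(other, (int, float)), "only int/float powers supported here"
    out = Value(self.data ** other, (self,), f'**{other}')
    def _backward():
        # power rule: n * x^(n-1), chained with the output's gradient
        self.grad += other * (self.data ** (other - 1)) * out.grad
    out._backward = _backward
    return out

def __truediv__(self, other):
    # a / b rewritten as a * b**-1
    return self * other**-1
```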
We need one more piece of code here and basically this is the Subtraction and the way we're gonna implement subtraction is we're gonna implement it by addition of an negation and then to implement negation We're gonna multiply by negative one So just again using the stuff we've already built and just um expressing it in terms of what we have And a minus b does not work in Okay, so now let's scroll again to this expression here for this neuron And let's just compute the backward pass here once we've defined oh and let's draw it So here's the gradients for all these lead nodes for this two dimensional neuron that has a 10H that we've seen before So now what I'd like to do is I'd like to break up this 10H into this expression here So let me copy paste this here And now instead of we'll preserve the label and we will change how we define oh So in particular we're going to implement this formula here So we need each of the two x minus one over each of the x plus one So e to the two x we need to take two times m and we need to explain it That's e to the two x and then because we're using it twice Let's create an intermediate variable e And then define oh as e plus one over e minus one over e plus one e minus one over e plus one And that should be it and then we should be able to draw dot of oh So now before I run this what do we expect to see Number one we're expecting to see a much longer graph here because we've broken up 10H into a bunch of other operations But those operations are mathematically equivalent and so what we're expecting to see is number one The same result here. So the forward pass works and number two because of that mathematical equivalence We expect to see the same backward pass and the same gradients on these lead nodes. So these gradients should be identical So let's run this So number one let's verify that instead of a single 10H node we have now x and we have Plus we have times negative one. This is the division And we end up with the same forward pass here and then the gradients we have to be careful because they're in slightly different order potentially The gradients for w2 x2 should be 0 and 0.5 W2 and x2 are 0 and 0.5 and w1 x1 are 1 and negative 1.5 1 and negative 1.5 So that means that both our forward passes and backward passes were correct because this turned out to be equivalent to 10H before And so the reason I wanted to go through this exercises number one we got to practice a few more operations and Writing more backwards passes and number two. I wanted to illustrate the point that the um The level at which you implement your operations is totally up to you. 
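For completeness, a sketch of those last two pieces, together with the decomposed tanh that was just verified to give the same forward and backward results:

```python
def __neg__(self):          # -a, implemented as multiply by -1
    return self * -1

def __sub__(self, other):   # a - b, implemented as a + (-b)
    return self + (-other)

# tanh broken down into the primitives we now have, instead of a single tanh node
e = (2 * n).exp()           # e^(2n)
o = (e - 1) / (e + 1)
o.backward()                # should reproduce the same gradients as the fused tanh
```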
You can implement backward passes for tiny Expressions like a single individual plus or a single times or you can implement them for say 10H Which is a kind of a potentially you can see it as a composite operation because it's made up of all these more atomic operations But really all of this is kind of like a fake concept all that matters is we have some kind of inputs and some kind of an output And this output is a function of the inputs in some way and as long as you can do forward pass and the backward pass of that little operation It doesn't matter what that operation is um and how composite it is If you can write the local gradients you can chain the gradient and you can continue back propagation So the design of what those functions are is completely up to you So now I would like to show you how you can do the exact same thing But using a modern deep neural network library like for example PyTorch Which I've roughly modeled micrograd By and so PyTorch is something you would use in production and I'll show you how you can do the exact same thing But in PyTorch API So I'm just going to copy-paste it in and walk you through it a little bit. This is what it looks like So we're going to import PyTorch and then we need to define these Value objects like we have here now micrograd is a scalar valued um engine so we only have scalar values like 2.0 But in PyTorch everything is based around tensors and like I mentioned tensors are just Indimensional arrays of scalars So that's why things get a little bit more complicated here. I just need a scalar valued tensor A tensor with just a single element But by default when you work with PyTorch you would use um more complicated tensors like this so if I import PyTorch Then I can create tensors like this and this tensor for example is a 2 by 3 array Of scalar scalars Um in a single compact representation. So you can check it shape. We see that it's a 2 by 3 array and so So this is usually what you would work with um in the actual libraries. 
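Roughly, the PyTorch cell being pasted in and walked through next looks like this (the bias is truncated here; with the longer constant from the video the output comes out to about 0.7071):

```python
import torch

x1 = torch.Tensor([2.0]).double();  x1.requires_grad = True
x2 = torch.Tensor([0.0]).double();  x2.requires_grad = True
w1 = torch.Tensor([-3.0]).double(); w1.requires_grad = True
w2 = torch.Tensor([1.0]).double();  w2.requires_grad = True
b  = torch.Tensor([6.88]).double(); b.requires_grad = True   # video uses a longer constant
n = x1*w1 + x2*w2 + b
o = torch.tanh(n)

print(o.data.item())          # forward pass, roughly 0.71
o.backward()

print('x2', x2.grad.item())   # roughly  0.5
print('w2', w2.grad.item())   # exactly  0.0
print('x1', x1.grad.item())   # roughly -1.5
print('w1', w1.grad.item())   # roughly  1.0
```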
So here I'm creating a tensor that has only a single element, 2.0, and then I'm casting it to be double, because Python is by default using double precision for its floating point numbers and I'd like everything to be identical. By default the data type of these tensors will be float32, a single-precision float, so I'm casting it to double so that we have float64, just like in Python. Once I cast to double, we get something similar to a Value of 2. The next thing I have to do is, because these are leaf nodes, by default PyTorch assumes that they do not require gradients, so I need to explicitly say that all of these nodes require gradients. Okay, so this is going to construct scalar-valued, one-element tensors and make sure that PyTorch knows they require gradients. By default requires_grad is set to false, by the way, for efficiency reasons: usually you would not want gradients for leaf nodes, like the inputs to the network, and this is just trying to be efficient in the most common cases. So once we've defined all of our values in PyTorch land, we can perform arithmetic just like we can in micrograd land, so this will just work, and then there's a torch.tanh also, and what we get back is a tensor again. Just like in micrograd, these tensor objects have a .data and a .grad attribute. The only difference here is that we need to call .item(), because otherwise PyTorch would give us back a tensor: .item() basically takes a single tensor of one element and returns that element, stripping out the tensor. So let me just run this, and hopefully this is going to print the forward pass, which is 0.707, and these will be the gradients, which hopefully are 0.5, 0, negative 1.5, and 1. If we just run this — there we go, 0.707, so the forward pass agrees, and then 0.5, 0, negative 1.5, and 1, so PyTorch agrees with us. And just to show you: o here is basically a tensor with a single element, and it's a double, and we can call .item() on it to get the single number out — that's what item does. And o is a tensor object, like I mentioned, and it's got a backward function just like we've implemented, and all of these also have a .grad — so x2, for example, has a grad and it's a tensor, and we can pop out the individual number with .item(). So basically Torch can do what we did in micrograd, as a special case when your tensors are all single-element tensors. But the big deal with PyTorch is that everything is significantly more efficient, because we are working with these tensor objects and can do lots of operations in parallel on all of these tensors. Otherwise, what we've built very much agrees with the API of PyTorch. Okay, so now that we have some machinery to build out pretty complicated mathematical expressions, we can also start building up neural nets, and as I mentioned, neural nets are just a specific class of mathematical expressions. So we're going to build out a neural net piece by piece, and eventually we'll build a two-layer multilayer perceptron, as it's called, and I'll show you exactly what that means. Let's start with a single individual neuron. We've implemented one here, but here I'm going to implement one that also subscribes to the PyTorch API and how it designs its neural network modules: just like we matched the API of PyTorch on the autograd side, we're going to try to do that on the neural network modules.
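Before moving on to the Neuron class, here is a hedged sketch of the PyTorch version just walked through. The specific input values mirror the lecture's earlier two-dimensional neuron example and are shown only for illustration:

```python
import torch

# leaf nodes as one-element double tensors, explicitly marked as requiring grads
x1 = torch.tensor([2.0]).double();  x1.requires_grad = True
x2 = torch.tensor([0.0]).double();  x2.requires_grad = True
w1 = torch.tensor([-3.0]).double(); w1.requires_grad = True
w2 = torch.tensor([1.0]).double();  w2.requires_grad = True
b  = torch.tensor([6.8813735870195432]).double(); b.requires_grad = True

n = x1*w1 + x2*w2 + b      # same arithmetic as in micrograd land
o = torch.tanh(n)

print(o.data.item())       # forward pass, ~0.7071
o.backward()               # backpropagation starting at o

print(x2.grad.item(), w2.grad.item(), x1.grad.item(), w1.grad.item())
# 0.5  0.0  -1.5  1.0
```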
So here's the class Neuron, and just for the sake of efficiency I'm going to copy-paste some sections that are relatively straightforward. The constructor will take the number of inputs to this neuron — how many inputs come to a neuron; this one, for example, has three inputs — and then it's going to create a weight that is some random number between negative one and one for every one of those inputs, and a bias that controls the overall trigger happiness of this neuron. Then we're going to implement a def __call__ of self and x, some input x, and really what we want to do here is w times x plus b, where w times x is a dot product specifically. Now, if you haven't seen __call__ before, let me just return 0.0 here for now. The way this works is: we can have an x, which is say 2.0, 3.0, then we can initialize a neuron that is two-dimensional, because these are two numbers, and then we can feed those two numbers into that neuron to get an output. When you use this notation n of x, Python will use __call__; currently __call__ just returns 0.0. Now we'd like to actually do the forward pass of this neuron instead. So first we need to multiply all of the elements of w with all of the elements of x, pairwise. The first thing we're going to do is zip up self.w and x: in Python, zip takes two iterators and creates a new iterator that iterates over tuples of their corresponding entries. So, just to show you, we can print this list — it still returns 0.0 here — and we see that these w's are paired up with the x's, w with x. Now what we're going to do is: for wi, xi in this zip, we want to multiply wi times xi, and then we want to sum all of that together to come up with an activation, and also add self.b on top; that's the raw activation. Then of course we need to pass that through a nonlinearity, so what we're going to be returning is act.tanh(), and here's out. So now we see that we are getting some outputs, and we get a different output from the neuron each time, because we are initializing different weights and biases. Then, to be a bit more efficient here: sum, by the way, takes a second optional parameter, the start, and by default the start is 0, so these elements of the sum are added on top of 0 to begin with; but actually we can just start with self.b, and then we have an expression like this — and the generator expression here must be parenthesized, by the way. There we go. Yep, so now we can forward a single neuron.
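A minimal sketch of the Neuron class just described, assuming the Value class built earlier in the lecture (including its tanh and its handling of plain Python numbers); the exact on-screen code may differ slightly:

```python
import random

class Neuron:
    def __init__(self, nin):
        # one random weight per input, plus a bias controlling trigger happiness
        self.w = [Value(random.uniform(-1, 1)) for _ in range(nin)]
        self.b = Value(random.uniform(-1, 1))

    def __call__(self, x):
        # w . x + b, then squash through the tanh nonlinearity
        act = sum((wi * xi for wi, xi in zip(self.w, x)), self.b)
        return act.tanh()

x = [2.0, 3.0]
n = Neuron(2)
print(n(x))   # a Value holding this neuron's output
```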
Next up we're going to define a layer of neurons. Here we have a schematic for an MLP, and we see that each layer — this is one layer — has a number of neurons, and they're not connected to each other, but all of them are fully connected to the input. So what is a layer of neurons? It's just a set of neurons evaluated independently. In the interest of time I'm going to do something fairly straightforward here: literally, a layer is just a list of neurons. And how many neurons do we have? We take that as an input argument: how many neurons do you want in your layer — the number of outputs of this layer. So we just initialize completely independent neurons with this given dimensionality, and when we call on the layer we just independently evaluate them. Now, instead of a neuron, we can make a layer of neurons — say three two-dimensional neurons — and we see that we get three independent evaluations of three different neurons. Right? Okay, and finally let's complete this picture and define an entire multilayer perceptron, or MLP. As we can see here, in an MLP these layers just feed into each other sequentially, so let's come here and I'm just going to copy the code in the interest of time. An MLP is very similar: we're taking the number of inputs as before, but now, instead of taking a single nout, the number of neurons in a single layer, we take a list of nouts, and this list defines the sizes of all the layers that we want in our MLP. So we just put them all together, iterate over consecutive pairs of these sizes, and create Layer objects for them, and in the call function we just call them sequentially. So that's an MLP, really. Let's actually re-implement this picture: we want three input neurons and then two layers of four and an output unit. So we want a three-dimensional input — say this is an example input — going into two layers of four and one output; this of course is an MLP, and there we go, that's a forward pass of an MLP. To make this a little bit nicer: you see how we have just a single element, but it's wrapped in a list, because Layer always returns lists; so for convenience, return out[0] if len(out) is exactly one, else return the full list. This will allow us to get a single value out at the last layer, which has only a single neuron. And finally we should be able to draw_dot of n of x, and as you might imagine, these expressions are now getting relatively involved — this is an entire MLP that we're defining, all the way to a single output. Obviously you would never differentiate these expressions on pen and paper, but with micrograd we will be able to backpropagate all the way through this, into the weights of all these neurons. So let's see how that works. Okay, let's create ourselves a very simple example dataset. This dataset has four examples, so we have four possible inputs into the neural net and four desired targets: we'd like the neural net to output 1.0 when it's fed this example, negative 1 when it's fed these examples, and 1 when it's fed this example. So it's a very simple binary classifier neural net, basically, that we would like here. Now let's see what the neural net currently thinks about these four examples. We can just get its predictions — basically n(x) for x in xs — and print them. These are the outputs of the neural net on those four examples: the first one is 0.91, but we'd like it to be 1, so we should push this one higher; this one says 0.88 and we want it to be negative 1; this one is 0.88 and we want it to be negative 1; and this one is 0.88 and we want it to be 1. So how do we make the neural net tune its weights to better predict the desired targets? The trick used in deep learning to achieve this is to calculate a single number that somehow measures the total performance of your neural net, and we call this single number the loss.
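Pulling this together, here is a sketch of the Layer and MLP classes just described, along with the tiny four-example dataset and its predictions. The Neuron class is the sketch from above; the specific x values are illustrative, while the targets match the description in the text.

```python
class Layer:
    def __init__(self, nin, nout):
        # nout completely independent neurons, each taking nin inputs
        self.neurons = [Neuron(nin) for _ in range(nout)]

    def __call__(self, x):
        outs = [n(x) for n in self.neurons]
        return outs[0] if len(outs) == 1 else outs

class MLP:
    def __init__(self, nin, nouts):
        # e.g. MLP(3, [4, 4, 1]): 3 inputs, two hidden layers of 4, one output
        sz = [nin] + nouts
        self.layers = [Layer(sz[i], sz[i+1]) for i in range(len(nouts))]

    def __call__(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

n = MLP(3, [4, 4, 1])

xs = [
    [2.0, 3.0, -1.0],   # illustrative inputs
    [3.0, -1.0, 0.5],
    [0.5, 1.0, 1.0],
    [1.0, 1.0, -1.0],
]
ys = [1.0, -1.0, -1.0, 1.0]   # desired targets, as described above

ypred = [n(x) for x in xs]    # the network's current predictions
```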
So the loss, first of all, is a single number that we're going to define that basically measures how well the neural net is performing. Right now we have the intuitive sense that it's not performing very well, because we're not very close to these targets, so the loss will be high, and we'll want to minimize the loss. In particular, in this case we're going to implement the mean squared error loss. What this is doing is: we iterate, for y ground truth and y output in zip of ys and ypred — pairing up the ground truths with the predictions, where zip iterates over tuples of them — and for each pair we subtract them and square the result. Let's first see what these individual loss components are: for each of the four examples we take the prediction and the ground truth, subtract them, and square. Because this one is so close to its target — 0.91 is almost 1 — subtracting them gives a very small number, something like negative 0.1, and then squaring it makes sure that, regardless of whether we are more negative or more positive, we always get a positive number. Instead of squaring we could also take, for example, the absolute value; we just need to discard the sign. You can see the expression is arranged so that you only get 0 exactly when y out is equal to y ground truth — when your prediction is exactly the target you get 0, and otherwise you get some other number. Here, for example, we are way off, and that's why the loss component is quite high; the more off we are, the greater the loss will be. We don't want high loss, we want low loss. And so the final loss will be just the sum of all of these numbers — roughly 0, plus roughly 0, plus 7 — so the loss should be about 7 here. Now we want to minimize the loss, because if the loss is low, then every one of the predictions is equal to its target; the lowest the loss can be is 0, and the greater it is, the worse the neural net is predicting. So now, if we do loss.backward(), something magical happened when I hit enter, and the magical thing is that we can look at n.layers — say the neurons of the first layer, the neuron at 0 — because remember an MLP has layers, which is a list, each layer has neurons, which is a list, and that gives us an individual neuron, which has some weights. So we can, for example, look at the weights at 0 — oops, it's not called weights, it's called w — and that's a Value, but now this Value also has a grad because of the backward pass. And we see that because this gradient, on this particular weight of this particular neuron of this particular layer, is negative, its influence on the loss is also negative: slightly increasing this particular weight would make the loss go down. And we actually have this information for every single one of our neurons and all of their parameters. Actually, it's also worth looking at the draw_dot of loss. Previously we looked at the draw_dot of a single neuron's forward pass, and that was already a large expression.
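As code, the loss computation just described looks roughly like this, continuing from the dataset sketch above:

```python
# mean squared error style loss over the four examples
loss = sum((yout - ygt)**2 for ygt, yout in zip(ys, ypred))
print(loss.data)   # a single number measuring total performance (~7 here)

loss.backward()    # backpropagation fills in .grad on every Value in the graph

# e.g. the gradient of the loss w.r.t. one particular weight of one neuron
print(n.layers[0].neurons[0].w[0].grad)
```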
But what is this expression? We actually forwarded every one of those four examples, and then we have the loss on top of them with the mean squared error, and so this is a really massive graph — oh my gosh — this graph that we built up now is kind of excessive. It's excessive because it has four forward passes of the neural net, one for every one of the examples, and then it has the loss on top, and it ends with the value of the loss, which for us is about 7.12. This loss will now backpropagate through all the forward passes, through every single intermediate value of the neural net, all the way back to, of course, the parameters — the weights, which are inputs to this expression — and also to these numbers here, these scalars that are the inputs to the neural net. So if we went around here we would probably find some of these examples — this 1.0, potentially maybe this 1.0, or some of the others — and you'll see that they all have gradients as well. The thing is, the gradients on the input data are not that useful to us, because the input data is fixed: it's given with the problem, and we're not going to be changing it or messing with it, even though we do have gradients for it. But some of these gradients will be for the neural network parameters, the w's and the b's, and those, of course, we do want to change. Okay, so now we want some convenience code to gather up all of the parameters of the neural net, so that we can operate on all of them simultaneously and nudge every one of them a tiny amount based on the gradient information. So let's collect the parameters of the neural net all in one list. Let's create a parameters of self that just returns self.w, which is a list, concatenated with a list of self.b — list plus list just gives you a list — so that's parameters of Neuron. I'm calling it this way because PyTorch also has a parameters on every single nn.Module, and it does exactly what we're doing here: it returns the parameter tensors; for us, it's the parameter scalars. Now, Layer is also a module, so it will have a parameters of self, and basically what we want to do is something like: params is here, and then for neuron in self.neurons we get neuron.parameters() and we params.extend with those — these are the parameters of this neuron, and we put them on top of params — and then we return params. But this is too much code, so there's a way to simplify it: return p for neuron in self.neurons for p in neuron.parameters(). It's a single list comprehension — in Python you can nest them like this and create the desired list — and these are identical, so we can take the longer version out. Then let's do the same for the MLP: def parameters of self, and return p for layer in self.layers for p in layer.parameters(), and that should be good. Now let me pop this out. Okay, so unfortunately we will have to re-initialize the network, because we just added functionality to this class: I want to get all of n.parameters(), but that's not going to work, because n is still an instance of the old class. So we do have to re-initialize the network, which will change some of the numbers, but let me do that so that we pick up the new API.
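A sketch of that parameters() plumbing, added onto the classes sketched earlier and mirroring PyTorch's nn.Module.parameters() convention:

```python
class Neuron:
    ...  # __init__ and __call__ as sketched earlier
    def parameters(self):
        return self.w + [self.b]

class Layer:
    ...  # __init__ and __call__ as sketched earlier
    def parameters(self):
        return [p for neuron in self.neurons for p in neuron.parameters()]

class MLP:
    ...  # __init__ and __call__ as sketched earlier
    def parameters(self):
        return [p for layer in self.layers for p in layer.parameters()]
```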
We can now do n.parameters(), and these are all the weights and biases inside the entire neural net; in total this MLP has 41 parameters, and now we'll be able to change them. If we recalculate the loss here, we see that unfortunately we have slightly different predictions and a slightly different loss, but that's okay. Okay, so we see that this neuron's gradient is slightly negative, and we can also look at its data right now, which is 0.85; this is the current value of this neuron, and this is its gradient on the loss. What we want to do now is iterate, for every p in n.parameters() — so for all 41 parameters of this neural net — and change p.data slightly, according to the gradient information. This will be a tiny update in this gradient descent scheme. In gradient descent we think of the gradient as a vector pointing in the direction of increased loss, and so we modify p.data by a small step size in the direction of the gradient; the step size, as an example, could be a very small number like 0.01, times p.grad. But we have to think through some of the signs here. In particular, working with this specific example: if we just left it like this, then this neuron's value would currently be changed by a tiny amount of the gradient. The gradient is negative, so the value of this neuron would go slightly down — it would become like 0.84 or something. But if this neuron's value goes lower, that would actually increase the loss, because the derivative of the loss with respect to this neuron is negative: increasing this weight makes the loss go down, so increasing it is what we want to do, not decreasing it. Basically, what we are missing here is a negative sign, and that's because we want to minimize the loss, not maximize it. The other interpretation, as I mentioned, is that you can think of the gradient vector — the vector of all the gradients — as pointing in the direction of increasing loss, but we want to decrease the loss, so we want to go in the opposite direction. You can convince yourself that this is the right thing here, with the negative, because we want to minimize the loss. So if we nudge all the parameters by a tiny amount, we'll see that this data changes a little bit. Now this neuron's value is a tiny amount greater — it was 0.85, now it's 0.857 — and that's a good thing, because slightly increasing this neuron's data makes the loss go down, according to the gradient, so the correct thing has happened sign-wise. Now what we would expect, of course, is that because we've changed all these parameters, the loss should have gone down a bit, so we want to re-evaluate the loss. This is just the data definition — that hasn't changed — but the forward pass of the network we can recalculate, and actually let me do it outside here so that we can compare the two loss values. If I recalculate the loss, we'd expect the new loss to be slightly lower than this number; hopefully what we get now is a tiny bit lower than 4.854 — and it's 4.36. And remember, the way we've arranged this, low loss means our predictions are matching the targets, so our predictions are now probably slightly closer to the targets.
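For reference, the update just described as a short snippet — the 0.01 step size is the example value mentioned above, and the negative sign is the important part:

```python
# gradient descent: nudge every parameter against its gradient
for p in n.parameters():
    p.data += -0.01 * p.grad
```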
And now all we have to do is iterate this process. So again we do the forward pass, this is the loss, now we can do loss.backward(), and we can do a step, and now we should have a slightly lower loss: 4.36 goes to 3.9. And okay, we've done the forward pass, here's the backward pass, and now the loss is 3.66, then 3.47, and you get the idea: we just continue doing this, and this is gradient descent. We're just iteratively doing forward pass, backward pass, update — forward pass, backward pass, update — and the neural net is improving its predictions. If we look at ypred now, we see that this value should be getting closer to 1, so this one should be getting more positive, these should be getting more negative, and this one should also be getting more positive. So let's just iterate this a few more times — actually, we'll be able to afford to go a bit faster, let's try a slightly higher learning rate — whoops, okay, there we go, so now we're at 0.31. If you go too fast, by the way — if you try to take too big of a step — you may actually overstep, out of overconfidence, because again, remember, we don't actually know the loss function exactly: the loss function has all kinds of structure, and we only know about the very local dependence of all these parameters on the loss. If we step too far, we may step into a part of the loss that is completely different, and that can destabilize training and even make your loss blow up. So the loss is now 0.04, so the predictions should be really quite close — let's take a look: you see how this is almost 1, almost negative 1, almost 1. We can continue going — yep, backward, update — oop, there we go, so we went way too fast and actually overstepped; we got too eager. Where are we now? Okay, something like 7e-9, so this is very, very low loss, and the predictions are basically perfect. So somehow we were doing way too big updates and briefly blew up, but then somehow we ended up getting into a really good spot. Usually this learning rate and the tuning of it is a subtle art: if it's too low, you're going to take way too long to converge, but if it's too high, the whole thing gets unstable and you might even explode the loss, depending on your loss function. So finding the step size to be just right is sometimes a pretty subtle art when you're using vanilla gradient descent, but we happened to land in a good spot. We can look at n.parameters() and see that this setting of weights and biases makes our network predict the desired targets very, very closely — and basically we've successfully trained a neural net. Okay, let's make this a tiny bit more respectable and implement an actual training loop. What that looks like is: here's the initialization, that stays; and then, for k in range, we're going to take a bunch of steps. First you do the forward pass and evaluate the loss — let's re-initialize the neural net from scratch, and here's the data — then we do the backward pass, and then we do an update; that's gradient descent. Then we should be able to iterate this and print the current step and the current loss — let's just print the step number and the loss — and that should be it. As for the learning rate, 0.01 is a little too small, and 0.1 we saw is a little bit dangerous because it can be too high,
so let's go somewhere in between, and let's optimize not for 10 steps but for, say, 20 steps. Let me erase all of this junk and run the optimization — and you see how we've converged more slowly, in a more controlled manner, and got to a loss that is very low, so I expect ypred to be quite good. There we go. And that's it. Okay, so this is kind of embarrassing, but we actually have a really terrible bug in here. It's a subtle bug, and it's a very common bug, and I can't believe I've done it for the twentieth time in my life, especially on camera. I could have reshot the whole thing, but I think it's pretty funny, and you get to appreciate a bit what working with neural nets is sometimes like. We are guilty of a common bug: I actually tweeted the most common neural net mistakes a long time ago, and I'm not going to explain any of them except that we are guilty of number three, "you forgot to zero grad before .backward()". What is that? Basically what's happening — and it's a subtle bug, and I'm not sure if you saw it — is that all of these weights have a .data and a .grad, and .grad starts at zero; then we do backward and fill in the gradients, and then we do an update on the data, but we don't flush the grad — it stays there. So when we do the second forward pass and do backward again, remember that all the backward operations do a plus-equals on the grad, and so these gradients just add up and never get reset to zero. Basically, we didn't zero the grad. Here's how we zero grad: before backward, we iterate over all the parameters and make sure that p.grad is set back to zero, just like it is in the constructor. Remember, all the way back here, for all these Value nodes the grad starts at zero, and all the backward passes do a plus-equals on that grad; we need to reset these grads to zero so that when we do backward, all of them start at zero and the actual backward pass accumulates the loss derivatives into the grads. This is zero_grad in PyTorch, and with it we get a slightly different optimization. Let's reset the neural net — the data is the same — and this is now, I think, correct, and we get a much slower descent. We still end up with pretty good results, and we can continue a bit more to get the loss lower, and lower, and lower. So the only reason the previous, extremely buggy version worked is that this is a very, very simple problem, and it's very easy for this neural net to fit this data; the grads ended up accumulating, which effectively gave us a massive step size and made us converge extremely fast. But now we have to do more steps to get to very low values of loss and get ypred to be really good; we can try a slightly bigger step — yeah, we're getting closer and closer to 1, negative 1, and 1. So working with neural nets is sometimes tricky, because you may have lots of bugs in the code and your network might actually work, just like ours worked; but chances are that, had we had a more complex problem, this bug would have made us not optimize the loss very well, and we were only able to get away with it because the problem is very simple.
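A sketch of the corrected training loop described above, with the zero-grad step in place. The 20 steps and the 0.05 step size are illustrative choices, somewhere between the 0.01 and 0.1 discussed above; the dataset and MLP are the sketches from earlier.

```python
for k in range(20):

    # forward pass
    ypred = [n(x) for x in xs]
    loss = sum((yout - ygt)**2 for ygt, yout in zip(ys, ypred))

    # backward pass: flush the grads first, because backward accumulates with +=
    for p in n.parameters():
        p.grad = 0.0
    loss.backward()

    # update: gradient descent, stepping against the gradient
    for p in n.parameters():
        p.data += -0.05 * p.grad

    print(k, loss.data)
```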
So let's now bring everything together and summarize what we learned. What are neural nets? Neural nets are mathematical expressions — fairly simple ones, in the case of a multilayer perceptron — that take the data as input and also take the weights, the parameters of the neural net, as input. It's a mathematical expression for the forward pass, followed by a loss function, and the loss function tries to measure the accuracy of the predictions: usually the loss will be low when your predictions match your targets, or when the network is basically behaving well. We arrange the loss function so that when the loss is low, the network is doing what you want it to do on your problem. Then we backward the loss, using backpropagation to get the gradient, and then we know how to tune all the parameters to decrease the loss locally. But we have to iterate that process many times, in what's called gradient descent: we simply follow the gradient information, and that minimizes the loss, and the loss is arranged so that when it is minimized, the network is doing what you want it to do. And yeah, so we just have a blob of neural stuff, and we can make it do arbitrary things, and that's what gives neural nets their power. This is a very tiny network with 41 parameters, but you can build significantly more complicated neural nets with billions — at this point almost trillions — of parameters; it's a massive blob of simulated neural tissue, roughly speaking, and you can make it do extremely complex problems. These neural nets then have all kinds of very fascinating emergent properties when you try to make them do significantly hard problems, as in the case of GPT, for example. There we have massive amounts of text from the internet, and we're trying to get a neural net to take a few words and predict the next word in a sequence — that's the learning problem — and it turns out that when you train this on all of the internet, the neural net has really remarkable emergent properties. That neural net would have hundreds of billions of parameters, but it works on fundamentally these exact same principles: the neural net of course would be a bit more complex, but otherwise evaluating the gradient is there and would be identical, and the gradient descent would be there and would be basically identical. People usually use slightly different updates — this was a very simple stochastic gradient descent update — and the loss function would not be mean squared error; they would be using something called the cross-entropy loss for predicting the next token. So there are a few more details, but fundamentally the neural network setup and training is identical and pervasive, and now you understand intuitively how that works under the hood. In the beginning of this video I told you that by the end of it you would understand everything in micrograd, and that we'd slowly build it up; let me briefly prove that to you. I'm going to step through all the code that is in micrograd as of today — actually, some of the code may change by the time you watch this video, because I intend to continue developing micrograd — but let's look at what we have so far, at least. When you go to engine.py, which has the Value, everything here you should mostly recognize: we have the data and grad attributes, we have the backward function, we have the previous set of children and the operation that produced this value, and we have addition, multiplication, and raising to a scalar power.
We have the ReLU nonlinearity, which is a slightly different type of nonlinearity than the tanh we used in this video. Both of them are nonlinearities, and notably tanh is not actually present in micrograd as of right now, but I intend to add it later. The backward is identical, and then all of these other operations are built up on top of the operations here, so Value should be very recognizable, except for the nonlinearity used in this video. There's no massive difference between ReLU and tanh and sigmoid and these other nonlinearities; they're all roughly equivalent and can be used in MLPs. I used tanh because it's a bit smoother, and because it's a little more complicated than ReLU it stressed the local gradients and working with those derivatives a bit more, which I thought would be useful (a sketch of what a ReLU op can look like is included right after this section). Then nn.py is the neural networks library, as I mentioned, and you should recognize the identical implementations of Neuron, Layer, and MLP. Notably we also have a class Module here, a parent class of all these modules; I did that because there's an nn.Module class in PyTorch, so this exactly matches that API, and nn.Module in PyTorch also has a zero_grad, which I refactored out here. So that's the nn piece of micrograd, really. Then there's a test, which basically creates two chunks of code, one in micrograd and one in PyTorch, and makes sure that the forward and the backward pass agree identically, for a slightly less complicated expression and a slightly more complicated expression; everything agrees, so we agree with PyTorch on all of these operations. And finally there's a demo.ipynb, which is a bit more complicated binary classification demo than the one I covered in this lecture. We only had a tiny dataset of four examples; here we have a more complicated example with lots of blue points and lots of red points, and we're again trying to build a binary classifier, to distinguish two-dimensional points as red or blue. It's a bigger MLP, and the loss is a bit more complicated because it supports batches. Because our dataset was so tiny, we always did a forward pass on the entire dataset of four examples, but when your dataset is, say, a million examples, what we usually do in practice is pick out some random subset — we call that a batch — and then we only process the batch: forward, backward, and update, so we don't have to forward the entire training set. So this supports batching, because there are a lot more examples here. We do a forward pass, and the loss is slightly different: this is a max-margin loss that I implement here. The one we used was the mean squared error loss, because it's the simplest; there's also the binary cross-entropy loss. All of them can be used for binary classification and don't make too much of a difference in the simple examples we've looked at so far. There's also something called L2 regularization used here; this has to do with generalization of the neural net and controls overfitting in a machine learning setting. I did not cover these concepts in this video — potentially later. And the training loop you should recognize: forward, backward with zero grad, and update, and so on.
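Since tanh isn't in the repo but ReLU is, here is a hedged sketch of what a ReLU op for a micrograd-style Value could look like; the actual implementation in engine.py may differ in its details.

```python
def relu(self):
    # forward: max(0, x)
    out = Value(self.data if self.data > 0 else 0.0, (self,), 'ReLU')

    def _backward():
        # gradient flows through only where the output (and thus input) was positive
        self.grad += (out.data > 0) * out.grad
    out._backward = _backward

    return out
```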
You'll notice that in the update here, the learning rate is scaled as a function of the number of iterations, and it shrinks; this is something called learning rate decay. In the beginning you have a high learning rate, and as the network stabilizes near the end, you bring the learning rate down to get some of the fine details right. At the end we see the decision surface of the neural net, and we see that it learns to separate out the red and the blue areas based on the data points. So that's the slightly more complicated example in the demo.ipynb, which you're free to go over — but yeah, as of today, that is micrograd. I also wanted to show you a little bit of the real stuff, so that you get to see how this is actually implemented in a production-grade library like PyTorch. In particular, I wanted to find and show you the backward pass for tanh in PyTorch. Here in micrograd we see that the backward pass for tanh is 1 minus t squared, where t is the output of the tanh of x, times out.grad, which is the chain rule; so we're looking for something that looks like this. Now, I went to PyTorch, which has an open-source GitHub code base, and I looked through a lot of its code, and honestly I spent about 15 minutes and I couldn't find tanh. That's because these libraries, unfortunately, grow in size and entropy: if you just search for tanh you get apparently 2,800 results in 406 files, and I don't know what all these files are doing or why there are so many mentions of tanh. Unfortunately these libraries are quite complex; they're meant to be used, not really inspected. Eventually I did stumble on someone who was trying to change the tanh backward code for some reason, and someone there pointed to the CPU kernel and the CUDA kernel for tanh backward. Which one runs depends on whether you're using PyTorch on the CPU device or on the GPU — these are different devices, and I haven't covered this. But this is the tanh backward kernel for CPU, and the reason it's so large is that, number one, there's a branch for if you're using a complex type, which we haven't even talked about, and for the specific data type of bfloat16, which we also haven't talked about; and then, if you're not, this is the kernel, and deep in here we see something that resembles our backward pass: they have a times 1 minus b squared, so this b must be the output of the tanh, and this a is the out.grad. So here we found it, deep inside PyTorch, at this location — for some reason inside a binary ops kernel, even though tanh is not actually a binary op. And then this is the GPU kernel: in the non-complex branch, here we go, it's a one-liner. So we did find it, but basically, unfortunately, these code bases are very large; micrograd is very, very simple, but if you want to find the code for the real stuff, you'll find that difficult. I also wanted to show you a whole example where PyTorch shows you how to register a new type of function that you want to add to PyTorch as a Lego building block. Here, if you want to add, for example, a Legendre polynomial 3, this is how you can do it: you register it, and then you have to tell PyTorch how to forward your new function and how to backward through it. As long as you can do the forward pass of this little function piece that you want to add, and as long as you know the local derivatives — the local gradients, which you implement in the backward — PyTorch will be able to backpropagate through your function.
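As a small hedged sketch of that mechanism: the PyTorch docs example mentioned above uses a Legendre polynomial, but here the same pattern is shown with a hand-written tanh, to stay close to the lecture:

```python
import torch

class MyTanh(torch.autograd.Function):

    @staticmethod
    def forward(ctx, x):
        t = torch.tanh(x)
        ctx.save_for_backward(t)          # stash what backward will need
        return t

    @staticmethod
    def backward(ctx, grad_output):
        (t,) = ctx.saved_tensors
        return (1 - t**2) * grad_output   # local gradient times the chain rule

x = torch.tensor([0.8814], requires_grad=True)
o = MyTanh.apply(x)
o.backward()
print(o.item(), x.grad.item())            # ~0.7071 and ~0.5
```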
And then you can use this as a Lego block in a larger Lego castle of all the different Lego blocks that PyTorch already has. That's the only thing you have to tell PyTorch, and everything will just work; you can register new types of functions in this way, following this example. And that is everything I wanted to cover in this lecture. I hope you enjoyed building out micrograd with me, and I hope you found it interesting and insightful. I will post a lot of the links related to this video in the video description below, and I will also probably post a link to a discussion forum or discussion group where you can ask questions related to this video, and then I, or someone else, can answer them. I may also do a follow-up video that answers some of the most common questions. But for now, that's it. I hope you enjoyed it; if you did, then please like and subscribe, so that YouTube knows to feature this video to more people. And that's it for now, I'll see you later. (Outtakes:) Now here's the problem, we know dL by — wait, what is the problem? And that's everything I wanted to cover in this lecture, so I hope you enjoyed us building out micrograd. Micrograd. Okay, now let's do the exact same thing for multiply, because we can't do something like a times two — oops. I know what happened there.
[{"start": 0.0, "end": 5.86, "text": " Hello, my name is Andre and I've been training deep neural networks for a bit more than a decade and in this lecture"}, {"start": 5.86, "end": 9.18, "text": " I'd like to show you what neural network training looks like under the hood"}, {"start": 9.540000000000001, "end": 14.44, "text": " So in particular we are going to start with a blank super notebook and by the end of this lecture"}, {"start": 14.44, "end": 19.580000000000002, "text": " We will define and train in neural net and you'll get to see everything that goes on under the hood and exactly"}, {"start": 20.14, "end": 22.14, "text": " Sort of how that works and then to it in a little"}, {"start": 22.54, "end": 25.82, "text": " Now specifically what I would like to do is I would like to take you through"}, {"start": 25.82, "end": 31.86, "text": " Building of micrograd now micrograd is this library that I released on GitHub about two years ago"}, {"start": 31.86, "end": 36.46, "text": " But at the time I only uploaded this source code and you'd have to go in by yourself and really"}, {"start": 37.26, "end": 39.26, "text": " Figure out how it works"}, {"start": 39.26, "end": 43.78, "text": " So in this lecture I will take you through it step-by-step and kind of comment on all the pieces of it"}, {"start": 43.78, "end": 46.3, "text": " So what's micrograd and why is it interesting?"}, {"start": 47.260000000000005, "end": 49.260000000000005, "text": " cute"}, {"start": 49.74, "end": 52.06, "text": " micrograd is basically an auto-grad engine"}, {"start": 52.06, "end": 57.18, "text": " Auto-grad is short for automatic gradient and really what it does is it implements back propagation"}, {"start": 57.38, "end": 62.1, "text": " Now back propagation is this algorithm that allows you to efficiently evaluate the gradient of"}, {"start": 63.46, "end": 67.7, "text": " Some kind of a loss function with respect to the weights of a neural network"}, {"start": 67.7, "end": 73.86, "text": " And what that allows us to do then is we can editorively tune the weights of that neural network to minimize the loss function"}, {"start": 73.86, "end": 75.86, "text": " And therefore improve the accuracy of the network"}, {"start": 75.86, "end": 83.1, "text": " So back propagation would be at the mathematical core of any modern deep neural network library like say PyTorch or Jax"}, {"start": 83.86, "end": 88.86, "text": " So the functionality of micrograd is I think best illustrated by an example. 
So if we just scroll down here"}, {"start": 89.7, "end": 93.86, "text": " You'll see that micrograd basically allows you to build out mathematical expressions and"}, {"start": 95.22, "end": 100.14, "text": " Here what we are doing is we have an expression that we're building out where you have two inputs a and b"}, {"start": 100.14, "end": 105.66, "text": " And you'll see that a and b are negative 4 and 2 but we are wrapping those"}, {"start": 106.26, "end": 110.54, "text": " Values into this value object that we are going to build out as part of micrograd"}, {"start": 110.94, "end": 117.68, "text": " So this value object will wrap the numbers themselves and then we are going to build out a mathematical expression here where"}, {"start": 118.3, "end": 119.98, "text": " a and b are"}, {"start": 119.98, "end": 123.34, "text": " Transformed into cd and eventually e f and g"}, {"start": 123.34, "end": 128.52, "text": " And I'm showing some of the function some of the functionality of micrograd and the operations that it supports"}, {"start": 128.52, "end": 134.12, "text": " So you can add two value objects. You can multiply them. You can raise them to a constant power"}, {"start": 134.36, "end": 137.48000000000002, "text": " You can also by one the gate squash at zero"}, {"start": 138.68, "end": 142.16000000000003, "text": " Square divide by constant divide by it etc"}, {"start": 142.52, "end": 149.56, "text": " And so we're building out an expression graph with with these two inputs a and b and we're creating out the value of g"}, {"start": 149.56, "end": 150.76000000000002, "text": " and"}, {"start": 150.76000000000002, "end": 152.60000000000002, "text": " micrograd will in the background"}, {"start": 152.60000000000002, "end": 154.84, "text": " Build out this entire mathematical expression"}, {"start": 154.84, "end": 161.16, "text": " So it will for example know that c is also a value c was a result of an addition operation and"}, {"start": 161.8, "end": 162.84, "text": " the"}, {"start": 162.84, "end": 169.72, "text": " Child nodes of c are a and b because the and all maintain pointers to a and b value objects"}, {"start": 169.8, "end": 172.8, "text": " So we'll basically know exactly how all of this is laid out and"}, {"start": 173.4, "end": 178.04, "text": " Then not only can we do what we call the forward pass where we actually look at the value of g"}, {"start": 178.04, "end": 179.64000000000001, "text": " Of course, that's pretty straightforward"}, {"start": 179.64000000000001, "end": 182.8, "text": " We will access that using the dot data attribute"}, {"start": 182.8, "end": 188.56, "text": " And so the output of the forward pass the value of g is 24.7 it turns out"}, {"start": 188.8, "end": 194.4, "text": " But the big deal is that we can also take this g value object and we can call dot backward"}, {"start": 194.96, "end": 198.64000000000001, "text": " And this will basically initialize back propagation at the node g"}, {"start": 199.84, "end": 206.0, "text": " And what back propagation is going to do is it's going to start at g and it's going to go backwards through that expression graph"}, {"start": 206.24, "end": 209.68, "text": " and it's going to recursively apply the chain rule from calculus and"}, {"start": 209.68, "end": 217.6, "text": " And what that allows us to do then is we're going to evaluate basically the derivative of g with respect to all the internal nodes"}, {"start": 218.24, "end": 222.64000000000001, "text": " Like ed and c, but also with respect to the inputs 
a and b"}, {"start": 223.28, "end": 228.48000000000002, "text": " And then we can actually query this derivative of g with respect to a for example"}, {"start": 228.48000000000002, "end": 233.76000000000002, "text": " That's a dot grad in this case it happens to be 138 and a derivative of g with respect to b"}, {"start": 234.32, "end": 236.88, "text": " Which also happens to be here 645"}, {"start": 236.88, "end": 244.48, "text": " And this derivative we'll see soon is very important information because it's telling us how a and b are affecting g"}, {"start": 245.04, "end": 250.0, "text": " Through this mathematical expression. So in particular a dot grad is 138"}, {"start": 250.48, "end": 254.07999999999998, "text": " So if we slightly nudge a and make it slightly larger"}, {"start": 254.96, "end": 260.15999999999997, "text": " 138 is telling us that g will grow in the slope of that growth is going to be 138"}, {"start": 260.96, "end": 264.08, "text": " And the slope of growth of b is going to be 645"}, {"start": 264.08, "end": 270.32, "text": " So that's going to tell us about how g will respond if a and b get tweaked a tiny amount in a positive direction"}, {"start": 271.28, "end": 273.28, "text": " Okay"}, {"start": 273.52, "end": 279.52, "text": " Now you might be confused about what this expression is that we built out here and this expression by the way is completely meaningless"}, {"start": 279.84, "end": 284.4, "text": " I just made it up. I'm just flexing about the kinds of operations that are supported by micrograd"}, {"start": 284.88, "end": 287.12, "text": " What we actually really care about are neural networks"}, {"start": 287.52, "end": 291.52, "text": " But it turns out that neural networks are just mathematical expressions just like this one"}, {"start": 291.52, "end": 293.84, "text": " But actually a slightly bit less crazy even"}, {"start": 294.47999999999996, "end": 295.03999999999996, "text": " um"}, {"start": 295.03999999999996, "end": 297.03999999999996, "text": " Neural networks are just a mathematical expression"}, {"start": 297.2, "end": 303.59999999999997, "text": " They take the input data as an input and they take the weights of a neural network as an input and some mathematical expression"}, {"start": 303.91999999999996, "end": 308.4, "text": " And the output are your predictions of your neural net or the loss function. We'll see this in a bit"}, {"start": 309.03999999999996, "end": 313.28, "text": " But basically neural networks teaches us happen to be a certain class of mathematical expressions"}, {"start": 313.91999999999996, "end": 316.71999999999997, "text": " But back propagation is actually significantly more general"}, {"start": 316.72, "end": 321.76000000000005, "text": " It doesn't actually care about neural networks at all. 
It only tells us about arbitrary mathematical expressions"}, {"start": 322.08000000000004, "end": 326.08000000000004, "text": " And then we happen to use that machinery for training of neural networks"}, {"start": 326.40000000000003, "end": 332.0, "text": " Now one more note I would like to make at the stage is that as you see here micrograd is a scalar valued auto-grad engine"}, {"start": 332.32000000000005, "end": 336.48, "text": " So it's working on the you know level of individual scalers like negative four and two"}, {"start": 336.88000000000005, "end": 341.28000000000003, "text": " And we're taking neural nets and we're breaking them down all the way to these atoms of individual scalers"}, {"start": 341.36, "end": 344.32000000000005, "text": " And all the little pluses and times and it's just excessive"}, {"start": 344.32, "end": 347.68, "text": " And so obviously you would never be doing any of this in production"}, {"start": 348.0, "end": 353.2, "text": " It's really just for them for pedagogical reasons because it allows us to not have to deal with these and dimensional"}, {"start": 353.28, "end": 356.48, "text": " tensors that you would use in modern deep neural network library"}, {"start": 356.88, "end": 363.84, "text": " So this is really uh done so that you understand and refactor out back propagation and chain rule and understanding of your"}, {"start": 364.08, "end": 365.03999999999996, "text": " training"}, {"start": 365.03999999999996, "end": 367.44, "text": " And then if you actually want to train bigger networks"}, {"start": 367.6, "end": 371.92, "text": " You have to be using these tensors, but none of the math changes. This is done purely for efficiency"}, {"start": 371.92, "end": 373.92, "text": " We are basically taking scale value"}, {"start": 374.40000000000003, "end": 377.04, "text": " All the scale values we're packaging them up into tensors"}, {"start": 377.28000000000003, "end": 381.68, "text": " Which are just arrays of these scalers and then because we have these large arrays"}, {"start": 382.0, "end": 387.76, "text": " We're making operations on those large arrays that allows us to take advantage of the parallelism in a computer and"}, {"start": 388.16, "end": 391.92, "text": " All those operations can be done in parallel and then the whole thing runs faster"}, {"start": 392.32, "end": 395.12, "text": " But really none of the math changes and that done purely for efficiency"}, {"start": 395.52000000000004, "end": 399.04, "text": " So I don't think that it's pedagogically useful to be dealing with tensors from scratch"}, {"start": 399.04, "end": 403.84000000000003, "text": " Uh, and I think and that's why I fundamentally wrote micrograd because you can understand how things work"}, {"start": 404.24, "end": 407.36, "text": " Uh, at the fundamental level and then you can speed it up later"}, {"start": 408.08000000000004, "end": 413.52000000000004, "text": " Okay, so here's the fun part my claim is that micrograd is what you need to train your networks and everything else"}, {"start": 413.6, "end": 414.8, "text": " It's just efficiency"}, {"start": 414.88, "end": 420.48, "text": " So you'd think that micrograd would be a very complex piece of code and that turns out to not be the case"}, {"start": 421.12, "end": 426.64000000000004, "text": " So if we just go to micrograd and you will see that there's only two files here in micrograd"}, {"start": 426.64, "end": 429.91999999999996, "text": " This is the actual engine. 
It doesn't know anything about neural nets"}, {"start": 430.24, "end": 436.64, "text": " And this is the entire neural nets library on top of micrograd. So engine and nn.pi"}, {"start": 437.28, "end": 439.28, "text": " so the actual back propagation"}, {"start": 439.36, "end": 441.2, "text": " quadrgrad engine"}, {"start": 441.28, "end": 444.24, "text": " That gives you the power of neural networks is literally"}, {"start": 446.24, "end": 448.96, "text": " 100 lines of code of like very simple python"}, {"start": 450.08, "end": 453.28, "text": " Which we'll understand by the end of this lecture and then nn.pi"}, {"start": 453.28, "end": 457.11999999999995, "text": " This neural network library built on top of the autograd engine"}, {"start": 457.91999999999996, "end": 460.0, "text": " Um, is like a joke. It's like"}, {"start": 460.71999999999997, "end": 467.28, "text": " We have to define what is in neuron and then we have to define what is the layer of neurons and then we define what is a multilateral perceptron"}, {"start": 467.35999999999996, "end": 471.52, "text": " Which is just a sequence of layers of neurons and so it's just a total joke"}, {"start": 472.15999999999997, "end": 479.67999999999995, "text": " So basically um, there's a lot of power that comes from only 115 lines of code and that's only need to understand to understand"}, {"start": 479.68, "end": 485.2, "text": " You know, or training and everything else is just efficiency and of course there's a lot too efficiency"}, {"start": 485.76, "end": 487.76, "text": " But fundamentally that's all that's happening"}, {"start": 487.84000000000003, "end": 492.40000000000003, "text": " Okay, so now let's dive right in and implement micrograd step by step the first thing I'd like to do is"}, {"start": 492.40000000000003, "end": 494.72, "text": " I'd like to make sure that you have a very good understanding"}, {"start": 494.96000000000004, "end": 499.2, "text": " Intuitively of what a derivative is and exactly what information it gives you"}, {"start": 499.76, "end": 504.4, "text": " So let's start with some basic imports that I copy-based in every Jupyter notebook always"}, {"start": 505.44, "end": 507.44, "text": " And let's define the function"}, {"start": 507.44, "end": 510.88, "text": " scalar valid function f of x as follows"}, {"start": 511.52, "end": 513.12, "text": " So I just make this up randomly"}, {"start": 513.12, "end": 517.84, "text": " I just want to scale a valid function that takes a single scalar x and returns a single scalar y"}, {"start": 518.64, "end": 523.2, "text": " And we can call this function of course so we can pass it say 3.0 and get 20 back"}, {"start": 524.0, "end": 526.64, "text": " Now we can also plot this function to get a sense of its shape"}, {"start": 527.04, "end": 531.36, "text": " You can tell from the mathematical expression that this is probably a parabola it's quadratic"}, {"start": 531.92, "end": 535.28, "text": " And so if we just create a set of um"}, {"start": 535.28, "end": 536.8, "text": " um"}, {"start": 537.52, "end": 543.1999999999999, "text": " Skip skill the values that we can feed in using for example a range from negative 5 to 5 and steps up 0.25"}, {"start": 544.0799999999999, "end": 550.64, "text": " So this is so x is just from negative 5 to 5 not including 5 in steps of 0.25"}, {"start": 551.52, "end": 554.0799999999999, "text": " And we can actually call this function on this non-py array as well"}, {"start": 554.24, "end": 556.9599999999999, "text": " So we get a set of 
y's if we call f on x's and"}, {"start": 557.8399999999999, "end": 559.8399999999999, "text": " These y's are basically"}, {"start": 560.56, "end": 562.24, "text": " also applying"}, {"start": 562.24, "end": 567.6800000000001, "text": " function on every one of these elements independently and we can plot this using math plotlib"}, {"start": 568.0, "end": 571.84, "text": " So plt.plot x's and y's and we get nice parabola"}, {"start": 572.32, "end": 578.64, "text": " So previously here we fed in 3.0 somewhere here and we received 20 back which is here the y coordinate"}, {"start": 579.12, "end": 584.88, "text": " So now I'd like to think through what is the derivative of this function at any single input point x"}, {"start": 585.6800000000001, "end": 589.28, "text": " Right, so what is the derivative at different points x of this function"}, {"start": 589.28, "end": 592.88, "text": " Now if you remember back to your calculus class you've probably derived derivatives"}, {"start": 593.28, "end": 598.8, "text": " So we take this mathematical expression 3x square minus 4x plus 5 and you would write out on a piece of paper"}, {"start": 598.8, "end": 605.8399999999999, "text": " And you would you know apply the product rule and all the other rules and derive the mathematical expression of the great derivative of the original function"}, {"start": 606.0799999999999, "end": 608.88, "text": " And then you could plug in different taxes and see what the derivative is"}, {"start": 609.8399999999999, "end": 615.92, "text": " We're not going to actually do that because no one in neural networks actually writes out the expression for the neural net"}, {"start": 615.92, "end": 621.04, "text": " It would be a massive expression um it would be you know thousands since thousands of terms no one actually"}, {"start": 621.4399999999999, "end": 623.4399999999999, "text": " Derives that derivative of course"}, {"start": 623.52, "end": 626.3199999999999, "text": " And so we're not going to take this kind of like symbolic approach"}, {"start": 626.3199999999999, "end": 628.3199999999999, "text": " Instead what I'd like to do is I'd like to look at the"}, {"start": 628.3199999999999, "end": 632.4799999999999, "text": " Definition of derivative and just make sure that we really understand what derivative is measuring"}, {"start": 632.4799999999999, "end": 634.0799999999999, "text": " What is telling you about the function?"}, {"start": 634.88, "end": 636.88, "text": " And so if we just look up derivative"}, {"start": 642.4, "end": 643.68, "text": " We see that um"}, {"start": 643.68, "end": 648.0, "text": " Okay, so this is not a very good definition of derivative. 
This is a definition of what it means to be differentiable"}, {"start": 648.64, "end": 655.3599999999999, "text": " But if you remember from your calculus it is the limit sh goes to 0 of f of x plus h minus f of x over h"}, {"start": 655.8399999999999, "end": 659.52, "text": " So basically what it's saying is if you slightly bump up"}, {"start": 660.16, "end": 664.0799999999999, "text": " You're at some point x that you're interested in or hey and if you slightly bump up"}, {"start": 664.7199999999999, "end": 667.12, "text": " You know you slightly increase it by small number h"}, {"start": 668.16, "end": 671.1999999999999, "text": " How does the function respond with what sensitivity does it respond?"}, {"start": 671.2, "end": 678.1600000000001, "text": " Where's the slope at that point does the function go up or does it go down and by how much and that's the slope of that function"}, {"start": 678.1600000000001, "end": 683.2800000000001, "text": " The the slope of that response at that point and so we can basically evaluate"}, {"start": 684.1600000000001, "end": 690.48, "text": " The derivative here numerically by taking a very small h of course the definition would ask us to take h to zero"}, {"start": 690.88, "end": 693.6, "text": " We're just going to pick a very small h 0.001"}, {"start": 694.08, "end": 696.08, "text": " And let's say we're interested in 0.3.0"}, {"start": 696.24, "end": 698.32, "text": " So we can look at f of x of course is 20"}, {"start": 698.32, "end": 700.48, "text": " And now f of x plus h"}, {"start": 701.0400000000001, "end": 705.0400000000001, "text": " So if we slightly nudge x in a positive direction how is the function going to respond?"}, {"start": 705.7600000000001, "end": 714.0, "text": " And just looking at this do you expect you expect f of x plus h to be slightly greater than 20 or do you expect to be slightly lower than 20?"}, {"start": 714.72, "end": 720.72, "text": " And so since 3 is here and this is 20 if we slightly go positively the function will respond positively"}, {"start": 721.2800000000001, "end": 723.6, "text": " So you'd expect this to be slightly greater than 20"}, {"start": 723.6, "end": 730.96, "text": " And now by how much is telling you the sort of the strength of that slope right the size of the slope"}, {"start": 731.28, "end": 735.44, "text": " So f of x plus h of f of x this is how much the function responded"}, {"start": 736.08, "end": 742.96, "text": " in the positive direction and we have to normalize by the run so we have the rise over run to get the slope"}, {"start": 743.9200000000001, "end": 751.84, "text": " So this of course is just numerical approximation of the slope because we have to make a very very small to converge to the exact amount"}, {"start": 751.84, "end": 754.64, "text": " Now if I'm doing too many zeros"}, {"start": 755.52, "end": 760.24, "text": " At some point I'm going to get an incorrect answer because we're using floating point arithmetic"}, {"start": 760.5600000000001, "end": 766.0, "text": " And the representations of all these numbers in computer memory is finite and at some point we get into trouble"}, {"start": 766.5600000000001, "end": 768.96, "text": " So we can converge towards the right answer with this approach"}, {"start": 770.48, "end": 774.0, "text": " But basically at 3 the slope is 14"}, {"start": 774.64, "end": 779.84, "text": " And you can see that by taking 3x square minus 4x plus 5 and differentiating it in our head"}, {"start": 779.84, "end": 784.24, "text": " So 3x square 
would be 6x minus 4"}, {"start": 784.96, "end": 789.44, "text": " And then we plug in x equals 3 so that's 18 minus 4 is 14 so this is correct"}, {"start": 790.96, "end": 796.48, "text": " So that's at 3 now how about the slope at say negative 3"}, {"start": 797.44, "end": 802.72, "text": " Would you expect would you expect for the slope now telling the exact value is really hard"}, {"start": 802.8000000000001, "end": 804.8000000000001, "text": " But what is the sign of that slope?"}, {"start": 805.12, "end": 806.08, "text": " So at negative 3"}, {"start": 806.08, "end": 811.6800000000001, "text": " If we slightly go in the positive direction at x the function would actually go down"}, {"start": 812.0, "end": 813.84, "text": " And so that tells you that the slope would be negative"}, {"start": 814.0, "end": 815.5200000000001, "text": " So we'll get a slight number below"}, {"start": 816.88, "end": 820.32, "text": " Below 20 and so if we take the slope we expect something negative"}, {"start": 820.96, "end": 822.0, "text": " negative 22"}, {"start": 822.0, "end": 824.0, "text": " Okay"}, {"start": 824.0, "end": 826.96, "text": " And at some point here of course the slope would be 0"}, {"start": 827.2800000000001, "end": 831.6, "text": " Now for this specific function I looked it up previously and it's at point 2 over 3"}, {"start": 832.32, "end": 834.0, "text": " So at roughly 2 over 3"}, {"start": 834.0, "end": 835.52, "text": " But somewhere here"}, {"start": 837.04, "end": 838.72, "text": " This derivative would be 0"}, {"start": 839.28, "end": 841.28, "text": " So basically at that precise point"}, {"start": 843.52, "end": 844.32, "text": " Yeah"}, {"start": 844.32, "end": 848.32, "text": " At that precise point if we nudge in a positive direction the function doesn't respond"}, {"start": 848.32, "end": 851.12, "text": " This stays the same almost and so that's why the slope is 0"}, {"start": 851.52, "end": 853.52, "text": " Okay now let's look at a bit more complex case"}, {"start": 854.4, "end": 857.12, "text": " So we're going to start you know complexifying a bit"}, {"start": 857.36, "end": 859.36, "text": " So now we have a function"}, {"start": 859.76, "end": 861.04, "text": " Here"}, {"start": 861.04, "end": 862.8, "text": " With output variable d"}, {"start": 862.8, "end": 865.4399999999999, "text": " That is a function of 3 scalar inputs a b and c"}, {"start": 866.3199999999999, "end": 868.56, "text": " So a b and c are some specific values"}, {"start": 868.56, "end": 870.7199999999999, "text": " 3 inputs into our expression graph"}, {"start": 870.7199999999999, "end": 872.16, "text": " And a single output d"}, {"start": 872.9599999999999, "end": 875.52, "text": " And so if we just print d we get 4"}, {"start": 876.4799999999999, "end": 881.76, "text": " And now what I like to do is I'd like to again look at the derivative of d with respect to a b and c"}, {"start": 882.56, "end": 883.76, "text": " And think through"}, {"start": 884.56, "end": 886.8, "text": " Again just the intuition of what this derivative is telling us"}, {"start": 887.5999999999999, "end": 889.92, "text": " So in rooted evaluates derivative"}, {"start": 889.92, "end": 894.64, "text": " We're going to get a bit hacky here. 
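For readers following the segments above outside the notebook, here is a minimal sketch of the numerical-slope experiment being described; the function and step size follow the transcript, while the exact notebook cells and variable names are assumptions.

```python
import numpy as np

def f(x):
    # the made-up scalar valued function from the transcript: 3x^2 - 4x + 5
    return 3*x**2 - 4*x + 5

xs = np.arange(-5, 5, 0.25)   # inputs from -5 (inclusive) to 5 (exclusive)
ys = f(xs)                    # f applied elementwise: a parabola
# import matplotlib.pyplot as plt; plt.plot(xs, ys)  # optional plot

h = 0.001
x = 3.0
print((f(x + h) - f(x)) / h)  # rise over run: roughly 6*3 - 4 = 14

x = -3.0
print((f(x + h) - f(x)) / h)  # negative slope: roughly -22

x = 2/3
print((f(x + h) - f(x)) / h)  # near the minimum the slope is roughly 0
```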
We're going to again have a very small value of h"}, {"start": 895.52, "end": 899.76, "text": " And then we're gonna fix the inputs at some values that we're interested in"}, {"start": 900.4799999999999, "end": 904.88, "text": " So these are the this is the point a bc at which we're going to be evaluating the"}, {"start": 905.5999999999999, "end": 908.88, "text": " derivative of d with respect to all a b and c at that point"}, {"start": 909.8399999999999, "end": 913.12, "text": " So there's the inputs and now we have d1 is that expression"}, {"start": 913.8399999999999, "end": 917.04, "text": " And then we're going to for example look at the derivative of d with respect to a"}, {"start": 917.04, "end": 922.64, "text": " So we'll take a and we'll bump it by h and then we'll get d2 to be the exact same function"}, {"start": 923.92, "end": 925.92, "text": " And now we're going to print"}, {"start": 926.88, "end": 930.16, "text": " You know fun I want d1 is d1"}, {"start": 931.1999999999999, "end": 934.0, "text": " d2 is d2 and print slope"}, {"start": 935.28, "end": 938.48, "text": " So the derivative or slope here will be"}, {"start": 939.76, "end": 941.28, "text": " of course"}, {"start": 941.28, "end": 943.52, "text": " d2 minus d1 divided h"}, {"start": 943.52, "end": 947.92, "text": " So d2 minus d1 is how much the function increased"}, {"start": 949.12, "end": 951.12, "text": " when we bumped the"}, {"start": 952.0, "end": 955.1999999999999, "text": " The specific input that we're interested in by a tiny amount and"}, {"start": 956.0, "end": 960.16, "text": " This is the normalized by h to get the slope"}, {"start": 962.88, "end": 964.88, "text": " So"}, {"start": 965.28, "end": 969.68, "text": " Yeah, so this so I just run this we're going to print"}, {"start": 970.48, "end": 972.16, "text": " d1"}, {"start": 972.16, "end": 974.4, "text": " Which we know is for"}, {"start": 975.52, "end": 979.28, "text": " Now d2 will be bumped a will be bumped by h"}, {"start": 980.4, "end": 983.12, "text": " So let's just think through a little bit"}, {"start": 983.92, "end": 985.92, "text": " What d2 will be"}, {"start": 986.3199999999999, "end": 988.3199999999999, "text": " printed out here in particular"}, {"start": 989.36, "end": 991.36, "text": " d1 will be for"}, {"start": 991.28, "end": 996.8, "text": " Will d2 be a number slightly greater than 4 or slightly lower than 4 and it's going to tell us the"}, {"start": 997.4399999999999, "end": 999.4399999999999, "text": " the sign of the derivative"}, {"start": 999.44, "end": 1001.44, "text": " So"}, {"start": 1002.6400000000001, "end": 1004.6400000000001, "text": " We're bumping a by h"}, {"start": 1005.5200000000001, "end": 1007.6800000000001, "text": " b is minus 3 c is 10"}, {"start": 1008.72, "end": 1013.5200000000001, "text": " So you can just in total think through this derivative and what is doing a will be slightly more positive"}, {"start": 1014.96, "end": 1016.72, "text": " And but b is a negative number"}, {"start": 1017.6800000000001, "end": 1019.6800000000001, "text": " So if a is slightly more positive"}, {"start": 1020.48, "end": 1022.48, "text": " Because b is negative 3"}, {"start": 1023.44, "end": 1026.56, "text": " We're actually going to be adding less to d"}, {"start": 1026.56, "end": 1032.72, "text": " So you'd actually expect that the value of the function will go down"}, {"start": 1033.84, "end": 1035.84, "text": " So let's just see this"}, {"start": 1036.6399999999999, "end": 1040.0, "text": " Yeah, and so we went from 4 
to 3.9996"}, {"start": 1040.96, "end": 1043.76, "text": " And that tells you that the slope will be negative and then"}, {"start": 1044.48, "end": 1050.6399999999999, "text": " It will be a negative number because we went down and then the exact number of slope will be"}, {"start": 1051.2, "end": 1052.8799999999999, "text": " exact amount of slope is negative 3"}, {"start": 1052.88, "end": 1056.5600000000002, "text": " And you can also convince yourself that negative 3 is the right answer"}, {"start": 1057.0400000000002, "end": 1058.8000000000002, "text": " mathematically and analytically"}, {"start": 1058.88, "end": 1062.64, "text": " Because if you have a times b plus c and you are you know you have calculus"}, {"start": 1063.2, "end": 1067.7600000000002, "text": " Then differentiating a times b plus c with respect to a gives you just b"}, {"start": 1068.64, "end": 1073.8400000000001, "text": " And indeed the value of b is negative 3 which is the derivative that we have so you can tell that that's correct"}, {"start": 1075.1200000000001, "end": 1076.64, "text": " So now if we do this with b"}, {"start": 1077.8400000000001, "end": 1080.64, "text": " So if we bump b by a little bit in a positive direction"}, {"start": 1080.64, "end": 1085.44, "text": " We'd get different slopes. So what is the influence of b on the output d"}, {"start": 1086.48, "end": 1090.72, "text": " So if we bump b by tiny amount in the positive direction then because a is positive"}, {"start": 1091.68, "end": 1093.2, "text": " We'll be adding more to d"}, {"start": 1093.92, "end": 1094.8000000000002, "text": " Right"}, {"start": 1094.8000000000002, "end": 1099.3600000000001, "text": " So um, and now what is the what is the sensitivity? What is the slope of that addition?"}, {"start": 1100.0, "end": 1102.0, "text": " And it might not surprise you that this should be"}, {"start": 1103.0400000000002, "end": 1104.3200000000002, "text": " 2"}, {"start": 1104.3200000000002, "end": 1108.64, "text": " And why is it 2 because dd by db"}, {"start": 1108.64, "end": 1113.44, "text": " Differentiating with respect to b would give us a and the value of a is 2"}, {"start": 1113.6000000000001, "end": 1115.6000000000001, "text": " So that's also working out"}, {"start": 1115.6000000000001, "end": 1119.1200000000001, "text": " And then if c gets bumped a tiny amount by h"}, {"start": 1120.0800000000002, "end": 1124.24, "text": " Then of course a times b is unaffected and now c becomes slightly bit higher"}, {"start": 1124.48, "end": 1125.8400000000001, "text": " What does that do to the function?"}, {"start": 1125.8400000000001, "end": 1128.4, "text": " It makes it slightly bit higher because we're simply adding c"}, {"start": 1128.96, "end": 1133.1200000000001, "text": " And it makes it slightly bit higher by the exact same amount that we added to c"}, {"start": 1133.2800000000002, "end": 1135.5200000000002, "text": " And so that tells you that the slope is 1"}, {"start": 1135.52, "end": 1138.16, "text": " That will be the um"}, {"start": 1139.28, "end": 1143.12, "text": " The rate at which d will increase as we scale"}, {"start": 1144.24, "end": 1145.28, "text": " c"}, {"start": 1145.28, "end": 1149.12, "text": " Okay, so we now have some intuitive sense of what this derivative is telling you about the function"}, {"start": 1149.44, "end": 1151.12, "text": " And we'd like to move to neural networks"}, {"start": 1151.12, "end": 1154.8799999999999, "text": " Now as I mentioned neural networks will be pretty massive 
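The three-input experiment described in these segments can be sketched the same way; treat the variable names as illustrative rather than the notebook's exact cells.

```python
# d = a*b + c, with each input nudged by a small h in turn
h = 0.0001
a, b, c = 2.0, -3.0, 10.0
d1 = a*b + c                        # 4.0

print(((a + h)*b + c - d1) / h)     # dd/da = b  -> about -3
print((a*(b + h) + c - d1) / h)     # dd/db = a  -> about  2
print((a*b + (c + h) - d1) / h)     # dd/dc = 1  -> about  1
```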
expressions mathematical expressions"}, {"start": 1155.2, "end": 1159.68, "text": " So we need some data structures that maintain these expressions and that's what we're going to start to build out now"}, {"start": 1160.6399999999999, "end": 1162.32, "text": " So we're going to"}, {"start": 1162.32, "end": 1166.8799999999999, "text": " Build out this value object that I showed you in the read me page of micrograd"}, {"start": 1167.6, "end": 1172.56, "text": " So let me copy paste a skeleton of the first very simple value object"}, {"start": 1173.6, "end": 1175.6, "text": " So class value takes a single"}, {"start": 1176.56, "end": 1180.56, "text": " scalar value that it wraps and keeps track of and that's it"}, {"start": 1180.72, "end": 1184.1599999999999, "text": " So we can for example do value of 2.0 and then we can"}, {"start": 1185.52, "end": 1188.1599999999999, "text": " Get we can look at its content and"}, {"start": 1188.16, "end": 1193.76, "text": " um python will internally use the wrapper function to return"}, {"start": 1194.48, "end": 1196.48, "text": " uh this straight clips"}, {"start": 1196.96, "end": 1198.72, "text": " like that"}, {"start": 1198.72, "end": 1202.48, "text": " So this is a value object with data equals to the overcreating here"}, {"start": 1203.3600000000001, "end": 1205.6000000000001, "text": " Now what we'd like to do is like we'd like to be able to"}, {"start": 1207.6000000000001, "end": 1209.6000000000001, "text": " Have not just like two values"}, {"start": 1210.16, "end": 1212.88, "text": " But we'd like to do a blocky right we'd like to add them"}, {"start": 1213.8400000000001, "end": 1217.3600000000001, "text": " So currently you would get an error because python doesn't know how to add"}, {"start": 1217.36, "end": 1220.08, "text": " Two value objects. 
So we have to tell it"}, {"start": 1221.52, "end": 1223.52, "text": " So here's an addition"}, {"start": 1226.3999999999999, "end": 1232.56, "text": " So you have to basically use these special double underscore methods in python to define these operators for these objects"}, {"start": 1233.12, "end": 1234.6399999999999, "text": " so if we call um"}, {"start": 1235.52, "end": 1236.7199999999998, "text": " the"}, {"start": 1236.7199999999998, "end": 1239.1999999999998, "text": " If we use this plus operator"}, {"start": 1239.1999999999998, "end": 1241.6799999999998, "text": " python will internally call a dot"}, {"start": 1241.68, "end": 1247.3600000000001, "text": " Add of b that's what will happen internally and so b will be the other"}, {"start": 1248.24, "end": 1250.4, "text": " And self will be a"}, {"start": 1251.1200000000001, "end": 1254.0800000000002, "text": " And so we see that what we're going to return is a new value object"}, {"start": 1254.0800000000002, "end": 1259.2, "text": " And it's just uh is it going to be wrapping the plus of their data"}, {"start": 1259.92, "end": 1264.16, "text": " But remember now because uh data is the actual like numbered python number"}, {"start": 1264.24, "end": 1269.04, "text": " So this operator here is just the typical floating point plus addition now"}, {"start": 1269.04, "end": 1273.28, "text": " It's not an addition of value objects and will return a new value"}, {"start": 1273.76, "end": 1276.72, "text": " So now a plus b should work and it should print value of"}, {"start": 1277.28, "end": 1278.48, "text": " Negative one"}, {"start": 1278.48, "end": 1280.48, "text": " Because that's two plus minus three"}, {"start": 1280.48, "end": 1281.76, "text": " There we go"}, {"start": 1281.76, "end": 1283.6, "text": " Okay, let's now implement multiply"}, {"start": 1284.32, "end": 1286.32, "text": " Just so we can recreate this expression here"}, {"start": 1286.96, "end": 1290.32, "text": " So multiply I think it won't surprise you will be fairly similar"}, {"start": 1291.76, "end": 1293.84, "text": " So instead of add we're going to be using mul"}, {"start": 1294.3999999999999, "end": 1296.3999999999999, "text": " And then here of course we want to do times"}, {"start": 1296.4, "end": 1300.16, "text": " And so now we can create a C value object which will be 10.0"}, {"start": 1300.8000000000002, "end": 1302.8000000000002, "text": " And now we should be able to do a times b"}, {"start": 1304.0, "end": 1306.0, "text": " Well, let's just do a times b first"}, {"start": 1306.88, "end": 1308.64, "text": " um"}, {"start": 1308.64, "end": 1310.64, "text": " That's value of negative six now"}, {"start": 1310.88, "end": 1314.96, "text": " And by the way I skipped over this a little bit uh suppose that I didn't have the wrapper function here"}, {"start": 1315.6000000000001, "end": 1318.5600000000002, "text": " Then it's just that you'll get some kind of an ugly expression"}, {"start": 1318.96, "end": 1324.5600000000002, "text": " So what wrapper is doing is it's providing us a way to print out like a nicer looking expression in python"}, {"start": 1324.56, "end": 1331.2, "text": " Uh, so we don't just have something cryptic. 
We actually are you know, it's value of negative six"}, {"start": 1332.32, "end": 1340.24, "text": " So this gives us a times and then this we should now be able to add C to it because we've defined and told the python how to do mul and add"}, {"start": 1340.72, "end": 1344.32, "text": " And so this will call this will basically be equivalent to a dot"}, {"start": 1345.2, "end": 1347.2, "text": " mul"}, {"start": 1347.2, "end": 1351.9199999999998, "text": " B and then this new value object will be dot add of C"}, {"start": 1351.92, "end": 1353.92, "text": " And so let's see if that worked"}, {"start": 1354.88, "end": 1358.48, "text": " Yep, so that worked well that gave us four which is what we expect from before"}, {"start": 1359.28, "end": 1362.88, "text": " And I believe you can just call the manually as well. There we go. So"}, {"start": 1364.4, "end": 1368.72, "text": " Yeah, okay, so now what we are missing is the connected tissue of this expression"}, {"start": 1369.04, "end": 1371.3600000000001, "text": " As I mentioned we want to keep these expression graphs"}, {"start": 1371.68, "end": 1376.0, "text": " So we need to know and keep pointers about what values produce what other values"}, {"start": 1376.96, "end": 1380.8000000000002, "text": " So here for example, we are going to introduce a new variable which will call children"}, {"start": 1380.8, "end": 1382.8, "text": " And by default it will be an empty tuple"}, {"start": 1383.6, "end": 1387.9199999999998, "text": " And then we're actually going to keep a slightly different variable in the class which will call underscore prime"}, {"start": 1388.48, "end": 1390.48, "text": " Which will be the set of children"}, {"start": 1391.44, "end": 1395.52, "text": " Uh, this is how I done. I did it in the original micro grad looking at my code here"}, {"start": 1395.84, "end": 1398.48, "text": " I can't remember exactly the reason I believe it was efficiency"}, {"start": 1398.8, "end": 1401.68, "text": " But this underscore children will be a tuple for convenience"}, {"start": 1401.84, "end": 1404.56, "text": " But then when we actually maintain it in the class it will be just this set"}, {"start": 1405.04, "end": 1407.04, "text": " Yes, I believe for efficiency"}, {"start": 1407.76, "end": 1408.96, "text": " um"}, {"start": 1408.96, "end": 1415.52, "text": " So now when we are creating a value like this with a constructor children will be empty and prep will be the empty set"}, {"start": 1416.08, "end": 1418.88, "text": " But when we're creating a value through addition or multiplication"}, {"start": 1419.1200000000001, "end": 1424.8, "text": " We're going to feed in the children of this value which in this case is self and other"}, {"start": 1426.56, "end": 1428.56, "text": " So those are the children here"}, {"start": 1430.8, "end": 1435.3600000000001, "text": " So now we can do d dot prep and we'll see that the children of the"}, {"start": 1435.36, "end": 1443.04, "text": " We now know are this a value of negative six and value of ten and this of course is the value resulting from a times b"}, {"start": 1443.52, "end": 1445.52, "text": " And the c value which is ten"}, {"start": 1446.7199999999998, "end": 1451.76, "text": " Now the last piece of information we don't know so we know now that the children of every single value"}, {"start": 1452.0, "end": 1454.24, "text": " We don't know what operation created this value"}, {"start": 1454.8799999999999, "end": 1457.76, "text": " So we need one more element here. 
Let's call it underscore op"}, {"start": 1459.28, "end": 1461.6799999999998, "text": " And by default this is the empty set for leaves"}, {"start": 1461.68, "end": 1464.72, "text": " And then we'll just maintain it here"}, {"start": 1465.76, "end": 1472.64, "text": " And now the operation will be just a simple string and in the case of addition it's plus in the case of multiplication it's times"}, {"start": 1474.0, "end": 1475.44, "text": " So now we"}, {"start": 1475.44, "end": 1478.16, "text": " Not just have d dot prev we also have a d dot op"}, {"start": 1478.8, "end": 1482.0, "text": " And we know that d was produced by an addition of those two values"}, {"start": 1482.5600000000002, "end": 1485.1200000000001, "text": " And so now we have the full mathematical expression"}, {"start": 1485.8400000000001, "end": 1489.68, "text": " And we're building out this data structure and we know exactly how each value came to be"}, {"start": 1489.68, "end": 1492.0800000000002, "text": " By what expression and from what other values"}, {"start": 1494.8, "end": 1497.3600000000001, "text": " Now because these expressions are about to get quite a bit larger"}, {"start": 1497.76, "end": 1501.76, "text": " We'd like a way to nicely visualize these expressions that we're building out"}, {"start": 1502.16, "end": 1505.52, "text": " So for that I'm going to copy paste a bunch of slightly scary code"}, {"start": 1506.0800000000002, "end": 1509.1200000000001, "text": " That's going to visualize these expression graphs for us"}, {"start": 1509.68, "end": 1511.6000000000001, "text": " So here's the code and I'll explain it in a bit"}, {"start": 1512.0800000000002, "end": 1514.3200000000002, "text": " But first let me just show you what this code does"}, {"start": 1514.32, "end": 1519.9199999999998, "text": " Basically what it does is it creates a new function draw dot that we can call on some root node"}, {"start": 1520.8799999999999, "end": 1522.6399999999999, "text": " And then it's going to visualize it"}, {"start": 1522.6399999999999, "end": 1527.9199999999998, "text": " So if we call draw dot on d which is this final value here that is a times b plus c"}, {"start": 1529.84, "end": 1534.08, "text": " It creates something like this. 
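Condensing the class built up over the last few segments, a sketch of the Value object at this stage might look as follows; the defaults and exact signature are reconstructed from the description, so they may differ slightly from the notebook.

```python
class Value:
    def __init__(self, data, _children=(), _op=''):
        self.data = data
        self._prev = set(_children)   # the values this one was produced from
        self._op = _op                # '+', '*', or '' for a leaf node

    def __repr__(self):
        return f"Value(data={self.data})"

    def __add__(self, other):
        return Value(self.data + other.data, (self, other), '+')

    def __mul__(self, other):
        return Value(self.data * other.data, (self, other), '*')

a = Value(2.0)
b = Value(-3.0)
c = Value(10.0)
d = a*b + c
print(d)          # Value(data=4.0)
print(d._prev)    # children: the a*b result (-6.0) and c (10.0)
print(d._op)      # '+'
```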
So this is d and you see that this is a times b"}, {"start": 1534.72, "end": 1539.04, "text": " Create a value plus c gives us this output node d"}, {"start": 1539.04, "end": 1545.2, "text": " So that's draw dot of d and I'm not going to go through this in complete detail"}, {"start": 1545.52, "end": 1548.0, "text": " You can take a look at graph is and it's api"}, {"start": 1548.0, "end": 1550.8799999999999, "text": " A graph is an open source graph visualization software"}, {"start": 1551.52, "end": 1555.12, "text": " And what we're doing here is we're building out this graph in the graph is api"}, {"start": 1555.92, "end": 1562.24, "text": " And you can basically see that trace is this helper function that enumerates all the nodes and edges in the graph"}, {"start": 1562.8799999999999, "end": 1565.28, "text": " So that's just built a set of all the nodes and edges"}, {"start": 1565.28, "end": 1569.84, "text": " And then we iterate through all the nodes and we create special node objects for them in"}, {"start": 1571.44, "end": 1573.2, "text": " using dot node"}, {"start": 1573.28, "end": 1576.24, "text": " And then we also create edges using dot dot edge"}, {"start": 1576.96, "end": 1582.16, "text": " And the only thing that's like slightly tricky here is you notice that I basically add these fake nodes"}, {"start": 1582.16, "end": 1587.04, "text": " Which are these operation nodes. So for example this node here is just like a plus node"}, {"start": 1587.76, "end": 1590.0, "text": " and I create these"}, {"start": 1590.0, "end": 1596.4, "text": " These special op nodes here and I connect them accordingly"}, {"start": 1596.96, "end": 1601.2, "text": " So these nodes of course are not actual nodes in the original graph"}, {"start": 1601.84, "end": 1606.96, "text": " They're not actually a value object. The only value objects here are the things in squares"}, {"start": 1607.28, "end": 1610.08, "text": " Those are actual value objects or representations thereof"}, {"start": 1610.56, "end": 1614.8, "text": " And these op nodes are just created in this draw dot routine so that it looks nice"}, {"start": 1614.8, "end": 1622.1599999999999, "text": " Let's also add labels to these graphs just so we know what variables are where so let's create a special underscore label"}, {"start": 1622.96, "end": 1628.96, "text": " Um, or let's just do label equals ft by default and save it to each node"}, {"start": 1630.24, "end": 1638.72, "text": " And then here we're going to do label is a label is the label is c"}, {"start": 1640.6399999999999, "end": 1642.6399999999999, "text": " um"}, {"start": 1642.64, "end": 1644.64, "text": " And then"}, {"start": 1644.88, "end": 1646.88, "text": " Let's create a special um"}, {"start": 1647.8400000000001, "end": 1649.8400000000001, "text": " E equals a times b"}, {"start": 1650.88, "end": 1652.88, "text": " And dot label will be"}, {"start": 1654.24, "end": 1655.76, "text": " It's kind of notty"}, {"start": 1655.76, "end": 1657.3600000000001, "text": " And E will be E plus C"}, {"start": 1658.24, "end": 1660.0800000000002, "text": " And a d dot label will be"}, {"start": 1660.88, "end": 1662.88, "text": " B"}, {"start": 1662.72, "end": 1666.0, "text": " Okay, so nothing really changes. 
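The visualization helper is only described loosely in the segments above, so here is a rough reconstruction in the same spirit: trace() walks the graph backwards from a root Value, and draw_dot() renders value nodes as records plus the small fake op nodes mentioned in the transcript. It assumes the graphviz Python package and the Value sketch above, not the notebook's exact code.

```python
from graphviz import Digraph

def trace(root):
    # collect every node and edge reachable from the root Value
    nodes, edges = set(), set()
    def build(v):
        if v not in nodes:
            nodes.add(v)
            for child in v._prev:
                edges.add((child, v))
                build(child)
    build(root)
    return nodes, edges

def draw_dot(root):
    dot = Digraph(format='svg', graph_attr={'rankdir': 'LR'})  # left-to-right layout
    nodes, edges = trace(root)
    for n in nodes:
        uid = str(id(n))
        # rectangular "record" node for the actual Value object
        label = "{ %s | data %.4f }" % (getattr(n, 'label', ''), n.data)
        dot.node(name=uid, label=label, shape='record')
        if n._op:
            # a fake node for the operation that produced this value
            dot.node(name=uid + n._op, label=n._op)
            dot.edge(uid + n._op, uid)
    for n1, n2 in edges:
        # connect each child to the op node of the value it feeds into
        dot.edge(str(id(n1)), str(id(n2)) + n2._op)
    return dot

# draw_dot(d) would then render the little a*b -> e, e + c -> d graph
```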
I just added this new E function"}, {"start": 1666.64, "end": 1668.64, "text": " A new E variable"}, {"start": 1668.5600000000002, "end": 1671.3600000000001, "text": " And then here when we are printing this"}, {"start": 1671.36, "end": 1678.3999999999999, "text": " I'm going to print the label here. So this will be a percent s bar and this will be end dot label"}, {"start": 1681.36, "end": 1683.36, "text": " And so now"}, {"start": 1683.52, "end": 1688.32, "text": " We have the label on the left here. So it says a b creating e and then E plus C creates d"}, {"start": 1688.8, "end": 1690.8, "text": " Just like we have it here"}, {"start": 1690.9599999999998, "end": 1693.6, "text": " And finally, let's make this expression just one layer deeper"}, {"start": 1694.4799999999998, "end": 1696.4799999999998, "text": " So d will not be the final output node"}, {"start": 1696.48, "end": 1701.04, "text": " Instead after d we are going to create a new value object"}, {"start": 1701.92, "end": 1706.56, "text": " Called f we're going to start running out of variable soon f will be negative 2.0"}, {"start": 1707.44, "end": 1709.44, "text": " And it's label will of course just the f"}, {"start": 1710.64, "end": 1714.88, "text": " And then l will capital L will be the output of our graph"}, {"start": 1715.52, "end": 1717.04, "text": " And l will be p times f"}, {"start": 1718.08, "end": 1720.8, "text": " Okay, so l will be negative eight is the output"}, {"start": 1722.16, "end": 1724.16, "text": " Uh, so"}, {"start": 1724.16, "end": 1727.68, "text": " Now we don't just draw a d draw L"}, {"start": 1730.0800000000002, "end": 1732.0800000000002, "text": " Okay"}, {"start": 1732.0800000000002, "end": 1737.44, "text": " And somehow the label of L is undefined loops. I'll that label as to be explicitly"}, {"start": 1737.76, "end": 1739.76, "text": " So given to it"}, {"start": 1739.76, "end": 1741.76, "text": " There we go. So l is the output"}, {"start": 1741.8400000000001, "end": 1744.16, "text": " So let's quickly recap what we've done so far"}, {"start": 1744.24, "end": 1748.3200000000002, "text": " We are able to build out mathematical expressions using only plus and times so far"}, {"start": 1748.32, "end": 1753.6, "text": " Uh, they are scalar valued along the way and we can do this forward pass"}, {"start": 1753.6, "end": 1756.0, "text": " And build out a mathematical expression"}, {"start": 1756.48, "end": 1759.12, "text": " So we have multiple inputs here a b c and f"}, {"start": 1759.6, "end": 1763.4399999999998, "text": " Going into a mathematical expression that produces a single output L"}, {"start": 1764.08, "end": 1766.8, "text": " And this here is vis-visualizing the forward pass"}, {"start": 1767.36, "end": 1770.72, "text": " So the output of the forward pass is negative eight. 
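The labeled expression assembled in these segments, written out as a short sketch; here the label attribute is simply attached to each Value after construction, whereas the notebook wires it through the constructor.

```python
a = Value(2.0);   a.label = 'a'
b = Value(-3.0);  b.label = 'b'
c = Value(10.0);  c.label = 'c'
e = a*b;          e.label = 'e'   # -6
d = e + c;        d.label = 'd'   #  4
f = Value(-2.0);  f.label = 'f'
L = d*f;          L.label = 'L'   # -8, the single output of the forward pass
# draw_dot(L) would now show a, b, c, f feeding into e, d and finally L
```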
That's the value"}, {"start": 1771.76, "end": 1774.8, "text": " Now what we'd like to do next is we'd like to run back propagation"}, {"start": 1774.8, "end": 1778.24, "text": " And in back propagation we are going to start here at the end"}, {"start": 1778.72, "end": 1780.72, "text": " And we're going to reverse"}, {"start": 1781.04, "end": 1784.96, "text": " And calculate the gradient along along all these intermediate values"}, {"start": 1785.6, "end": 1788.32, "text": " And really what we're computing for every single value here"}, {"start": 1788.96, "end": 1794.3999999999999, "text": " Um, we're going to compute the derivative of that node with respect to L"}, {"start": 1795.52, "end": 1799.76, "text": " So the derivative of l with respect to l is just uh one"}, {"start": 1799.76, "end": 1804.72, "text": " And then we're going to derive what is the derivative of l with respect to f with respect to d"}, {"start": 1805.12, "end": 1807.12, "text": " With respect to c with respect to e"}, {"start": 1807.6, "end": 1809.6, "text": " With respect to b and with respect to a"}, {"start": 1810.16, "end": 1811.76, "text": " And in neural network setting"}, {"start": 1812.0, "end": 1816.0, "text": " It'd be very interested in the derivative of basically this loss function l"}, {"start": 1816.8, "end": 1818.8799999999999, "text": " With respect to the weights of a neural network"}, {"start": 1819.44, "end": 1822.16, "text": " And here of course we have just these variables a b c and f"}, {"start": 1822.48, "end": 1825.76, "text": " But some of these will eventually represent the weights of a neural net"}, {"start": 1825.92, "end": 1828.64, "text": " And so we'll need to know how those weights are impacting"}, {"start": 1828.64, "end": 1830.0800000000002, "text": " The loss function"}, {"start": 1830.4, "end": 1832.88, "text": " So we'll be interested basically in the derivative of the output"}, {"start": 1833.0400000000002, "end": 1835.2800000000002, "text": " With respect to some of its leaf nodes"}, {"start": 1835.44, "end": 1837.6000000000001, "text": " And those leaf nodes will be the weights of the neural net"}, {"start": 1838.16, "end": 1840.8000000000002, "text": " And the other leaf nodes of course will be the data itself"}, {"start": 1840.96, "end": 1845.8400000000001, "text": " But usually we will not want or use the derivative of the loss function with respect to data"}, {"start": 1845.92, "end": 1847.2, "text": " Because the data is fixed"}, {"start": 1847.44, "end": 1849.8400000000001, "text": " But the weights will be iterated on"}, {"start": 1850.72, "end": 1852.24, "text": " Using the gradient information"}, {"start": 1852.24, "end": 1855.44, "text": " So next we are going to create a variable inside the value class"}, {"start": 1855.44, "end": 1860.4, "text": " That maintains the derivative of l with respect to that value"}, {"start": 1861.1200000000001, "end": 1863.1200000000001, "text": " And we will call this variable grad"}, {"start": 1863.92, "end": 1866.72, "text": " So there is a dot data and there's a self-adgrad"}, {"start": 1867.3600000000001, "end": 1869.3600000000001, "text": " And initially it will be zero"}, {"start": 1869.52, "end": 1872.88, "text": " And remember that zero is basically means no effect"}, {"start": 1873.04, "end": 1876.56, "text": " So at initialization we're assuming that every value does not impact"}, {"start": 1876.64, "end": 1879.04, "text": " Does not affect the output"}, {"start": 1879.68, "end": 1881.44, "text": " Right because if the 
gradient is zero"}, {"start": 1881.44, "end": 1885.6000000000001, "text": " That means that changing this variable is not changing the loss function"}, {"start": 1885.8400000000001, "end": 1888.24, "text": " So by default we assume that the gradient is zero"}, {"start": 1888.96, "end": 1894.24, "text": " And then now that we have grad and it's 0.0"}, {"start": 1896.72, "end": 1899.76, "text": " We are going to be able to visualize it here after data"}, {"start": 1899.76, "end": 1901.52, "text": " So here grad is 0.4f"}, {"start": 1902.88, "end": 1904.4, "text": " And this will be in that grad"}, {"start": 1905.8400000000001, "end": 1909.44, "text": " And now we are going to be showing both the data and the grad"}, {"start": 1909.44, "end": 1912.0800000000002, "text": " And initialize that zero"}, {"start": 1913.8400000000001, "end": 1916.88, "text": " And we are just about getting ready to calculate the back propagation"}, {"start": 1917.44, "end": 1919.28, "text": " And of course this grad again as I mentioned"}, {"start": 1919.28, "end": 1922.0800000000002, "text": " Is representing the derivative of the output"}, {"start": 1922.0800000000002, "end": 1924.88, "text": " In this case l with respect to this value"}, {"start": 1925.04, "end": 1926.0800000000002, "text": " So with respect to"}, {"start": 1926.0800000000002, "end": 1928.56, "text": " So this is the derivative of l with respect to f"}, {"start": 1928.56, "end": 1930.0800000000002, "text": " Respect to d and so on"}, {"start": 1930.56, "end": 1932.24, "text": " So let's now fill in those gradients"}, {"start": 1932.24, "end": 1934.3200000000002, "text": " And actually do back propagation manually"}, {"start": 1934.3200000000002, "end": 1935.92, "text": " So let's start filling in these gradients"}, {"start": 1935.92, "end": 1937.92, "text": " And start all the way at the end as I mentioned here"}, {"start": 1937.92, "end": 1941.3600000000001, "text": " First we are interested to fill in this gradient here"}, {"start": 1941.92, "end": 1944.5600000000002, "text": " So what is the derivative of l with respect to l"}, {"start": 1945.3600000000001, "end": 1948.0, "text": " In other words if I change l by a tiny amount of h"}, {"start": 1949.2, "end": 1951.44, "text": " How much does l change?"}, {"start": 1952.48, "end": 1953.92, "text": " It changes by h"}, {"start": 1953.92, "end": 1956.5600000000002, "text": " So it's proportional and therefore the derivative will be 1"}, {"start": 1957.44, "end": 1961.1200000000001, "text": " We can of course measure these or estimate these numerical gradients"}, {"start": 1961.1200000000001, "end": 1963.28, "text": " Numerically just like we've seen before"}, {"start": 1963.28, "end": 1964.8000000000002, "text": " So if I take this expression"}, {"start": 1964.8, "end": 1968.32, "text": " And I create a def lol function here"}, {"start": 1969.36, "end": 1970.56, "text": " And put this here"}, {"start": 1970.56, "end": 1973.52, "text": " Now the reason I'm creating a gating function lol here"}, {"start": 1973.52, "end": 1976.72, "text": " Is because I don't want to pollute or mess up the global scope here"}, {"start": 1976.72, "end": 1978.8799999999999, "text": " This is just kind of like a little staging area"}, {"start": 1978.8799999999999, "end": 1982.48, "text": " And as you know in Python all of these will be local variables to this function"}, {"start": 1982.48, "end": 1984.96, "text": " So I'm not changing any of the global scope here"}, {"start": 1985.84, "end": 1987.6, "text": " So here 
l1 will be l"}, {"start": 1990.08, "end": 1992.32, "text": " And then copy based on this expression"}, {"start": 1992.32, "end": 1995.04, "text": " We're going to add a small amount h"}, {"start": 1997.4399999999998, "end": 1998.96, "text": " In for example a"}, {"start": 2000.6399999999999, "end": 2004.6399999999999, "text": " Right and this will be measuring the derivative of l with respect to a"}, {"start": 2005.52, "end": 2007.12, "text": " So here this will be l2"}, {"start": 2008.24, "end": 2010.24, "text": " And then we want to print that derivative"}, {"start": 2010.24, "end": 2012.8799999999999, "text": " So print l2 minus l1"}, {"start": 2012.8799999999999, "end": 2014.8799999999999, "text": " Which is how much l changed"}, {"start": 2015.28, "end": 2017.28, "text": " And then normalize it by h"}, {"start": 2017.4399999999998, "end": 2018.96, "text": " So this is the rise over run"}, {"start": 2018.96, "end": 2022.32, "text": " And we have to be careful because l is a valid node"}, {"start": 2022.32, "end": 2023.8400000000001, "text": " So we actually want its data"}, {"start": 2025.28, "end": 2025.52, "text": " Um"}, {"start": 2026.4, "end": 2029.04, "text": " So that these are floats dividing by h"}, {"start": 2029.04, "end": 2032.32, "text": " And this should print the derivative of l with respect to a"}, {"start": 2032.32, "end": 2035.04, "text": " Because a is the one that we bumped a little bit by h"}, {"start": 2035.76, "end": 2038.88, "text": " So what is the derivative of l with respect to a"}, {"start": 2039.44, "end": 2040.24, "text": " It's 6"}, {"start": 2041.1200000000001, "end": 2041.8400000000001, "text": " Okay"}, {"start": 2041.8400000000001, "end": 2042.64, "text": " And obviously"}, {"start": 2043.68, "end": 2045.92, "text": " If we change l by h"}, {"start": 2045.92, "end": 2048.4, "text": " Then that would be"}, {"start": 2050.0, "end": 2051.52, "text": " Here effectively"}, {"start": 2051.52, "end": 2052.32, "text": " Um"}, {"start": 2052.32, "end": 2055.2000000000003, "text": " This looks really awkward but changing l by h"}, {"start": 2056.2400000000002, "end": 2057.76, "text": " You see the derivative here is one"}, {"start": 2058.88, "end": 2060.08, "text": " Um"}, {"start": 2060.96, "end": 2064.2400000000002, "text": " That's kind of like the base case of what we are doing here"}, {"start": 2064.88, "end": 2066.64, "text": " So basically we come out comp here"}, {"start": 2066.64, "end": 2069.76, "text": " And we can manually set l dot grad to one"}, {"start": 2069.76, "end": 2071.2000000000003, "text": " This is our manual back propagation"}, {"start": 2072.08, "end": 2073.36, "text": " l dot grad is one"}, {"start": 2073.36, "end": 2074.32, "text": " And let's redraw"}, {"start": 2074.32, "end": 2077.36, "text": " And we'll see that we filled in"}, {"start": 2077.36, "end": 2078.56, "text": " Grad is one for l"}, {"start": 2079.2000000000003, "end": 2081.28, "text": " We're now going to continue the back propagation"}, {"start": 2081.28, "end": 2084.7200000000003, "text": " So let's here look at the derivatives of l with respect to d and f"}, {"start": 2085.44, "end": 2087.2000000000003, "text": " Uh let's do a d first"}, {"start": 2087.84, "end": 2090.56, "text": " So what we are interested in if I create a mark down on here"}, {"start": 2090.56, "end": 2091.44, "text": " Is we'd like to know"}, {"start": 2092.0, "end": 2093.84, "text": " Basically we have that l is d times f"}, {"start": 2094.4, "end": 2099.04, "text": " And we'd like to know what 
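A sketch of the throwaway gradient-check function being described here, rebuilding the expression twice inside a local scope so the global variables stay untouched; names are illustrative.

```python
def lol():
    h = 0.001

    a = Value(2.0); b = Value(-3.0); c = Value(10.0); f = Value(-2.0)
    L1 = ((a*b + c) * f).data

    a = Value(2.0 + h)                       # bump only a
    b = Value(-3.0); c = Value(10.0); f = Value(-2.0)
    L2 = ((a*b + c) * f).data

    print((L2 - L1) / h)                     # dL/da, roughly 6

lol()
```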
is uh dl by dd"}, {"start": 2100.32, "end": 2101.1200000000003, "text": " What is that?"}, {"start": 2101.12, "end": 2104.16, "text": " And if you know you're a calculus uh l is d times f"}, {"start": 2104.16, "end": 2106.08, "text": " So what is dl by dd?"}, {"start": 2106.08, "end": 2107.2, "text": " It would be f"}, {"start": 2108.16, "end": 2109.7599999999998, "text": " And if you don't believe me"}, {"start": 2109.7599999999998, "end": 2112.88, "text": " We can also just derive it because the proof would be fairly straightforward"}, {"start": 2113.44, "end": 2116.7999999999997, "text": " Uh we go to the definition of the"}, {"start": 2116.7999999999997, "end": 2117.7599999999998, "text": " A derivative"}, {"start": 2117.7599999999998, "end": 2120.88, "text": " Which is f of x plus h minus f of x divide h"}, {"start": 2122.24, "end": 2122.96, "text": " As a limit"}, {"start": 2122.96, "end": 2125.3599999999997, "text": " Limit of h goes to zero of this kind of expression"}, {"start": 2126.0, "end": 2127.52, "text": " So when we have l is d times f"}, {"start": 2127.52, "end": 2134.56, "text": " Then increasing d by h would give us the output of d plus h times f"}, {"start": 2135.84, "end": 2137.6, "text": " That's basically a full of x plus h, right"}, {"start": 2139.04, "end": 2140.56, "text": " minus d times f"}, {"start": 2142.4, "end": 2143.36, "text": " And then divide h"}, {"start": 2143.92, "end": 2146.0, "text": " And symbolically expanding out here"}, {"start": 2146.0, "end": 2150.96, "text": " We would have basically d times f plus h times f minus d times f"}, {"start": 2151.2, "end": 2151.7599999999998, "text": " divide h"}, {"start": 2152.4, "end": 2154.96, "text": " And then you see how the df minus df cancels"}, {"start": 2154.96, "end": 2157.52, "text": " So you're left with h times f"}, {"start": 2157.52, "end": 2158.32, "text": " divide h"}, {"start": 2158.7200000000003, "end": 2159.28, "text": " Which is f"}, {"start": 2160.08, "end": 2162.48, "text": " So in the limit as h goes to zero of"}, {"start": 2163.36, "end": 2164.56, "text": " You know"}, {"start": 2164.56, "end": 2166.88, "text": " derivative um"}, {"start": 2166.88, "end": 2167.84, "text": " definition"}, {"start": 2167.84, "end": 2170.64, "text": " We just get f in a case of d times f"}, {"start": 2172.4, "end": 2177.36, "text": " So symmetrically dl by d f will just be d"}, {"start": 2178.64, "end": 2180.88, "text": " So what we have is that f dot grad"}, {"start": 2181.44, "end": 2184.56, "text": " We see now is just the value of d"}, {"start": 2184.56, "end": 2186.24, "text": " Which is four"}, {"start": 2188.88, "end": 2193.68, "text": " And we see that d dot grad is just uh the value of f"}, {"start": 2196.96, "end": 2200.08, "text": " And so the value of f is negative two"}, {"start": 2201.36, "end": 2203.36, "text": " So we'll set those manually"}, {"start": 2205.2799999999997, "end": 2208.48, "text": " Let me erase this markdown node and then let's redraw what we have"}, {"start": 2210.96, "end": 2213.84, "text": " Okay, and let's just make sure that these were correct"}, {"start": 2213.84, "end": 2218.32, "text": " So we seem to think that dl by dd is negative two. 
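Writing out the spoken derivation above: with L = d * f, nudging d by h and applying the definition of the derivative gives

```latex
\frac{\partial L}{\partial d}
  = \lim_{h \to 0} \frac{(d+h)f - d f}{h}
  = \lim_{h \to 0} \frac{d f + h f - d f}{h}
  = \lim_{h \to 0} \frac{h f}{h}
  = f,
\qquad
\frac{\partial L}{\partial f} = d \ \text{by symmetry.}
```

With d = 4 and f = -2 this gives d.grad = -2 and f.grad = 4, which is what the numerical checks in the surrounding segments confirm.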
So let's double check"}, {"start": 2219.6000000000004, "end": 2224.2400000000002, "text": " Um, let me erase this plus h from before and now we want the derivative with respect to f"}, {"start": 2225.2000000000003, "end": 2228.32, "text": " So let's just come here when I create f and let's do a plus h here"}, {"start": 2228.8, "end": 2232.88, "text": " And they should print a derivative of l with respect to f. So we expect to see four"}, {"start": 2234.2400000000002, "end": 2237.76, "text": " Yeah, and this is four up to floating point funquiness"}, {"start": 2239.04, "end": 2242.4, "text": " And then dl by dd should be f"}, {"start": 2242.4, "end": 2244.1600000000003, "text": " Which is negative two"}, {"start": 2245.12, "end": 2246.8, "text": " grad is negative two"}, {"start": 2246.8, "end": 2249.6, "text": " So if we again come here and we change d"}, {"start": 2251.92, "end": 2254.48, "text": " d dot d dot plus equals h right here"}, {"start": 2255.28, "end": 2261.44, "text": " So we expect so we've added a little h and then we see how l changed and we expect to print"}, {"start": 2262.48, "end": 2264.48, "text": " Uh negative two"}, {"start": 2264.8, "end": 2266.0, "text": " There we go"}, {"start": 2266.0, "end": 2272.16, "text": " So we've numerically verified what we're doing here is what kind of like an inline gradient check"}, {"start": 2272.64, "end": 2278.8, "text": " gradient check is when we are deriving this like back propagation and getting the derivative with respect to all the intermediate"}, {"start": 2279.2, "end": 2282.08, "text": " results and then numerical gradient is just you know"}, {"start": 2282.56, "end": 2285.6, "text": " Um, estimating it using small step size"}, {"start": 2286.24, "end": 2289.12, "text": " Now we're going to the crux of back propagation"}, {"start": 2289.28, "end": 2294.72, "text": " So this will be the most important node to understand because if you understand the gradient for this node"}, {"start": 2294.72, "end": 2298.9599999999996, "text": " You understand all of back propagation and all of training on neural nets basically"}, {"start": 2299.68, "end": 2305.9199999999996, "text": " So we need to derive dl by dc in other words the derivative of l with respect to c"}, {"start": 2306.64, "end": 2308.9599999999996, "text": " Because we've computed all these other gradients already"}, {"start": 2309.52, "end": 2312.9599999999996, "text": " Now we're coming here and we're continuing the back propagation manually"}, {"start": 2313.7599999999998, "end": 2317.6, "text": " So we want dl by dc and then we'll also derive dl by dE"}, {"start": 2318.48, "end": 2320.16, "text": " Now here's the problem"}, {"start": 2320.16, "end": 2322.7999999999997, "text": " How do we derive dl by dc?"}, {"start": 2322.8, "end": 2329.44, "text": " We actually know the derivative l with respect to d so we know how l is sensitive to d"}, {"start": 2330.2400000000002, "end": 2336.2400000000002, "text": " But how is l sensitive to c? 
So if we wiggle c how does that impact l through d?"}, {"start": 2338.1600000000003, "end": 2340.1600000000003, "text": " So we know dl by dd"}, {"start": 2342.0, "end": 2348.5600000000004, "text": " And we also here know how c impacts d and so just very intuitively if you know the impact that c is having on d"}, {"start": 2348.56, "end": 2355.68, "text": " And the impact that d is having on l then you should be able to somehow put that information together to figure out how c impacts l"}, {"start": 2356.56, "end": 2358.56, "text": " And indeed this is what we can actually do"}, {"start": 2359.2799999999997, "end": 2362.4, "text": " So in particular we know just concentrating on d first"}, {"start": 2362.64, "end": 2366.16, "text": " Let's look at how what is the derivative basically of d with respect to c?"}, {"start": 2366.56, "end": 2368.56, "text": " So in other words what is dd by dc"}, {"start": 2371.68, "end": 2375.2, "text": " So here we know that d is c plus e"}, {"start": 2375.2, "end": 2379.04, "text": " That's what we know and we're interested in dd by dc"}, {"start": 2379.68, "end": 2384.7999999999997, "text": " If you would just know your calculus again and you remember then differentiating c plus e with respect to c"}, {"start": 2385.04, "end": 2387.3599999999997, "text": " You know that that gives you 1.0"}, {"start": 2388.0, "end": 2394.24, "text": " And we can also go back to the basics and derive this because again we can go to our f of x plus h minus f of x"}, {"start": 2394.3999999999996, "end": 2395.52, "text": " divided by h"}, {"start": 2396.56, "end": 2399.3599999999997, "text": " That's the definition of a derivative as h goes to 0"}, {"start": 2400.16, "end": 2401.68, "text": " And so here"}, {"start": 2401.68, "end": 2410.08, "text": " Focusing on c and its effect on d we can basically do the f of x plus h will be c is incremented by h plus e"}, {"start": 2411.04, "end": 2415.04, "text": " That's the first evaluation of our function minus c plus e"}, {"start": 2416.64, "end": 2419.3599999999997, "text": " And then divide h and so what is this?"}, {"start": 2420.24, "end": 2424.48, "text": " Just expanding this out this will be c plus h plus e minus c minus e"}, {"start": 2424.48, "end": 2432.48, "text": " And then you see here how c minus c cancels e minus e cancels we're left with h over h which is 1.0"}, {"start": 2433.68, "end": 2441.76, "text": " And so by symmetry also dd by de will be 1.0 as well"}, {"start": 2443.04, "end": 2447.76, "text": " So basically the derivative of a sum expression is very simple and this is the local derivative"}, {"start": 2448.4, "end": 2453.28, "text": " So I call this the local derivative because we have the final output value all the way at the end of this graph"}, {"start": 2453.28, "end": 2457.52, "text": " And we're now like a small node here and this is a little plus node"}, {"start": 2458.0800000000004, "end": 2463.1200000000003, "text": " And the little plus node doesn't know anything about the rest of the graph that it's embedded in"}, {"start": 2463.6000000000004, "end": 2468.6400000000003, "text": " All it knows is that it did a plus it took a c and an e added them and created a d"}, {"start": 2469.2000000000003, "end": 2472.8, "text": " And this plus node also knows the local influence of c on d"}, {"start": 2473.28, "end": 2476.0, "text": " Or rather the derivative of d with respect to c"}, {"start": 2476.2400000000002, "end": 2479.28, "text": " And it also knows the derivative of d 
with respect to e"}, {"start": 2479.28, "end": 2482.48, "text": " But that's not what we want. That's just a local derivative"}, {"start": 2482.88, "end": 2488.8, "text": " What we actually want is dl by dc and l could l is here just one step away"}, {"start": 2489.28, "end": 2493.84, "text": " But in a general case this little plus node is could be embedded in like a massive graph"}, {"start": 2494.7200000000003, "end": 2495.92, "text": " So"}, {"start": 2495.92, "end": 2500.48, "text": " Again, we know how l impacts d and now we know how c and e impact d"}, {"start": 2500.88, "end": 2506.96, "text": " How do we put that information together to write dl by dc and the answer of course is the chain rule in calculus"}, {"start": 2506.96, "end": 2508.96, "text": " And so"}, {"start": 2510.16, "end": 2512.16, "text": " I pulled up a chain rule here from capidia"}, {"start": 2513.12, "end": 2516.4, "text": " And I'm going to go through this very briefly. So chain rule"}, {"start": 2517.36, "end": 2523.12, "text": " We capidia sometimes can be very confusing and calculus can can be very confusing like this is the way I"}, {"start": 2523.68, "end": 2525.12, "text": " learned"}, {"start": 2525.2, "end": 2531.28, "text": " Chain rule and was very confusing like what is happening? It's just complicated. So I like this expression much better"}, {"start": 2531.28, "end": 2537.1200000000003, "text": " If a variable z depends on a variable y which itself depends on a variable x"}, {"start": 2538.1600000000003, "end": 2541.84, "text": " Then z depends on x as well obviously through the intermediate variable y"}, {"start": 2542.48, "end": 2547.2000000000003, "text": " And in this case the chain rule is expressed as if you want dz by dx"}, {"start": 2548.2400000000002, "end": 2552.96, "text": " Then you take the dz by dy and you multiply it by dy by dx"}, {"start": 2553.84, "end": 2556.48, "text": " So the chain rule fundamentally is telling you how"}, {"start": 2557.44, "end": 2559.44, "text": " We chain these"}, {"start": 2559.44, "end": 2561.44, "text": " Uh derivatives together"}, {"start": 2561.68, "end": 2562.48, "text": " correctly"}, {"start": 2562.48, "end": 2565.6, "text": " So to differentiate through a function composition"}, {"start": 2566.2400000000002, "end": 2570.56, "text": " We have to apply a multiplication of those derivatives"}, {"start": 2571.84, "end": 2577.28, "text": " So that's really what chain rule is telling us and there's a nice little intuitive explanation here"}, {"start": 2577.28, "end": 2579.28, "text": " Which I also think is kind of cute"}, {"start": 2579.28, "end": 2587.2000000000003, "text": " The chain rule says that knowing the instantaneous rate of change of z with respect to y and y relative to x allows one to calculate the instantaneous rate of change of z relative to x"}, {"start": 2587.2, "end": 2591.9199999999996, "text": " As a product of those two rates of change simply the product of those two"}, {"start": 2592.64, "end": 2594.64, "text": " So here's a good one"}, {"start": 2594.64, "end": 2599.3599999999997, "text": " If a car travels twice as fast as a bicycle and the bicycle is four times as fast as a walking man"}, {"start": 2600.08, "end": 2604.3199999999997, "text": " Then the car travels two times four eight times as fast as a man"}, {"start": 2605.2799999999997, "end": 2610.24, "text": " And so this makes it very clear that the correct thing to do sort of is to multiply"}, {"start": 2611.12, "end": 2616.0, "text": " So cars twice as fast as a bicycle 
and bicycle is four times as fast as man"}, {"start": 2616.0, "end": 2619.52, "text": " So the car will be eight times as fast as the man"}, {"start": 2620.4, "end": 2622.4, "text": " And so we can take these"}, {"start": 2622.4, "end": 2624.4, "text": " intermediate rates of change if you will and"}, {"start": 2624.88, "end": 2628.0, "text": " Multiply them together and that justifies the"}, {"start": 2628.72, "end": 2636.48, "text": " Chain rule intuitively. So I have a look at chain rule about here. Really what it means for us is there's a very simple recipe for deriving what we want"}, {"start": 2636.56, "end": 2638.56, "text": " Which is dfl by dc"}, {"start": 2639.76, "end": 2642.32, "text": " And what we have so far is we know"}, {"start": 2642.32, "end": 2644.32, "text": " one"}, {"start": 2645.28, "end": 2647.28, "text": " And we know"}, {"start": 2647.6000000000004, "end": 2655.28, "text": " What is the impact of d on l so we know dl by dd the derivative of l respect to dd"}, {"start": 2655.6000000000004, "end": 2658.96, "text": " We know that that's negative two and now because of this local"}, {"start": 2659.52, "end": 2663.36, "text": " Reason that we've done here we know dd by dc"}, {"start": 2664.6400000000003, "end": 2668.96, "text": " So how does c impact d and in particular this is a plus node"}, {"start": 2668.96, "end": 2672.48, "text": " So the local derivative is simply 1.0 is very simple"}, {"start": 2673.36, "end": 2676.64, "text": " And so the chain rule tells us that dl by dc"}, {"start": 2677.44, "end": 2679.44, "text": " going through this intermediate variable"}, {"start": 2680.2400000000002, "end": 2685.36, "text": " We'll just be simply dl by dd times"}, {"start": 2689.36, "end": 2691.36, "text": " dd by dc"}, {"start": 2691.76, "end": 2693.28, "text": " That's chain rule"}, {"start": 2693.36, "end": 2697.28, "text": " So this is identical to what's happening here except"}, {"start": 2697.28, "end": 2702.6400000000003, "text": " Z is rl y is our d and x is our c"}, {"start": 2703.84, "end": 2707.52, "text": " So we literally just have to multiply these and because"}, {"start": 2710.4, "end": 2713.28, "text": " These local derivatives like dd by dc are just one"}, {"start": 2714.8, "end": 2718.7200000000003, "text": " We basically just copy over dl by dd because this is just times one"}, {"start": 2719.6000000000004, "end": 2724.8, "text": " So what is it did so because dl by dd is negative two what is dl by dc?"}, {"start": 2724.8, "end": 2730.7200000000003, "text": " Well, it's the local gradient 1.0 times dl by dd which is negative two"}, {"start": 2731.6000000000004, "end": 2737.04, "text": " So literally what a plus node does you can look at it that way is it literally just routes the gradient"}, {"start": 2737.6000000000004, "end": 2742.8, "text": " Because the plus nodes local derivatives are just one and so in the chain rule one times"}, {"start": 2743.6800000000003, "end": 2745.6800000000003, "text": " dl by dd"}, {"start": 2745.76, "end": 2747.76, "text": " is"}, {"start": 2747.76, "end": 2754.6400000000003, "text": " Is just dl by dd and so that derivative just gets routed to both c and to e in the skates"}, {"start": 2755.6000000000004, "end": 2757.36, "text": " So basically"}, {"start": 2757.36, "end": 2759.36, "text": " We have that e dot grad"}, {"start": 2759.44, "end": 2762.8, "text": " Or what's our good c since that's the one we built that is"}, {"start": 2764.0800000000004, "end": 2767.5200000000004, "text": " negative two times one 
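The chain-rule step just described, as a tiny numeric sketch: a plus node has local derivatives of 1.0, so it simply routes dL/dd on to its children.

```python
dL_dd = -2.0                 # known from the previous step
dd_dc = 1.0                  # local derivative of d = c + e w.r.t. c
dd_de = 1.0                  # ... and w.r.t. e

dL_dc = dL_dd * dd_dc        # chain rule: -2.0
dL_de = dL_dd * dd_de        # -2.0
print(dL_dc, dL_de)
```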
negative two"}, {"start": 2768.6400000000003, "end": 2773.44, "text": " And in the same way by symmetry e dot grad will be negative two that's the claim"}, {"start": 2774.4, "end": 2776.4, "text": " So we can set those"}, {"start": 2776.4, "end": 2778.4, "text": " We can redraw"}, {"start": 2779.52, "end": 2781.52, "text": " And you see how we just assign negative two negative two"}, {"start": 2782.1600000000003, "end": 2788.2400000000002, "text": " So there's back propagating signal which is carrying the information of like what is the derivative of l with respect to all the intermediate nodes"}, {"start": 2789.04, "end": 2795.92, "text": " We can imagine it almost like flowing backwards through the graph and a plus node will simply distribute the derivative to all the leaf nodes"}, {"start": 2796.08, "end": 2798.08, "text": " Assuring to all the children nodes of it"}, {"start": 2799.2000000000003, "end": 2801.2000000000003, "text": " So this is the claim and now let's verify it"}, {"start": 2802.0, "end": 2804.4, "text": " So let me remove the plus h here from before"}, {"start": 2804.4, "end": 2810.08, "text": " And now instead what we're going to do is we want to increment c so c dot data will be incremented by h"}, {"start": 2810.8, "end": 2812.96, "text": " And when I run this we expect to see negative two"}, {"start": 2814.48, "end": 2817.28, "text": " Negative two and then of course for e"}, {"start": 2818.96, "end": 2822.2400000000002, "text": " So e dot data plus equals h and we expect to see negative two"}, {"start": 2823.2000000000003, "end": 2825.2000000000003, "text": " Simple"}, {"start": 2827.36, "end": 2830.2400000000002, "text": " So those are the derivatives of these internal nodes"}, {"start": 2830.24, "end": 2834.64, "text": " And now we're going to recurse our way backwards again"}, {"start": 2835.6, "end": 2837.6, "text": " And we're again going to apply the chain rule"}, {"start": 2838.0, "end": 2842.64, "text": " So here we go our second application of chain rule and we will apply it all the way through the graph"}, {"start": 2842.72, "end": 2844.72, "text": " Which just happened to only have one more node remaining"}, {"start": 2845.52, "end": 2847.9199999999996, "text": " We have that d l by d e"}, {"start": 2848.64, "end": 2851.6, "text": " As we have just calculated is negative two so we know that"}, {"start": 2852.72, "end": 2854.64, "text": " So we know the derivative of l with respect to e"}, {"start": 2854.64, "end": 2861.12, "text": " And now we want d l by d a"}, {"start": 2861.8399999999997, "end": 2866.4, "text": " Right and the chain rule is telling us that that's just d l by d e"}, {"start": 2869.12, "end": 2871.7599999999998, "text": " Negative two times the local gradient"}, {"start": 2872.4, "end": 2876.3199999999997, "text": " So what is the local gradient basically d e by d a"}, {"start": 2877.12, "end": 2879.12, "text": " We have to look at that"}, {"start": 2880.08, "end": 2882.08, "text": " So I'm a little times node"}, {"start": 2882.08, "end": 2887.84, "text": " Inside a massive graph and I only know that I did a times b and I produced an e"}, {"start": 2889.2, "end": 2895.68, "text": " So now what is d e by d a and d e by d b that's the only thing that I sort of know about that's my local gradient"}, {"start": 2897.2, "end": 2899.6, "text": " So because we have that e is a times b"}, {"start": 2900.48, "end": 2902.64, "text": " We're asking what is d e by d a"}, {"start": 2904.16, "end": 2909.6, "text": " And of course we just did that 
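This nudge-by-h verification can be written as a small sketch with plain floats, assuming the values used earlier in this example (a=2, b=-3, c=10, f=-2, with e=a*b, d=e+c, L=d*f):

```python
def forward(a, b, c, f):
    e = a * b
    d = e + c
    L = d * f
    return L

h = 1e-6
base = forward(2.0, -3.0, 10.0, -2.0)   # L = -8.0

# numerical estimate of dL/dc: nudge c by h and measure how much L moves
dL_dc = (forward(2.0, -3.0, 10.0 + h, -2.0) - base) / h
print(dL_dc)  # ~ -2.0, matching the chain-rule claim
```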
here we have a times so I'm not going to redrive it"}, {"start": 2909.6, "end": 2915.92, "text": " But if you want to differentiate this with respect to a you'll just get b right the value of b"}, {"start": 2916.72, "end": 2919.04, "text": " Which in this case is negative 3.0"}, {"start": 2921.12, "end": 2924.3199999999997, "text": " So basically we have that d l by d a"}, {"start": 2925.2799999999997, "end": 2927.2, "text": " Well, let me just do it right here"}, {"start": 2927.2, "end": 2930.24, "text": " We have that a dot grad and we are applying chain rule here"}, {"start": 2930.88, "end": 2934.88, "text": " Is d l by d e which we see here is negative two"}, {"start": 2934.88, "end": 2938.88, "text": " times what is d e by d a"}, {"start": 2939.92, "end": 2942.7200000000003, "text": " It's the value of b which is negative three"}, {"start": 2945.04, "end": 2947.04, "text": " That's it"}, {"start": 2948.0, "end": 2952.48, "text": " And then we have b dot grad is again d l by d e which is negative two"}, {"start": 2953.2000000000003, "end": 2955.2000000000003, "text": " Just the same way times"}, {"start": 2955.6800000000003, "end": 2957.44, "text": " What is d e by d"}, {"start": 2958.0, "end": 2962.8, "text": " um d b is the value of a which is 2.0"}, {"start": 2962.8, "end": 2964.8, "text": " That's the value of a"}, {"start": 2965.76, "end": 2968.0800000000004, "text": " So these are our claimed derivatives"}, {"start": 2968.88, "end": 2970.6400000000003, "text": " Let's"}, {"start": 2970.6400000000003, "end": 2973.1200000000003, "text": " read draw and we see here that"}, {"start": 2973.92, "end": 2977.84, "text": " a dot grad turns out to be six because that is negative two times negative three"}, {"start": 2978.5600000000004, "end": 2980.32, "text": " And b dot grad is negative four"}, {"start": 2981.2000000000003, "end": 2984.32, "text": " times sorry is negative two times two which is negative four"}, {"start": 2985.6000000000004, "end": 2988.5600000000004, "text": " So those are our claims let's delete this and let's verify them"}, {"start": 2990.6400000000003, "end": 2992.6400000000003, "text": " We have"}, {"start": 2992.64, "end": 2995.44, "text": " a here a dot data plus equals h"}, {"start": 2997.6, "end": 3002.56, "text": " So the claim is that a dot grad is six. Let's verify"}, {"start": 3003.6, "end": 3008.16, "text": " six and we have b dot data plus equals h"}, {"start": 3008.8799999999997, "end": 3012.48, "text": " So nudging b by h and looking at what happens"}, {"start": 3013.12, "end": 3014.4, "text": " We claim it's negative four"}, {"start": 3015.2799999999997, "end": 3019.6, "text": " And indeed it's negative four plus minus again float oddness"}, {"start": 3020.64, "end": 3021.92, "text": " um"}, {"start": 3021.92, "end": 3023.44, "text": " And uh"}, {"start": 3023.44, "end": 3026.0, "text": " That's it. 
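In code, the times-node rule from this step is just a multiplication by the other operand's value, using the same numbers:

```python
a, b = 2.0, -3.0
dL_de = -2.0            # derivative of L with respect to e, from the previous step

# local derivatives of e = a * b
de_da = b               # -3.0
de_db = a               #  2.0

a_grad = dL_de * de_da  # -2 * -3 =  6.0
b_grad = dL_de * de_db  # -2 *  2 = -4.0
print(a_grad, b_grad)
```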
This that was the manual"}, {"start": 3026.88, "end": 3028.64, "text": " backpropagation"}, {"start": 3028.64, "end": 3032.96, "text": " All the way from here to all the leaf notes and I've done it piece by piece"}, {"start": 3033.44, "end": 3038.0, "text": " And really all we've done is as you saw we iterated through all the nodes one by one"}, {"start": 3038.0, "end": 3040.0, "text": " And locally applied the chain rule"}, {"start": 3040.0, "end": 3044.0, "text": " We always know what is the derivative of l with respect to this little output"}, {"start": 3044.32, "end": 3046.56, "text": " And then we look at how this output was produced"}, {"start": 3046.56, "end": 3049.12, "text": " This output was produced through some operation"}, {"start": 3049.12, "end": 3052.16, "text": " And we have the pointers to the children nodes of this operation"}, {"start": 3052.7999999999997, "end": 3056.0, "text": " And so in this little operation we know what the local derivatives are"}, {"start": 3056.56, "end": 3059.04, "text": " And we just multiply them onto the derivative always"}, {"start": 3059.7599999999998, "end": 3064.16, "text": " So we just go through and recursively multiply on the local derivatives"}, {"start": 3064.16, "end": 3068.24, "text": " And that's what backpropagation is is just a recursive application of chain rule"}, {"start": 3068.24, "end": 3069.7599999999998, "text": " Backwards through the computation graph"}, {"start": 3070.64, "end": 3073.68, "text": " Let's see this power in action just very briefly"}, {"start": 3073.68, "end": 3075.12, "text": " What we're good to do is we're going to"}, {"start": 3075.12, "end": 3078.96, "text": " Uh, nudge our inputs to try to make l go up"}, {"start": 3079.8399999999997, "end": 3083.6, "text": " So in particular what we're doing is we want a dot data. We're going to change it"}, {"start": 3084.3199999999997, "end": 3088.64, "text": " And if we want l to go up that means we just have to go in the direction of the gradient"}, {"start": 3088.96, "end": 3090.96, "text": " So a"}, {"start": 3091.12, "end": 3095.92, "text": " Should increase in the direction of gradient by like some small step amount. This is the step size"}, {"start": 3096.64, "end": 3098.96, "text": " And we don't just want this for b, but also for b"}, {"start": 3101.6, "end": 3103.6, "text": " Also for c"}, {"start": 3103.6, "end": 3105.6, "text": " Also for f"}, {"start": 3106.3199999999997, "end": 3110.08, "text": " Those are leaf nodes which we usually have control over"}, {"start": 3110.7999999999997, "end": 3115.36, "text": " And if we nudge in direction of the gradient we expect a positive influence on l"}, {"start": 3115.92, "end": 3119.2, "text": " So we expect l to go up positively"}, {"start": 3119.7599999999998, "end": 3124.88, "text": " Uh, so it should become less negative it should go up to say negative, you know, six or something like that"}, {"start": 3126.16, "end": 3130.72, "text": " Uh, it's hard to tell exactly and we have to reroute the forward path. So let me just um"}, {"start": 3130.72, "end": 3134.3199999999997, "text": " Do that here um"}, {"start": 3136.64, "end": 3141.4399999999996, "text": " This would be the forward pass f would be unchanged. 
This is effectively the forward pass"}, {"start": 3142.0, "end": 3144.0, "text": " And now if we print l dot data"}, {"start": 3144.7999999999997, "end": 3149.2799999999997, "text": " We expect because we nudge all the values or the inputs in the rational gradient"}, {"start": 3149.2799999999997, "end": 3152.16, "text": " We expect it less negative l. We expect it to go up"}, {"start": 3152.9599999999996, "end": 3155.6, "text": " So maybe it's negative six or so. Let's see what happens"}, {"start": 3156.7999999999997, "end": 3158.48, "text": " Okay negative seven"}, {"start": 3158.48, "end": 3163.68, "text": " And uh, this is basically one step of an optimization that will end up running"}, {"start": 3164.0, "end": 3168.88, "text": " And really this gradient just give us some power because we know how to influence the final outcome"}, {"start": 3169.44, "end": 3172.08, "text": " And this will be extremely useful for training. You know, that's as we'll see"}, {"start": 3173.04, "end": 3181.84, "text": " So now I would like to do one more uh example of manual back propagation using a bit more complex and uh useful example"}, {"start": 3182.4, "end": 3184.64, "text": " We are going to back propagate through a neuron"}, {"start": 3185.52, "end": 3187.52, "text": " so"}, {"start": 3187.52, "end": 3192.96, "text": " We want to eventually build out neuron that works in an as simplest cases are multi-layer perceptrons as they're called"}, {"start": 3193.04, "end": 3195.04, "text": " So this is a two layer neuron that"}, {"start": 3195.7599999999998, "end": 3199.52, "text": " And it's got these hidden layers made up of neurons and these neurons are fully connected to each other"}, {"start": 3200.08, "end": 3202.88, "text": " Now biologically neurons are very complicated devices"}, {"start": 3202.96, "end": 3205.52, "text": " But we have very simple mathematical models of them"}, {"start": 3206.16, "end": 3210.96, "text": " And so this is a very simple mathematical model of a neuron. You have some inputs xs"}, {"start": 3211.84, "end": 3215.04, "text": " And then you have these synapses that have weights on them"}, {"start": 3215.04, "end": 3218.16, "text": " So um, the w's are weights"}, {"start": 3219.04, "end": 3223.2799999999997, "text": " Um, and then the synapse interacts with the input to this neuron"}, {"start": 3223.6, "end": 3224.64, "text": " multiplicatively"}, {"start": 3224.64, "end": 3226.64, "text": " So what flows to the cell body"}, {"start": 3227.2799999999997, "end": 3229.44, "text": " Of this neuron is w times x"}, {"start": 3230.08, "end": 3234.24, "text": " But there's multiple inputs. 
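The single optimization step being described looks roughly like this with plain floats; the step size 0.01 and the gradient of f (which works out to d = e + c = 4) are assumptions spelled out here rather than read off the lecture:

```python
step = 0.01  # a small, made-up step size

a, b, c, f = 2.0, -3.0, 10.0, -2.0
a_grad, b_grad, c_grad, f_grad = 6.0, -4.0, -2.0, 4.0  # dL/df = d = e + c = 4 (assumed)

# nudge every leaf in the direction of its gradient to push L up
a += step * a_grad
b += step * b_grad
c += step * c_grad
f += step * f_grad

L = (a * b + c) * f
print(L)  # a bit less negative than the original -8.0
```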
There's many w times x is flowing to the cell body"}, {"start": 3235.12, "end": 3237.44, "text": " The cell body then has also like some bias"}, {"start": 3238.08, "end": 3239.84, "text": " So this is kind of like the"}, {"start": 3239.84, "end": 3243.2, "text": " In in their innate sort of trigger happiness of this neuron"}, {"start": 3243.2, "end": 3247.9199999999996, "text": " So this bias can make it a bit more trigger happy or a little less trigger happy regardless of the input"}, {"start": 3248.56, "end": 3250.7999999999997, "text": " But basically we're taking all the w times x"}, {"start": 3251.4399999999996, "end": 3256.08, "text": " Of all the inputs adding the bias and then we take it through an activation function"}, {"start": 3257.12, "end": 3259.9199999999996, "text": " And this activation function is usually some kind of a squashing function"}, {"start": 3260.48, "end": 3264.16, "text": " Like a sigmoid or 10-h or something like that. So as an example"}, {"start": 3264.7999999999997, "end": 3266.7999999999997, "text": " We're going to use the 10-h in this example"}, {"start": 3267.6, "end": 3269.04, "text": " um, numpy has a"}, {"start": 3269.8399999999997, "end": 3271.3599999999997, "text": " NP.10-h"}, {"start": 3271.36, "end": 3274.48, "text": " So um, we can call it on a range"}, {"start": 3275.04, "end": 3276.7200000000003, "text": " Then we can plot it"}, {"start": 3276.7200000000003, "end": 3280.48, "text": " This is the 10-h function and you see that the inputs as they come in"}, {"start": 3281.2000000000003, "end": 3284.6400000000003, "text": " Get squashed on the wipe coordinate here. So um"}, {"start": 3285.44, "end": 3290.1600000000003, "text": " Right at zero. We're going to get exactly zero and then as you go more positive in the input"}, {"start": 3290.96, "end": 3294.88, "text": " Then you'll see that the function will only go up to one and then plateau out"}, {"start": 3295.92, "end": 3300.48, "text": " And so if you pass in very positive inputs, we're gonna cap it smoothly at one"}, {"start": 3300.48, "end": 3303.68, "text": " And on the negative side we're gonna cap it smoothly to negative one"}, {"start": 3304.64, "end": 3309.44, "text": " So that's 10-h and that's the squashing function or an activation function"}, {"start": 3309.92, "end": 3316.16, "text": " And what comes out of this neuron is just the activation function applied to the dot product of the weights and the"}, {"start": 3316.88, "end": 3318.16, "text": " inputs"}, {"start": 3318.2400000000002, "end": 3320.2400000000002, "text": " So let's write one out"}, {"start": 3321.12, "end": 3322.64, "text": " um"}, {"start": 3322.64, "end": 3324.64, "text": " I'm going to copy paste because"}, {"start": 3327.44, "end": 3329.12, "text": " I don't want to type too much"}, {"start": 3329.12, "end": 3331.12, "text": " But okay, so here we have the inputs"}, {"start": 3331.6, "end": 3335.04, "text": " x1 x2 so this is a two-dimensional neuron. 
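The little tanh plot described here can be reproduced with something like the following (the exact range is not important):

```python
import numpy as np
import matplotlib.pyplot as plt

xs = np.arange(-5, 5, 0.2)
plt.plot(xs, np.tanh(xs))   # squashes to -1 on the left, +1 on the right, 0 at 0
plt.grid()
plt.show()
```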
So two inputs are gonna come in"}, {"start": 3336.0, "end": 3338.24, "text": " These are thought out as the weights of the neuron"}, {"start": 3338.96, "end": 3344.3199999999997, "text": " weights w1 w2 and these weights again are the synaptic strengths for each input"}, {"start": 3345.2799999999997, "end": 3347.2799999999997, "text": " And this is the bias of the neuron"}, {"start": 3347.6, "end": 3349.2799999999997, "text": " b"}, {"start": 3349.2799999999997, "end": 3354.88, "text": " And now we want to do is according to this model we need to multiply x1 times w1"}, {"start": 3355.68, "end": 3357.68, "text": " and x2 times w2"}, {"start": 3357.68, "end": 3360.64, "text": " And then we need to add bias on top of it"}, {"start": 3361.52, "end": 3367.44, "text": " And it gets a little messy here, but all we are trying to do is x1 w1 plus x2 w2 plus b"}, {"start": 3368.0, "end": 3370.0, "text": " And these are multiply here"}, {"start": 3370.0, "end": 3374.8799999999997, "text": " Except I'm doing it in small steps so that we actually have pointers to all these intermediate nodes"}, {"start": 3375.2, "end": 3380.48, "text": " So we have x1 w1 variable x times x2 w2 variable and I'm also labeling them"}, {"start": 3381.8399999999997, "end": 3383.8399999999997, "text": " So n is now"}, {"start": 3383.8399999999997, "end": 3385.8399999999997, "text": " the cell body raw"}, {"start": 3385.84, "end": 3389.76, "text": " activation without the activation function for now"}, {"start": 3390.6400000000003, "end": 3394.7200000000003, "text": " And this should be enough to basically plot it so draw out of n"}, {"start": 3397.92, "end": 3400.96, "text": " Gives us x1 times w1 x2 times w2"}, {"start": 3402.0, "end": 3408.56, "text": " Being added then the bias gets added on top of this and this n is this sum"}, {"start": 3409.44, "end": 3411.84, "text": " So we're now going to take it through an activation function"}, {"start": 3411.84, "end": 3416.0, "text": " And let's say we use the 10h so that we produce the output"}, {"start": 3416.6400000000003, "end": 3421.84, "text": " So what we'd like to do here is we'd like to do the output and I'll call it O is"}, {"start": 3423.2000000000003, "end": 3425.2000000000003, "text": " n dot 10h"}, {"start": 3425.28, "end": 3427.6800000000003, "text": " Okay, but we haven't yet written the 10h"}, {"start": 3428.4, "end": 3432.32, "text": " Now the reason that we need to implement another 10h function here is that"}, {"start": 3433.04, "end": 3434.88, "text": " 10h is a"}, {"start": 3434.88, "end": 3441.36, "text": " hyperbolic function and we've only so far implemented plus and at times and you can't make a 10h out of just pluses and times"}, {"start": 3442.08, "end": 3443.92, "text": " You also need explanation"}, {"start": 3443.92, "end": 3446.32, "text": " So 10h is this kind of a formula here"}, {"start": 3447.28, "end": 3449.44, "text": " You can use either one of these and you see that there is"}, {"start": 3449.6800000000003, "end": 3454.2400000000002, "text": " Explanation involved which we have not implemented yet for our little value node here"}, {"start": 3454.48, "end": 3458.32, "text": " So we're not going to be able to produce 10h yet and we have to go back up and implement something like it"}, {"start": 3459.2000000000003, "end": 3461.2000000000003, "text": " now one option here"}, {"start": 3461.2, "end": 3463.2, "text": " is"}, {"start": 3463.2, "end": 3465.2, "text": " We could actually implement"}, {"start": 3465.2, "end": 3467.2, "text": " 
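Before wrapping everything in value objects, the arithmetic of this forward pass is just the following; the concrete inputs and weights (x1=2, x2=0, w1=-3, w2=1) are the ones this example ends up using, and 6.88 is the rounded "nice numbers" bias mentioned shortly after:

```python
import math

x1, x2 = 2.0, 0.0          # inputs
w1, w2 = -3.0, 1.0         # weights (synaptic strengths)
b = 6.88                   # bias (the lecture picks a slightly more precise value)

x1w1 = x1 * w1
x2w2 = x2 * w2
n = x1w1 + x2w2 + b        # cell body raw activation
o = math.tanh(n)           # squashed output
print(n, o)                # ~0.88 and ~0.71
```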
exponentiation"}, {"start": 3467.2, "end": 3471.8399999999997, "text": " And we could return the exp of a value instead of a tanh of a value"}, {"start": 3472.48, "end": 3475.52, "text": " Because if we had exp then we have everything else that we need"}, {"start": 3476.08, "end": 3479.4399999999996, "text": " So because we know how to add and we know how to"}, {"start": 3480.3999999999996, "end": 3481.4399999999996, "text": " um"}, {"start": 3481.4399999999996, "end": 3483.3599999999997, "text": " We know how to add and we know how to multiply"}, {"start": 3483.3599999999997, "end": 3486.64, "text": " So we'd be able to create tanh if we knew how to exp"}, {"start": 3486.96, "end": 3489.3599999999997, "text": " But for the purposes of this example, I specifically wanted to"}, {"start": 3489.36, "end": 3491.1200000000003, "text": " Show you"}, {"start": 3491.1200000000003, "end": 3495.52, "text": " That we don't necessarily need to have the most atomic pieces in"}, {"start": 3496.1600000000003, "end": 3496.96, "text": " um"}, {"start": 3496.96, "end": 3501.36, "text": " In this value object we can actually create functions at arbitrary"}, {"start": 3503.2000000000003, "end": 3505.92, "text": " Points of abstraction they can be complicated functions"}, {"start": 3506.0, "end": 3509.6800000000003, "text": " But they can also be very very simple functions like a plus and it's totally up to us"}, {"start": 3510.08, "end": 3513.84, "text": " The only thing that matters is that we know how to differentiate through any one function"}, {"start": 3513.84, "end": 3519.36, "text": " So we take some inputs and we make an output and it can be an arbitrarily complex function"}, {"start": 3519.92, "end": 3522.96, "text": " As long as you know how to compute the local derivative"}, {"start": 3523.04, "end": 3526.88, "text": " If you know the local derivative of how the inputs impact the output then that's all you need"}, {"start": 3527.36, "end": 3530.96, "text": " So we're going to cluster up all of this expression"}, {"start": 3531.1200000000003, "end": 3533.28, "text": " And we're not going to break it down to its atomic pieces"}, {"start": 3533.28, "end": 3535.28, "text": " We're just going to directly implement tanh"}, {"start": 3535.6000000000004, "end": 3537.6000000000004, "text": " So let's do that"}, {"start": 3537.28, "end": 3539.28, "text": " def tanh"}, {"start": 3539.28, "end": 3541.28, "text": " And then out will be a value"}, {"start": 3541.28, "end": 3543.28, "text": " Of"}, {"start": 3543.6800000000003, "end": 3545.6800000000003, "text": " And we need this expression here. 
So"}, {"start": 3546.0800000000004, "end": 3548.0800000000004, "text": " um"}, {"start": 3548.5600000000004, "end": 3551.0400000000004, "text": " Let me actually copy-based"}, {"start": 3554.2400000000002, "end": 3560.0800000000004, "text": " What's graph n which is a solid data and then this I believe is the 10h"}, {"start": 3561.6000000000004, "end": 3563.6000000000004, "text": " Math dot x off"}, {"start": 3564.7200000000003, "end": 3566.0800000000004, "text": " two"}, {"start": 3566.0800000000004, "end": 3567.36, "text": " No n that"}, {"start": 3567.36, "end": 3571.76, "text": " m minus one over two n plus one maybe I can call this x"}, {"start": 3573.1200000000003, "end": 3575.1200000000003, "text": " Just that it matches exactly"}, {"start": 3575.76, "end": 3578.7200000000003, "text": " Okay, and now this will be t"}, {"start": 3580.4, "end": 3583.36, "text": " And uh children of this node they're just one child"}, {"start": 3584.1600000000003, "end": 3587.76, "text": " And I'm wrapping it in a tuple so this is a couple of one object just self"}, {"start": 3588.7200000000003, "end": 3591.36, "text": " And here the name of this operation will be 10h"}, {"start": 3592.32, "end": 3594.32, "text": " And we're going to return that"}, {"start": 3594.32, "end": 3596.32, "text": " Okay"}, {"start": 3598.6400000000003, "end": 3603.76, "text": " So now values should be implementing 10h and now it's rolled away down here"}, {"start": 3604.6400000000003, "end": 3608.48, "text": " And we can actually do n dot 10h and that's going to return the 10h"}, {"start": 3609.28, "end": 3614.1600000000003, "text": " Output of n and now we should be able to draw that of oh not of n"}, {"start": 3614.88, "end": 3616.88, "text": " So let's see how that worked"}, {"start": 3618.8, "end": 3621.04, "text": " There we go and went through 10h"}, {"start": 3621.04, "end": 3623.36, "text": " To produce this up it"}, {"start": 3624.32, "end": 3626.32, "text": " So now 10h is a"}, {"start": 3626.32, "end": 3627.92, "text": " sort of"}, {"start": 3627.92, "end": 3631.2799999999997, "text": " Our little micrograt supported node here as an operation"}, {"start": 3633.2799999999997, "end": 3635.68, "text": " And as long as we know derivative of 10h"}, {"start": 3636.32, "end": 3638.32, "text": " Then we'll be able to back propagate through it"}, {"start": 3638.32, "end": 3640.32, "text": " Now let's see this 10h in action"}, {"start": 3640.4, "end": 3644.24, "text": " Currently, it's not squashing too much because the input to it is pretty low"}, {"start": 3644.8, "end": 3647.36, "text": " So if the bias was increased to say eight"}, {"start": 3647.36, "end": 3652.32, "text": " Then we'll see that what's flowing into the 10h now is"}, {"start": 3653.2000000000003, "end": 3654.48, "text": " two"}, {"start": 3654.48, "end": 3657.04, "text": " And 10h is squashing to point nine six"}, {"start": 3657.04, "end": 3659.92, "text": " So we're already hitting the tail of this 10h"}, {"start": 3659.92, "end": 3662.88, "text": " And it will sort of smoothly go up to one and then plateau out over there"}, {"start": 3663.44, "end": 3665.52, "text": " Okay, so now I'm going to do something slightly strange"}, {"start": 3665.92, "end": 3669.2000000000003, "text": " I'm going to change this bias from eight to this number"}, {"start": 3669.84, "end": 3671.84, "text": " 6.88 etc"}, {"start": 3671.92, "end": 3676.48, "text": " And I'm going to do this for specific reasons because we're about to start back propagation"}, {"start": 3676.48, 
"end": 3679.92, "text": " And I want to make sure that our numbers come out nice"}, {"start": 3680.16, "end": 3684.32, "text": " They're not like very crazy numbers. They're nice numbers that we can sort of understand in our head"}, {"start": 3684.8, "end": 3686.8, "text": " Let me also add pose label"}, {"start": 3687.04, "end": 3689.04, "text": " Oh, it's short for output here"}, {"start": 3690.16, "end": 3691.68, "text": " So that's the R"}, {"start": 3691.68, "end": 3692.88, "text": " Okay, so"}, {"start": 3692.88, "end": 3695.2, "text": " 28 flows into 10h comes up point seven"}, {"start": 3695.2, "end": 3699.52, "text": " So now we're going to do back propagation and we're going to fill in all the gradients"}, {"start": 3700.4, "end": 3705.76, "text": " So what is the derivative or with respect to all the inputs here"}, {"start": 3705.76, "end": 3710.8, "text": " And of course in a typical neural network setting what we really care about the most is the derivative of"}, {"start": 3711.44, "end": 3719.0400000000004, "text": " These neurons on the weights specifically the W2 and W1 because those are the weights that we're going to be changing part of the optimization"}, {"start": 3719.76, "end": 3722.8, "text": " And the other thing that we have to remember is here we have only single neuron"}, {"start": 3722.96, "end": 3725.5200000000004, "text": " But in the neural net you typically have many neurons and they're connected"}, {"start": 3727.2000000000003, "end": 3732.0, "text": " So this is only like a one small neuron a piece of a much bigger puzzle and eventually there's a loss function"}, {"start": 3732.0, "end": 3737.84, "text": " That sort of measures the accuracy of the neural net and we're back propagating with respect to that accuracy and trying to increase it"}, {"start": 3739.28, "end": 3741.76, "text": " So let's start off back propagation here and and"}, {"start": 3742.64, "end": 3749.68, "text": " What is the derivative? 
Oh with respect to oh the base case sort of we know always is that the gradient is just one point there"}, {"start": 3750.56, "end": 3758.32, "text": " So let me fill it in and then let me split out the drawing function"}, {"start": 3758.32, "end": 3760.48, "text": " um here"}, {"start": 3763.76, "end": 3765.76, "text": " And then here sell"}, {"start": 3767.36, "end": 3769.36, "text": " Clear this output here, okay"}, {"start": 3770.2400000000002, "end": 3773.36, "text": " So now when we draw oh we'll see that oh that grad is one"}, {"start": 3774.0800000000004, "end": 3776.0800000000004, "text": " So now we're going to back propagate through the 10 H"}, {"start": 3776.7200000000003, "end": 3780.6400000000003, "text": " So to back propagate through 10 H we need to know the local derivative of 10 H"}, {"start": 3780.64, "end": 3787.3599999999997, "text": " So if we have that oh is 10 H of n"}, {"start": 3788.56, "end": 3791.12, "text": " Then what is d oh by d n?"}, {"start": 3792.08, "end": 3798.48, "text": " Now what you could do is you could come here and you could take this expression and you could do your calculus derivative taking"}, {"start": 3799.2, "end": 3800.64, "text": " Um, and that would work"}, {"start": 3800.8799999999997, "end": 3803.2, "text": " But we can also just scroll down with the pd i here"}, {"start": 3803.8399999999997, "end": 3807.68, "text": " Into a section that hopefully tells us that derivative"}, {"start": 3807.68, "end": 3809.12, "text": " uh"}, {"start": 3809.12, "end": 3810.8799999999997, "text": " d by dx of 10 H of x is"}, {"start": 3811.8399999999997, "end": 3814.56, "text": " Any of these I like this one one minus 10 H square of x"}, {"start": 3815.3599999999997, "end": 3818.8799999999997, "text": " So this is one minus 10 H of x squared"}, {"start": 3819.52, "end": 3826.24, "text": " So basically what this is saying is that d oh by d n is one minus 10 H"}, {"start": 3827.6, "end": 3829.12, "text": " often"}, {"start": 3829.12, "end": 3831.12, "text": " squared"}, {"start": 3831.12, "end": 3833.68, "text": " And we already have 10 H of n. It's just oh"}, {"start": 3833.68, "end": 3840.48, "text": " So it's one minus oh squared. So it was the output here. So the output is this number"}, {"start": 3841.9199999999996, "end": 3847.04, "text": " Odadega is this number and then"}, {"start": 3848.24, "end": 3854.96, "text": " What this is saying is that d oh by d n is one minus this squared. So one minus Odadega squared"}, {"start": 3856.56, "end": 3858.56, "text": " It's point five conveniently"}, {"start": 3858.56, "end": 3863.36, "text": " So the local derivative of this 10 H operation here is point five and"}, {"start": 3864.4, "end": 3870.72, "text": " uh, so that would be d oh by d n so we can fill in that n dot grad"}, {"start": 3873.44, "end": 3875.44, "text": " Is point five. 
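Numerically, this first chain-rule step through tanh is just the following (o comes out to roughly 0.7071 here, so 1 - o**2 is about 0.5):

```python
o_data = 0.7071   # output of tanh from the forward pass (approximately)
o_grad = 1.0      # base case: do/do = 1

# local derivative of tanh: d(tanh(n))/dn = 1 - tanh(n)**2 = 1 - o**2
n_grad = (1 - o_data**2) * o_grad
print(n_grad)     # ~0.5
```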
We'll just fill in"}, {"start": 3882.56, "end": 3884.56, "text": " So this is exactly point five one half"}, {"start": 3884.56, "end": 3887.7599999999998, "text": " So now we're going to continue the back propagation"}, {"start": 3889.36, "end": 3891.2, "text": " This is point five and this is a plus node"}, {"start": 3892.24, "end": 3896.08, "text": " So how is back prop going to what is back prop going to do here"}, {"start": 3896.72, "end": 3901.2799999999997, "text": " And if you remember our previous example a plus is just a distributor of gradient"}, {"start": 3901.84, "end": 3904.7999999999997, "text": " So this gradient will simply flow to both of these equally"}, {"start": 3905.36, "end": 3910.16, "text": " And that's because the local derivative of this operation is one for every one of its nodes"}, {"start": 3910.4, "end": 3912.24, "text": " So one times point five is point five"}, {"start": 3912.24, "end": 3917.52, "text": " So therefore we know that this node here which we called this"}, {"start": 3918.7999999999997, "end": 3923.68, "text": " It's grad is just point five and we know that b dot grad is also point five"}, {"start": 3924.9599999999996, "end": 3926.9599999999996, "text": " So let's set those and let's draw"}, {"start": 3928.9599999999996, "end": 3934.3199999999997, "text": " So those are point five continuing. We have another plus point five again. We'll just distribute you"}, {"start": 3934.32, "end": 3941.84, "text": " So point five will flow to both of these so we can set theirs"}, {"start": 3943.76, "end": 3946.88, "text": " x2w2 as well that grad is point five"}, {"start": 3948.0800000000004, "end": 3950.32, "text": " And let's read wrong. Pluses are my favorite"}, {"start": 3950.7200000000003, "end": 3952.48, "text": " operations to back propagate through because"}, {"start": 3953.28, "end": 3955.04, "text": " it's very simple"}, {"start": 3955.2000000000003, "end": 3957.84, "text": " So now it's flowing into these expressions is point five"}, {"start": 3957.92, "end": 3962.4, "text": " And so really again keep in mind what the derivative is telling us at every point in time along here"}, {"start": 3962.4, "end": 3966.64, "text": " This is saying that if we want the output of this neuron to increase"}, {"start": 3968.1600000000003, "end": 3973.84, "text": " Then the influence on these expressions is positive on the output both of them are positive"}, {"start": 3977.12, "end": 3979.12, "text": " Contribution to the output"}, {"start": 3980.64, "end": 3983.6, "text": " So now back propagating to x2 and w2 first"}, {"start": 3984.32, "end": 3988.4, "text": " This is a times node so we know that the local derivative is no the other term"}, {"start": 3988.4, "end": 3991.6, "text": " So if we want to calculate x2 dot grad"}, {"start": 3992.88, "end": 3995.28, "text": " Then can you think through what it's going to be"}, {"start": 4001.04, "end": 4004.96, "text": " So x2 dot grad will be w2 dot data times"}, {"start": 4006.4, "end": 4009.36, "text": " This x2w2 dot grad right"}, {"start": 4011.2000000000003, "end": 4013.84, "text": " And w2 dot grad will be"}, {"start": 4013.84, "end": 4018.96, "text": " x2 dot data times x2w2 dot grad"}, {"start": 4021.44, "end": 4024.2400000000002, "text": " Right, so that's the little local piece of chain rule"}, {"start": 4027.1200000000003, "end": 4029.1200000000003, "text": " Let's set them and let's redraw"}, {"start": 4029.92, "end": 4035.04, "text": " So here we see that the gradient on our weight two is zero because x2's 
data was zero"}, {"start": 4035.76, "end": 4040.0, "text": " Right, but x2 will have the gradient point five because data here was one"}, {"start": 4040.0, "end": 4044.96, "text": " And so what's interesting here, right is because the input x2 was zero"}, {"start": 4045.44, "end": 4047.44, "text": " And because of the way the times works"}, {"start": 4048.16, "end": 4052.0, "text": " Um, of course this gradient will be zero and think about intuitively why that is"}, {"start": 4053.2, "end": 4055.6, "text": " Derbit it always tells us the influence of"}, {"start": 4056.16, "end": 4061.04, "text": " This on the final output if I will w2 how is the output changing?"}, {"start": 4061.6, "end": 4063.84, "text": " It's not changing because we're multiplying by zero"}, {"start": 4064.48, "end": 4068.08, "text": " So because it's not changing there is no derivative and zero is the correct answer"}, {"start": 4068.08, "end": 4070.88, "text": " Because we're multiplying or swashing with that zero"}, {"start": 4072.3199999999997, "end": 4077.04, "text": " And let's do it here point five should come here and flow through this times"}, {"start": 4077.84, "end": 4080.64, "text": " And so we'll have that x1 dot grad is"}, {"start": 4082.08, "end": 4085.12, "text": " Can you think through a little bit what what this should be"}, {"start": 4087.44, "end": 4091.84, "text": " The local derivative of times with respect to x1 is going to be w1"}, {"start": 4092.72, "end": 4094.72, "text": " So w1's data times"}, {"start": 4094.72, "end": 4097.679999999999, "text": " x1 w1 dot grad"}, {"start": 4098.96, "end": 4102.48, "text": " And w1 dot grad will be x1 dot data times"}, {"start": 4103.679999999999, "end": 4105.679999999999, "text": " x1 w2 w1 dot grad"}, {"start": 4107.36, "end": 4109.36, "text": " Let's see what those came out to be"}, {"start": 4109.36, "end": 4113.76, "text": " So this is point five. So this would be negative 1.5 and this would be one"}, {"start": 4114.719999999999, "end": 4118.88, "text": " And we backpropagated through this expression these are the actual final derivatives"}, {"start": 4119.2, "end": 4122.5599999999995, "text": " So if we want this neurons output to increase"}, {"start": 4122.56, "end": 4125.52, "text": " We know that what's necessary is that"}, {"start": 4126.56, "end": 4126.56, "text": " uh"}, {"start": 4127.200000000001, "end": 4128.72, "text": " W2 we have no gradient"}, {"start": 4128.72, "end": 4131.120000000001, "text": " W2 doesn't actually matter to this neuron right now"}, {"start": 4131.6, "end": 4134.56, "text": " But this neuron this weight should uh go up"}, {"start": 4135.200000000001, "end": 4138.96, "text": " So if this weight goes up then this neuron's output would have gone up"}, {"start": 4139.52, "end": 4141.84, "text": " And proportionally because the gradient is one"}, {"start": 4141.92, "end": 4145.120000000001, "text": " Okay, so during the backpropagation manual is obviously ridiculous"}, {"start": 4145.200000000001, "end": 4147.76, "text": " So we are now going to put an end to this suffering"}, {"start": 4147.76, "end": 4149.76, "text": " And we're going to see how we can implement"}, {"start": 4149.76, "end": 4154.400000000001, "text": " uh the backward pass a bit more automatically. 
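Putting the whole manual pass through this neuron in one place (plain floats, same inputs and weights as assumed above; the intermediate sum is named x1w1x2w2 purely for illustration): plus nodes route the incoming gradient unchanged, and times nodes scale it by the other operand's data.

```python
x1, x2 = 2.0, 0.0
w1, w2 = -3.0, 1.0

o_grad = 1.0
n_grad = 0.5                      # through tanh: (1 - o**2) * o_grad

# plus nodes just route the gradient to their children
x1w1x2w2_grad = n_grad            # 0.5
b_grad = n_grad                   # 0.5
x1w1_grad = x1w1x2w2_grad         # 0.5
x2w2_grad = x1w1x2w2_grad         # 0.5

# times nodes multiply by the other operand's data
x1_grad = w1 * x1w1_grad          # -1.5
w1_grad = x1 * x1w1_grad          #  1.0
x2_grad = w2 * x2w2_grad          #  0.5
w2_grad = x2 * x2w2_grad          #  0.0  (x2 is zero, so w2 can't influence the output)
print(x1_grad, w1_grad, x2_grad, w2_grad)
```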
We're not going to be doing all of it manually out here"}, {"start": 4154.96, "end": 4159.6, "text": " It's now pretty obvious to us by example how these pluses and times are backpropagated ingredients"}, {"start": 4160.0, "end": 4162.0, "text": " So let's go up to the value"}, {"start": 4162.24, "end": 4164.56, "text": " object and we're going to start"}, {"start": 4164.64, "end": 4168.16, "text": " codifying what we've seen uh in the examples below"}, {"start": 4169.6, "end": 4172.8, "text": " So we're going to do this by storing a special self-doubt backward"}, {"start": 4174.96, "end": 4177.6, "text": " And uh underscore backward and this will be a function"}, {"start": 4177.6, "end": 4184.400000000001, "text": " Which is going to do that little piece of chain rule at each little node that complete that took inputs and produced output"}, {"start": 4184.8, "end": 4186.160000000001, "text": " Uh we're going to store"}, {"start": 4186.8, "end": 4191.6, "text": " How we are going to chain the the outputs gradient into the inputs gradients"}, {"start": 4192.400000000001, "end": 4194.0, "text": " So by default"}, {"start": 4194.160000000001, "end": 4197.360000000001, "text": " This will be a function that uh doesn't do anything"}, {"start": 4198.08, "end": 4199.120000000001, "text": " Uh so um"}, {"start": 4200.0, "end": 4202.240000000001, "text": " And you can also see that here in the value in micrograd"}, {"start": 4203.360000000001, "end": 4205.52, "text": " So with this backward function"}, {"start": 4205.52, "end": 4207.52, "text": " And by default doesn't do anything"}, {"start": 4208.56, "end": 4209.52, "text": " This is a function"}, {"start": 4210.240000000001, "end": 4214.080000000001, "text": " And that would be sort of the case for example for leaf node for leaf node. There's nothing to do"}, {"start": 4215.92, "end": 4219.040000000001, "text": " But now if when we're creating these out values"}, {"start": 4219.280000000001, "end": 4223.280000000001, "text": " These out values are an addition of self and other"}, {"start": 4224.320000000001, "end": 4226.72, "text": " And so we're going to want to self set"}, {"start": 4227.280000000001, "end": 4231.84, "text": " Out's backward to be the function that propagates the gradient"}, {"start": 4231.84, "end": 4233.4400000000005, "text": " So"}, {"start": 4234.24, "end": 4235.4400000000005, "text": " So"}, {"start": 4235.4400000000005, "end": 4237.4400000000005, "text": " Let's define what should happen"}, {"start": 4240.56, "end": 4244.400000000001, "text": " And we're going to store it in a closure. 
Let's define what should happen when we call"}, {"start": 4245.28, "end": 4247.28, "text": " Out's grad"}, {"start": 4247.92, "end": 4249.92, "text": " for an addition"}, {"start": 4250.16, "end": 4252.16, "text": " Our job is to take"}, {"start": 4252.16, "end": 4256.64, "text": " Out's grad and propagate it into self-scrad and other dot grad"}, {"start": 4257.12, "end": 4259.360000000001, "text": " So basically we want to self-self grad to something"}, {"start": 4259.36, "end": 4261.36, "text": " And"}, {"start": 4261.36, "end": 4263.36, "text": " We want to set others that grad to something"}, {"start": 4264.639999999999, "end": 4265.92, "text": " Okay"}, {"start": 4265.92, "end": 4269.04, "text": " And the way we saw below how chain rule works"}, {"start": 4269.04, "end": 4274.08, "text": " We want to take the local derivative times the um sort of global derivative"}, {"start": 4274.08, "end": 4279.759999999999, "text": " I should call it which is the derivative of the final output of the expression with respect to out's data"}, {"start": 4281.28, "end": 4283.04, "text": " Respect to out"}, {"start": 4283.04, "end": 4285.04, "text": " so"}, {"start": 4285.04, "end": 4289.76, "text": " The local derivative of self in an addition is 1.0"}, {"start": 4289.76, "end": 4293.04, "text": " So it's just 1.0 times out's grad"}, {"start": 4294.48, "end": 4296.0, "text": " That's the chain rule"}, {"start": 4296.0, "end": 4298.64, "text": " And others that grad will be 1.0 times out grad"}, {"start": 4299.28, "end": 4302.16, "text": " And what you basically what you're seeing here is that out's grad"}, {"start": 4302.8, "end": 4309.04, "text": " Will simply be copied onto self-scrad and others grad as we saw happens for an addition operation"}, {"start": 4309.04, "end": 4314.56, "text": " So we're going to later call this function to propagate the gradient having done an addition"}, {"start": 4315.92, "end": 4319.36, "text": " Let's not do multiplication we're going to also define a dot backward"}, {"start": 4322.4, "end": 4325.36, "text": " And we're going to set its backward to be backward"}, {"start": 4327.92, "end": 4332.0, "text": " And we want to chain out grad into self-scrad"}, {"start": 4334.4, "end": 4336.4, "text": " And others that grad"}, {"start": 4336.4, "end": 4339.759999999999, "text": " And this will be a little piece of chain rule for multiplication"}, {"start": 4340.48, "end": 4342.5599999999995, "text": " So we'll have so what should it be?"}, {"start": 4343.36, "end": 4345.36, "text": " Can you think through"}, {"start": 4348.719999999999, "end": 4350.16, "text": " So what is the local derivative?"}, {"start": 4350.879999999999, "end": 4353.36, "text": " Here the local derivative was others that data"}, {"start": 4355.5199999999995, "end": 4356.719999999999, "text": " And then"}, {"start": 4356.719999999999, "end": 4361.04, "text": " There's other stuff data and then times out that grad that's chain rule"}, {"start": 4362.639999999999, "end": 4365.2, "text": " And here we have self-that data times out that grad"}, {"start": 4365.2, "end": 4366.5599999999995, "text": " That's what we've been doing"}, {"start": 4369.679999999999, "end": 4372.4, "text": " And finally here for 10h that backward"}, {"start": 4374.96, "end": 4378.32, "text": " And then we want to set out backwards to be just backward"}, {"start": 4380.639999999999, "end": 4382.0, "text": " And here we need to"}, {"start": 4382.72, "end": 4387.599999999999, "text": " Back-propagate we have out that grad and we want 
to chain it into self-that grad"}, {"start": 4389.76, "end": 4391.04, "text": " And self-that grad will be"}, {"start": 4391.04, "end": 4395.44, "text": " The local derivative of this operation that we've done here which is 10h"}, {"start": 4396.24, "end": 4402.16, "text": " And so we saw that the local gradient is 1 minus the 10h of x squared which here is t"}, {"start": 4403.76, "end": 4407.12, "text": " That's the local derivative because that's t is the output of this 10h"}, {"start": 4407.5199999999995, "end": 4409.5199999999995, "text": " So 1 minus t squared is the local derivative"}, {"start": 4410.08, "end": 4412.0, "text": " And then gradient"}, {"start": 4412.56, "end": 4414.32, "text": " Has to be multiplied because of the chain rule"}, {"start": 4414.96, "end": 4418.0, "text": " So out grad is chained through the local gradient into self-that grad"}, {"start": 4418.0, "end": 4421.28, "text": " And that should be basically it"}, {"start": 4421.28, "end": 4424.0, "text": " So we're going to redefine our value node"}, {"start": 4424.96, "end": 4426.56, "text": " We're going to swing all the way down here"}, {"start": 4428.16, "end": 4432.0, "text": " And we're going to redefine our expression"}, {"start": 4432.72, "end": 4434.4, "text": " Make sure that all the grads are zero"}, {"start": 4435.44, "end": 4438.48, "text": " Okay, but now we don't have to do this manually anymore"}, {"start": 4439.84, "end": 4443.12, "text": " We are going to basically be calling the dot backward in the right order"}, {"start": 4444.16, "end": 4447.2, "text": " So first we want to call oaths"}, {"start": 4447.2, "end": 4449.2, "text": " dot backward"}, {"start": 4454.08, "end": 4456.639999999999, "text": " So o was the outcome of 10h"}, {"start": 4458.0, "end": 4461.04, "text": " Right so column oaths that back those those backward"}, {"start": 4462.32, "end": 4465.04, "text": " Will be this function. This is what it will do"}, {"start": 4466.08, "end": 4468.08, "text": " Now we have to be careful"}, {"start": 4468.08, "end": 4474.72, "text": " Because there's times out that grad and out that grad remember is initialized to zero"}, {"start": 4478.96, "end": 4481.76, "text": " So here we see grad zero. So as a base case"}, {"start": 4482.32, "end": 4485.44, "text": " We need to set oath dot grad to 1.0"}, {"start": 4486.72, "end": 4488.72, "text": " To initialize this with one"}, {"start": 4493.6, "end": 4495.28, "text": " And then once this is one"}, {"start": 4495.28, "end": 4501.599999999999, "text": " We can call o dot backward and what that should do is it should propagate this grad through 10h"}, {"start": 4502.24, "end": 4508.8, "text": " So the local derivative times the global derivative which is initialize at one. So this should"}, {"start": 4511.28, "end": 4513.28, "text": " Um"}, {"start": 4515.84, "end": 4517.2, "text": " Uh, no"}, {"start": 4517.2, "end": 4521.679999999999, "text": " So I thought about redoing it but I figured I should just leave the error in here because it's pretty funny"}, {"start": 4522.16, "end": 4524.16, "text": " Why is not I object not collable"}, {"start": 4524.16, "end": 4526.16, "text": " Uh, it's because"}, {"start": 4527.2, "end": 4532.639999999999, "text": " I screwed up we're trying to save these functions. 
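Collected together, the closures described here look roughly like this: a compact sketch of the Value class with _backward stored on every result node, written directly with += to anticipate the accumulation fix discussed later in the lecture.

```python
import math

class Value:
    def __init__(self, data, _children=(), _op='', label=''):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None     # leaf nodes have nothing to propagate
        self._prev = set(_children)
        self._op = _op
        self.label = label

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other), '+')
        def _backward():
            # local derivative of a sum is 1.0 for each input: just route out.grad
            self.grad += 1.0 * out.grad
            other.grad += 1.0 * out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other), '*')
        def _backward():
            # local derivative of a product is the other operand's data
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def tanh(self):
        x = self.data
        t = (math.exp(2*x) - 1) / (math.exp(2*x) + 1)
        out = Value(t, (self, ), 'tanh')
        def _backward():
            # local derivative of tanh is 1 - tanh(x)**2, i.e. 1 - t**2
            self.grad += (1 - t**2) * out.grad
        out._backward = _backward
        return out
```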
So this is correct this here"}, {"start": 4533.44, "end": 4537.28, "text": " We don't want to call the function because that returns none these functions return none"}, {"start": 4537.68, "end": 4539.68, "text": " Which is want to store the function"}, {"start": 4539.76, "end": 4541.76, "text": " So let me redefine the value object"}, {"start": 4542.32, "end": 4545.76, "text": " And then we're going to come back and redefine the expression draw dot"}, {"start": 4546.72, "end": 4549.04, "text": " Everything is great. O dot grad is one"}, {"start": 4550.24, "end": 4552.24, "text": " O dot grad is one and now"}, {"start": 4552.24, "end": 4554.639999999999, "text": " Now this should work of course"}, {"start": 4555.84, "end": 4557.84, "text": " Okay, so all that backward should have"}, {"start": 4558.639999999999, "end": 4562.639999999999, "text": " This grad should now be point five if we withdraw and everything was correctly"}, {"start": 4563.28, "end": 4565.28, "text": " point five yay"}, {"start": 4565.36, "end": 4567.92, "text": " Okay, so now we need to call ns dot grad"}, {"start": 4570.4, "end": 4572.4, "text": " ns dot backward sorry"}, {"start": 4573.2, "end": 4574.639999999999, "text": " ns backward"}, {"start": 4574.719999999999, "end": 4576.719999999999, "text": " So that seems to have worked"}, {"start": 4577.92, "end": 4579.599999999999, "text": " So ns dot backward"}, {"start": 4579.6, "end": 4583.200000000001, "text": " Rapted the gradient to both of these so this is looking great"}, {"start": 4584.72, "end": 4586.96, "text": " Now we can of course call b dot grad"}, {"start": 4587.68, "end": 4589.68, "text": " Be the backwards or"}, {"start": 4590.320000000001, "end": 4592.08, "text": " What's going to happen?"}, {"start": 4592.160000000001, "end": 4594.320000000001, "text": " Well b doesn't have it backward"}, {"start": 4594.320000000001, "end": 4596.88, "text": " Bees backward because b is a leaf node"}, {"start": 4597.68, "end": 4600.88, "text": " Bees backward is by initialization the empty function"}, {"start": 4601.68, "end": 4605.120000000001, "text": " So nothing would happen but we can call call it on it"}, {"start": 4606.0, "end": 4608.0, "text": " But when we call"}, {"start": 4608.0, "end": 4610.0, "text": " This one"}, {"start": 4610.32, "end": 4612.32, "text": " backwards"}, {"start": 4613.6, "end": 4616.72, "text": " Then we expect this point five to get further routed"}, {"start": 4617.6, "end": 4619.92, "text": " Right, so there we go point five point five"}, {"start": 4621.04, "end": 4622.8, "text": " And then finally"}, {"start": 4622.8, "end": 4627.12, "text": " We want to call it here on x2w2"}, {"start": 4630.4, "end": 4632.4, "text": " And on x1w1"}, {"start": 4632.4, "end": 4638.639999999999, "text": " Let's do both of those and there we go"}, {"start": 4639.839999999999, "end": 4645.04, "text": " So we get 0.5 negative 1.5 and 1 exactly as we did before"}, {"start": 4645.5199999999995, "end": 4649.5199999999995, "text": " But now we've done it through calling that backward"}, {"start": 4651.12, "end": 4652.5599999999995, "text": " Sir manually"}, {"start": 4652.5599999999995, "end": 4658.0, "text": " So we have one last piece to get rid of which is us calling underscore backward manually"}, {"start": 4658.24, "end": 4660.24, "text": " So let's think through what we are actually doing"}, {"start": 4660.24, "end": 4661.44, "text": " Um"}, {"start": 4661.44, "end": 4665.92, "text": " We've laid out a mathematical expression and now we're trying to go 
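With those closures in place, the manual pass done here amounts to calling _backward from the output toward the leaves, for example (this assumes the Value class sketched above; x1w1x2w2 is again just an illustrative name for the intermediate sum):

```python
x1 = Value(2.0);  w1 = Value(-3.0)
x2 = Value(0.0);  w2 = Value(1.0)
b  = Value(6.88)
x1w1 = x1 * w1
x2w2 = x2 * w2
x1w1x2w2 = x1w1 + x2w2
n = x1w1x2w2 + b
o = n.tanh()

o.grad = 1.0          # base case, otherwise everything stays multiplied by zero
o._backward()         # fills n.grad via the tanh local derivative
n._backward()         # plus node: routes the gradient to x1w1x2w2 and b
x1w1x2w2._backward()  # plus node: routes it on to x1w1 and x2w2
x1w1._backward()      # times node: fills x1.grad and w1.grad
x2w2._backward()      # times node: fills x2.grad and w2.grad
print(x1.grad, w1.grad, x2.grad, w2.grad)  # ≈ -1.5, 1.0, 0.5, 0.0
```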
backwards through that expression"}, {"start": 4666.5599999999995, "end": 4673.2, "text": " Um, so going backwards through the expression just means that we never want to call a dot backward for any node"}, {"start": 4674.16, "end": 4677.28, "text": " Before we've done sort of um"}, {"start": 4678.0, "end": 4680.0, "text": " Everything after it"}, {"start": 4680.0, "end": 4684.0, "text": " So we have to do everything after it before ever going to call dot backward on any one node"}, {"start": 4684.0, "end": 4687.92, "text": " We have to get all of its full dependencies everything that it depends on has to"}, {"start": 4687.92, "end": 4691.36, "text": " Propagate to it before we can continue that propagation"}, {"start": 4692.4, "end": 4697.28, "text": " So this ordering of graphs can be achieved using something called topological sort"}, {"start": 4698.0, "end": 4700.16, "text": " So topological sort"}, {"start": 4700.16, "end": 4702.64, "text": " Is basically a laying out of a graph"}, {"start": 4703.2, "end": 4706.24, "text": " Such that all the edges go only from left to right basically"}, {"start": 4706.8, "end": 4708.32, "text": " So here we have a graph"}, {"start": 4708.32, "end": 4710.96, "text": " So direction as such like a graph a dag"}, {"start": 4711.68, "end": 4715.2, "text": " And this is two different topological orders of it, I believe"}, {"start": 4715.2, "end": 4721.12, "text": " Where basically you'll see that it's a laying out of the nodes such that all the edges go only one way from left to right"}, {"start": 4722.08, "end": 4727.5199999999995, "text": " And implementing topological sort you can look in Wikipedia and so on. I'm not going to go through it in detail"}, {"start": 4728.96, "end": 4732.5599999999995, "text": " But basically this is what builds a topological graph"}, {"start": 4733.28, "end": 4737.76, "text": " Um, we maintain a set of visited nodes and then we are"}, {"start": 4738.4, "end": 4739.599999999999, "text": " um"}, {"start": 4739.6, "end": 4745.120000000001, "text": " Going through starting at some root node which for us is oh, that's what we want to start the top logical sort"}, {"start": 4745.76, "end": 4751.280000000001, "text": " And starting at oh we go through all of its children and we need to lay them out from left to right"}, {"start": 4752.8, "end": 4760.0, "text": " And basically this starts at oh if it's not visited then it marks it as visited and then it iterates through all of its children"}, {"start": 4760.88, "end": 4763.360000000001, "text": " And calls build topological on them"}, {"start": 4764.320000000001, "end": 4767.76, "text": " And then uh after it's gone through all the children it adds itself"}, {"start": 4767.76, "end": 4769.2, "text": " So basically"}, {"start": 4770.08, "end": 4776.16, "text": " This node that we're going to call it on like say oh is only going to add itself to the topical list"}, {"start": 4776.64, "end": 4781.280000000001, "text": " After all of the children have been processed and that's how this function is guaranteeing"}, {"start": 4781.84, "end": 4787.52, "text": " That you're only going to be in the list once all your children are in the list and that's the invariant that is being maintained"}, {"start": 4788.0, "end": 4791.280000000001, "text": " So if we built up on oh and then inspect this list"}, {"start": 4792.24, "end": 4795.76, "text": " We're going to see that it ordered our value objects"}, {"start": 4795.76, "end": 4800.96, "text": " And the last one is the value of 0.7 which is the 
output"}, {"start": 4801.92, "end": 4808.08, "text": " So this is oh and then this is n and then all the other nodes get laid out before it"}, {"start": 4809.84, "end": 4819.12, "text": " So that built the topological graph and really what we're doing now is we're just calling that underscore backward on all of the nodes in a topological order"}, {"start": 4819.92, "end": 4822.8, "text": " So if we just reset the gradients they're all zero"}, {"start": 4822.8, "end": 4830.4800000000005, "text": " So what did we do? We started by setting o.grad to be one"}, {"start": 4831.6, "end": 4833.6, "text": " That's that base case"}, {"start": 4833.76, "end": 4836.08, "text": " Then we built the topological order"}, {"start": 4838.56, "end": 4844.72, "text": " And then we went for node in reversed octopo"}, {"start": 4844.72, "end": 4853.2, "text": " Now in the reverse order because this list goes from you know we need to go through it in reverse order"}, {"start": 4854.08, "end": 4861.6, "text": " So starting at o node dot backward and this should be it"}, {"start": 4863.52, "end": 4864.72, "text": " There we go"}, {"start": 4865.68, "end": 4869.68, "text": " Those are the correct derivatives finally we are going to hide this functionality"}, {"start": 4869.68, "end": 4877.360000000001, "text": " So I'm going to copy this and we're going to hide it inside the value class because we don't want to have all that code lying around"}, {"start": 4878.320000000001, "end": 4884.400000000001, "text": " So instead of an underscore backward we're now going to define an actual backward so that backward without the underscore"}, {"start": 4886.320000000001, "end": 4888.400000000001, "text": " And that's going to do all the stuff that we just derived"}, {"start": 4889.12, "end": 4891.280000000001, "text": " So let me just clean this up a little bit. So"}, {"start": 4892.56, "end": 4894.56, "text": " We're first going to"}, {"start": 4894.56, "end": 4898.320000000001, "text": " Build the topological graph"}, {"start": 4899.120000000001, "end": 4901.120000000001, "text": " Starting at self"}, {"start": 4901.360000000001, "end": 4903.360000000001, "text": " So build topo of self"}, {"start": 4904.240000000001, "end": 4908.64, "text": " Will populate the topological order into the topo list which is a local variable"}, {"start": 4909.4400000000005, "end": 4911.6, "text": " Then we set self-adgrad to be one"}, {"start": 4913.120000000001, "end": 4918.320000000001, "text": " And then for each node in the reversed list so starting at us and going to all the children"}, {"start": 4919.280000000001, "end": 4921.120000000001, "text": " Uh, underscore backward"}, {"start": 4921.12, "end": 4926.48, "text": " And um, that should be it. 
So save"}, {"start": 4928.08, "end": 4930.08, "text": " Come down here we define"}, {"start": 4931.28, "end": 4933.28, "text": " Okay, all the grand are zero"}, {"start": 4933.76, "end": 4937.5199999999995, "text": " And now what we can do is oh, down backward without the underscore and"}, {"start": 4941.5199999999995, "end": 4945.12, "text": " There we go and that's uh, that's back propagation"}, {"start": 4945.12, "end": 4952.72, "text": " Please for one euro now we shouldn't be too happy with ourselves actually because we have a bad bug"}, {"start": 4953.12, "end": 4958.8, "text": " Um, and we have not surfaced the bug because of some specific conditions that we are have we have to think about right now"}, {"start": 4960.0, "end": 4962.48, "text": " So here's the simplest case that shows the bug"}, {"start": 4964.08, "end": 4966.08, "text": " Say I create a single node a"}, {"start": 4968.16, "end": 4970.16, "text": " And then I create a b that is e plus a"}, {"start": 4970.16, "end": 4974.16, "text": " And then I call it backward"}, {"start": 4974.96, "end": 4981.68, "text": " So what's gonna happen is a is three and then a is b is a plus a so there's two arrows on top of each other here"}, {"start": 4983.84, "end": 4989.28, "text": " Then we can see that b is of course the forward pass works b is just a plus a which is six"}, {"start": 4990.08, "end": 4992.08, "text": " But the gradient here is not actually correct"}, {"start": 4992.72, "end": 4994.72, "text": " That we calculate it automatically"}, {"start": 4996.08, "end": 4998.08, "text": " And that's because um"}, {"start": 4998.08, "end": 4999.28, "text": " You"}, {"start": 4999.28, "end": 5003.92, "text": " Of course, uh, just doing calculus in your head the derivative of b with respect to a"}, {"start": 5004.72, "end": 5006.72, "text": " should be uh two"}, {"start": 5007.76, "end": 5009.76, "text": " One plus one it's not one"}, {"start": 5010.96, "end": 5015.84, "text": " And totally what's happening here, right? 
So b is the result of a plus a and then we call backward on it"}, {"start": 5016.72, "end": 5019.2, "text": " So let's go up and see what that does"}, {"start": 5019.2, "end": 5026.5599999999995, "text": " um, b is a result of addition so out is b"}, {"start": 5028.0, "end": 5030.32, "text": " And then when we call backward what happened is"}, {"start": 5031.04, "end": 5033.84, "text": " self dot grad was set to one"}, {"start": 5034.48, "end": 5036.48, "text": " And then other dot grad was set to one"}, {"start": 5037.36, "end": 5042.639999999999, "text": " But because we're doing a plus a, self and other are actually the exact same object, this a object"}, {"start": 5042.64, "end": 5050.64, "text": " So we are overriding the gradient we are setting it to one and then we are setting it again to one and that's why it stays at"}, {"start": 5052.0, "end": 5053.200000000001, "text": " one"}, {"start": 5053.200000000001, "end": 5054.72, "text": " So that's a problem"}, {"start": 5054.72, "end": 5058.08, "text": " There's another way to see this in a little bit more complicated expression"}, {"start": 5061.6, "end": 5066.400000000001, "text": " So here we have a and b and then"}, {"start": 5066.4, "end": 5072.5599999999995, "text": " uh, d will be the multiplication of the two and e will be the addition of the two and um"}, {"start": 5073.12, "end": 5076.719999999999, "text": " Then we multiply e times d to get f and then we call f dot backward"}, {"start": 5077.759999999999, "end": 5080.16, "text": " And these gradients if you check will be incorrect"}, {"start": 5080.879999999999, "end": 5084.4, "text": " So fundamentally what's happening here again is um"}, {"start": 5085.2, "end": 5088.639999999999, "text": " Basically, we're going to see an issue anytime we use a variable more than once"}, {"start": 5089.28, "end": 5094.24, "text": " Until now in these expressions above every variable is used exactly once so we didn't see the issue"}, {"start": 5094.24, "end": 5098.5599999999995, "text": " But here if a variable is used more than once what's going to happen during backward pass"}, {"start": 5099.2, "end": 5102.48, "text": " We're backpropagating from f to e to d so far so good"}, {"start": 5102.96, "end": 5107.12, "text": " But now e's backward is called and it deposits its gradients into a and b"}, {"start": 5107.5199999999995, "end": 5113.04, "text": " But then we come back to d and call backward and it overrides those gradients at a and b"}, {"start": 5114.5599999999995, "end": 5116.5599999999995, "text": " So that's obviously a problem"}, {"start": 5117.2, "end": 5123.44, "text": " And the solution here if you look at the multi-variate case of the chain rule and its generalization there"}, {"start": 5123.44, "end": 5128.96, "text": " The solution there is basically that we have to accumulate these gradients, these gradients add"}, {"start": 5130.16, "end": 5132.879999999999, "text": " And so instead of setting those gradients"}, {"start": 5134.719999999999, "end": 5140.719999999999, "text": " We can simply do plus equals, we need to accumulate those gradients, plus equals plus equals"}, {"start": 5141.839999999999, "end": 5143.839999999999, "text": " plus equals"}, {"start": 5144.799999999999, "end": 5146.48, "text": " plus equals"}, {"start": 5146.48, "end": 5152.48, "text": " And this will be okay remember because we are initializing them at zero. 
So they start at zero and then any"}, {"start": 5153.5199999999995, "end": 5156.0, "text": " contribution that flows backwards"}, {"start": 5157.28, "end": 5158.879999999999, "text": " Will simply add"}, {"start": 5158.879999999999, "end": 5160.879999999999, "text": " So now if we redefine"}, {"start": 5161.44, "end": 5163.44, "text": " this one"}, {"start": 5163.839999999999, "end": 5169.36, "text": " Because of the plus equals this now works, because a dot grad started at zero and we called b dot backward"}, {"start": 5169.679999999999, "end": 5174.32, "text": " We deposit one and then we deposit one again and now this is two which is correct"}, {"start": 5174.32, "end": 5178.0, "text": " And here this will also work and we'll get correct gradients"}, {"start": 5178.48, "end": 5182.24, "text": " Because when we call e dot backward we will deposit the gradients from this branch"}, {"start": 5182.48, "end": 5186.5599999999995, "text": " And then we get back to d dot backward, it will deposit its own gradients"}, {"start": 5187.04, "end": 5189.679999999999, "text": " And then those gradients simply add on top of each other"}, {"start": 5190.16, "end": 5192.88, "text": " And so we just accumulate those gradients and that fixes the issue"}, {"start": 5193.44, "end": 5198.32, "text": " Okay, now before we move on let me actually do a bit of cleanup here and delete some of these"}, {"start": 5198.96, "end": 5200.32, "text": " some of the intermediate work"}, {"start": 5200.32, "end": 5201.44, "text": " So"}, {"start": 5201.44, "end": 5204.0, "text": " I'm not going to need any of this now that we've derived all of it"}, {"start": 5204.96, "end": 5205.759999999999, "text": " um"}, {"start": 5205.839999999999, "end": 5208.799999999999, "text": " We are going to keep this because I want to come back to it"}, {"start": 5209.839999999999, "end": 5211.44, "text": " Delete the tanh"}, {"start": 5211.44, "end": 5213.44, "text": " delete our manual example"}, {"start": 5214.0, "end": 5216.0, "text": " delete the step"}, {"start": 5216.0, "end": 5218.719999999999, "text": " delete this, keep the code that draws"}, {"start": 5219.839999999999, "end": 5224.24, "text": " And then delete this example and leave behind only the definition of Value"}, {"start": 5225.44, "end": 5228.799999999999, "text": " And now let's come back to this non-linearity here that we implemented, the tanh"}, {"start": 5228.8, "end": 5234.24, "text": " Now I told you that we could have broken down tanh into its explicit atoms"}, {"start": 5234.72, "end": 5237.28, "text": " In terms of other expressions if we had the exp function"}, {"start": 5237.76, "end": 5239.6, "text": " So if you remember tanh is defined like this"}, {"start": 5240.16, "end": 5242.72, "text": " And we chose to implement tanh as a single function"}, {"start": 5242.96, "end": 5246.400000000001, "text": " And we can do that because we know its derivative and we can back propagate through it"}, {"start": 5247.04, "end": 5250.8, "text": " But we can also break down tanh into an explicit function of exp"}, {"start": 5251.28, "end": 5255.84, "text": " And I would like to do that now because I want to prove to you that you get all the same results and all the same gradients"}, {"start": 5255.84, "end": 5259.76, "text": " Um, but also because it forces us to implement a few more expressions"}, {"start": 5260.0, "end": 5261.360000000001, "text": " It forces us to do"}, {"start": 5261.52, "end": 5265.2, "text": " Exponentiation, addition, subtraction, division and things like that
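To make the accumulation fix concrete, here is a minimal sketch of what the plus-equals change looks like inside __add__, assuming the Value class built so far (with a constructor taking the data, the children and an op label); the same change goes into every other _backward closure:

def __add__(self, other):
    out = Value(self.data + other.data, (self, other), '+')
    def _backward():
        # accumulate rather than overwrite, so a node used more than once
        # (like b = a + a) collects a contribution from every branch
        self.grad += 1.0 * out.grad
        other.grad += 1.0 * out.grad
    out._backward = _backward
    return out

# after this change, b = a + a followed by b.backward() gives a.grad == 2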
that"}, {"start": 5265.2, "end": 5267.4400000000005, "text": " And I think it's a good exercise to go through a few more of these"}, {"start": 5268.16, "end": 5270.16, "text": " Okay, so let's scroll up"}, {"start": 5270.16, "end": 5272.16, "text": " To the definition of value"}, {"start": 5272.24, "end": 5277.4400000000005, "text": " And here one thing that we currently can't do is we can do like a value of say 2.0"}, {"start": 5278.400000000001, "end": 5284.0, "text": " But we can't do you know here for example we want to add a constant one and we can't do something like this"}, {"start": 5284.0, "end": 5288.24, "text": " And we can't do it because it's just into object has no attribute data"}, {"start": 5288.64, "end": 5291.44, "text": " That's because a plus one comes right here to add"}, {"start": 5292.16, "end": 5297.44, "text": " And then other is the integer one and then here Python is trying to access one dot data"}, {"start": 5297.44, "end": 5298.72, "text": " And that's not a thing"}, {"start": 5298.72, "end": 5303.36, "text": " And that's because basically one is not a value object and we only have addition from value objects"}, {"start": 5303.92, "end": 5308.56, "text": " So as a matter of convenience so that we can create expressions like this and make them make sense"}, {"start": 5309.2, "end": 5311.2, "text": " We can simply do something like this"}, {"start": 5311.2, "end": 5317.12, "text": " Basically we let other alone if other is an instance of value"}, {"start": 5317.12, "end": 5320.96, "text": " But if it's not an instance of value we're going to assume that it's a number like an integer or float"}, {"start": 5320.96, "end": 5324.0, "text": " And we're going to simply wrap it in in value"}, {"start": 5324.16, "end": 5329.12, "text": " And then other will just become value of other and then other will have a data attribute and this should work"}, {"start": 5329.28, "end": 5332.72, "text": " So if I just say this read the farm value then this should work"}, {"start": 5333.36, "end": 5334.32, "text": " There we go"}, {"start": 5334.32, "end": 5338.16, "text": " Okay, now let's do the exact same thing for multiply because we can't do something like this"}, {"start": 5338.16, "end": 5344.4, "text": " Again for the exact same reason so we just have to go to mall and if other is"}, {"start": 5345.04, "end": 5347.04, "text": " Not a value then let's wrap it in value"}, {"start": 5347.68, "end": 5349.84, "text": " Let's redefine value and now this works"}, {"start": 5350.5599999999995, "end": 5353.599999999999, "text": " Now here's a kind of unfortunate and not obvious part"}, {"start": 5354.0, "end": 5358.4, "text": " A times two works we saw that but two times a is that going to work"}, {"start": 5359.68, "end": 5362.48, "text": " You'd expect it to write but actually it will not"}, {"start": 5363.04, "end": 5365.44, "text": " And the reason it won't is because Python doesn't know"}, {"start": 5365.44, "end": 5367.679999999999, "text": " Like when when you do a times two"}, {"start": 5368.5599999999995, "end": 5374.24, "text": " Basically um, so a times two Python will go and it will basically do something like a dot mall"}, {"start": 5374.799999999999, "end": 5381.28, "text": " Of two that's basically what we'll call but to it two times a is the same as two dot mall of a"}, {"start": 5382.0, "end": 5384.16, "text": " And it doesn't two can't multiply"}, {"start": 5385.12, "end": 5387.12, "text": " Value and so it's really confused about that"}, {"start": 5387.599999999999, 
"end": 5393.28, "text": " So instead what happens is in Python the way this works is you are free to define something called the armol"}, {"start": 5393.28, "end": 5394.48, "text": " and"}, {"start": 5394.48, "end": 5397.2, "text": " And armol is kind of like a fallback"}, {"start": 5397.36, "end": 5401.2, "text": " So if the Python can't do two times a it will check if"}, {"start": 5401.5199999999995, "end": 5402.719999999999, "text": " um"}, {"start": 5402.5599999999995, "end": 5407.5199999999995, "text": " If by any chance a knows how to multiply it too and that will be called into armol"}, {"start": 5408.88, "end": 5416.0, "text": " So because Python can't do two times a it will check is there an armol in value and because there is it will now call that"}, {"start": 5416.88, "end": 5420.08, "text": " And what we'll do here is we will swap the order of the operands"}, {"start": 5420.08, "end": 5427.28, "text": " So basically two times a will redirect to armol and armol will basically call it times two and that's how that will work"}, {"start": 5428.4, "end": 5432.16, "text": " So redefining that with armol two times a becomes four"}, {"start": 5432.64, "end": 5436.8, "text": " Okay, now looking at the other elements that we still need we need to know how to exponentiate and how to divide"}, {"start": 5437.2, "end": 5442.4, "text": " So let's first the explanation to the explanation part. We're going to introduce a single"}, {"start": 5443.12, "end": 5445.12, "text": " function x here"}, {"start": 5445.12, "end": 5452.64, "text": " And x is going to mirror 10h in the sense that it's a single single function that transform a single scalar value and outputs a single scalar value"}, {"start": 5453.2, "end": 5455.2, "text": " So we pop out the Python number"}, {"start": 5455.76, "end": 5458.72, "text": " We use method x to x-manitiate it create a new value object"}, {"start": 5459.36, "end": 5461.04, "text": " Everything that we've seen before"}, {"start": 5461.04, "end": 5463.92, "text": " The tricky part of course is how do you back propagate through e to the x?"}, {"start": 5464.88, "end": 5470.08, "text": " And uh, so here you can potentially pause the video and think about what should go here"}, {"start": 5470.08, "end": 5478.24, "text": " Okay, so basically we need to know what is the local derivative of e to the x"}, {"start": 5478.5599999999995, "end": 5481.92, "text": " So d by dx of e to the x is famously just e to the x"}, {"start": 5482.32, "end": 5486.24, "text": " And we've already just calculated e to the x and it's inside out that data"}, {"start": 5486.64, "end": 5488.16, "text": " So we can do about that data times"}, {"start": 5489.04, "end": 5491.12, "text": " And out that grad that's the chain"}, {"start": 5492.16, "end": 5494.48, "text": " So we're just chaining on to the current run grad"}, {"start": 5495.36, "end": 5498.24, "text": " And this is what the expression looks like it looks a little confusing"}, {"start": 5498.24, "end": 5500.88, "text": " But uh, this is what it is and that's the explanation"}, {"start": 5501.92, "end": 5504.8, "text": " So redefining we should not be able to call a dot x"}, {"start": 5505.5199999999995, "end": 5508.16, "text": " And uh, hopefully the backward pass works as well"}, {"start": 5508.32, "end": 5511.5199999999995, "text": " Okay, and the last thing we'd like to do of course is if we'd like to be able to divide"}, {"start": 5512.32, "end": 5518.16, "text": " Now I actually will implement something slightly more powerful than 
division because division is just a special case of"}, {"start": 5518.48, "end": 5520.08, "text": " Something a bit more powerful"}, {"start": 5520.08, "end": 5522.24, "text": " So in particular just by rearranging"}, {"start": 5522.8, "end": 5524.4, "text": " If we have some kind of a b equals"}, {"start": 5524.4, "end": 5530.719999999999, "text": " Uh, value of 4.0 here we'd like to basically be able to do a divide b and we'd like this to be able to give us 0.5"}, {"start": 5531.759999999999, "end": 5534.799999999999, "text": " Now division actually can be reshuffled as follows"}, {"start": 5535.44, "end": 5536.96, "text": " If we have a divide b"}, {"start": 5536.96, "end": 5539.2, "text": " That's actually the same as a multiplying 1 over b"}, {"start": 5539.92, "end": 5543.36, "text": " And that's the same as a multiplying b to the power of negative 1"}, {"start": 5544.4, "end": 5550.4, "text": " And so what I'd like to do instead is I basically would like to implement the operation of x to the k for some constant"}, {"start": 5550.4, "end": 5557.36, "text": " uh, k, so it's an integer or a float um, and we would like to be able to differentiate this and then as a special case"}, {"start": 5557.839999999999, "end": 5560.08, "text": " Uh, negative 1 will be division"}, {"start": 5560.96, "end": 5566.0, "text": " And so I'm doing that just because uh, it's more general and um, yeah, you might as well do it that way"}, {"start": 5566.48, "end": 5570.4, "text": " So basically what I'm saying is we can redefine uh, division"}, {"start": 5571.36, "end": 5573.36, "text": " Which we will put here somewhere"}, {"start": 5574.639999999999, "end": 5576.4, "text": " You know, we can put this here somewhere"}, {"start": 5576.4, "end": 5580.5599999999995, "text": " What I'm saying is that we can redefine division, so self divided by other"}, {"start": 5580.879999999999, "end": 5584.879999999999, "text": " It can actually be rewritten as self times other to the power of negative 1"}, {"start": 5585.839999999999, "end": 5587.599999999999, "text": " And now"}, {"start": 5587.759999999999, "end": 5590.719999999999, "text": " Value raised to the power of negative 1, we have to now define that"}, {"start": 5591.759999999999, "end": 5592.96, "text": " So here's"}, {"start": 5593.679999999999, "end": 5595.5199999999995, "text": " So we need to implement the pow function"}, {"start": 5596.08, "end": 5598.639999999999, "text": " Where am I going to put the pow function, maybe here somewhere"}, {"start": 5600.16, "end": 5602.16, "text": " So here it is, the pow function"}, {"start": 5602.16, "end": 5608.0, "text": " So this function will be called when we try to raise a Value to some power and other will be that power"}, {"start": 5608.8, "end": 5615.28, "text": " Now I'd like to make sure that other is only an int or a float, usually other is some kind of a different Value object"}, {"start": 5615.5199999999995, "end": 5619.84, "text": " But here other will be forced to be an int or a float otherwise the math"}, {"start": 5620.5599999999995, "end": 5624.16, "text": " Uh, won't work for what we're trying to achieve in this specific case"}, {"start": 5624.5599999999995, "end": 5628.639999999999, "text": " That would be a different derivative expression if we wanted other to be a Value"}, {"start": 5628.64, "end": 5636.56, "text": " So here we create the output Value which is just uh, you know, self dot data raised to the power of other, and other here could be for example negative 1"}, {"start": 5636.72, "end": 5638.72, "text": " 
That's what we are hoping to achieve"}, {"start": 5639.4400000000005, "end": 5647.6, "text": " And then uh, this is the backward stub and this is the fun part which is what is the uh chain rule expression here for back"}, {"start": 5648.08, "end": 5649.68, "text": " for um"}, {"start": 5649.68, "end": 5655.12, "text": " Backpropagating through the power function where we raise to the power of some kind of a constant"}, {"start": 5655.12, "end": 5660.8, "text": " So this is the exercise and maybe pause the video here and see if you can figure it out yourself as to what we should put here"}, {"start": 5667.04, "end": 5672.32, "text": " Okay, so um, you can actually go here and look at the derivative rules as an example"}, {"start": 5672.72, "end": 5678.4, "text": " And we see lots of the derivatives that you can hopefully know from calculus, in particular what we're looking for is the power rule"}, {"start": 5679.28, "end": 5683.84, "text": " Because that's telling us that if we're trying to take d by dx of x to the n which is what we're doing here"}, {"start": 5683.84, "end": 5688.72, "text": " Then that is just n times x to the n minus 1 right"}, {"start": 5689.6, "end": 5690.96, "text": " Okay"}, {"start": 5690.96, "end": 5695.12, "text": " So that's telling us about the local derivative of this power operation"}, {"start": 5696.08, "end": 5698.08, "text": " So all we want here"}, {"start": 5698.4800000000005, "end": 5705.2, "text": " Basically n is now other and self dot data is x and so this now becomes"}, {"start": 5706.16, "end": 5708.16, "text": " Other which is n times"}, {"start": 5708.72, "end": 5710.24, "text": " self dot data"}, {"start": 5710.400000000001, "end": 5712.400000000001, "text": " Which is now a Python int or float"}, {"start": 5713.04, "end": 5715.599999999999, "text": " It's not a Value object, we're accessing the data attribute"}, {"start": 5716.24, "end": 5717.44, "text": " raised"}, {"start": 5717.44, "end": 5720.32, "text": " To the power of other minus one, or n minus one"}, {"start": 5721.28, "end": 5724.32, "text": " I can put brackets around this, but this doesn't matter because um"}, {"start": 5725.36, "end": 5728.879999999999, "text": " Power takes precedence over multiply in Python, so that would have been okay"}, {"start": 5729.599999999999, "end": 5735.2, "text": " And that's the local derivative only, but now we have to chain it and we chain it simply by multiplying by out dot grad"}, {"start": 5735.5199999999995, "end": 5739.2, "text": " That's chain rule and this should uh technically work"}, {"start": 5739.2, "end": 5745.679999999999, "text": " And we're gonna find out soon, but now if we do this this should now work"}, {"start": 5746.8, "end": 5750.639999999999, "text": " And we get 0.5, so the forward pass works, but does the backward pass work?"}, {"start": 5751.2, "end": 5756.88, "text": " And I realized that we actually also have to know how to subtract, so right now a minus b will not work"}, {"start": 5757.44, "end": 5762.08, "text": " To make it work. 
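A minimal sketch of this pow operation and of division as its special case, assuming the Value class built so far:

def __pow__(self, other):
    # only int/float powers are supported here; a Value power would need a
    # different derivative expression, as noted above
    assert isinstance(other, (int, float))
    out = Value(self.data ** other, (self,), f'**{other}')
    def _backward():
        # power rule: d/dx x^n = n * x^(n-1), chained with out.grad
        self.grad += (other * self.data ** (other - 1)) * out.grad
    out._backward = _backward
    return out

def __truediv__(self, other):
    # a / b is just a * b**-1
    return self * other**-1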
We need one more piece of code here and"}, {"start": 5763.12, "end": 5765.12, "text": " basically this is the"}, {"start": 5765.12, "end": 5772.16, "text": " Subtraction, and the way we're gonna implement subtraction is we're gonna implement it by addition of a negation, and then to implement negation"}, {"start": 5772.16, "end": 5773.84, "text": " We're gonna multiply by negative one"}, {"start": 5773.84, "end": 5778.48, "text": " So just again using the stuff we've already built and just um expressing it in terms of what we have"}, {"start": 5779.04, "end": 5781.12, "text": " And now a minus b works"}, {"start": 5781.12, "end": 5784.32, "text": " Okay, so now let's scroll again to this expression here for this neuron"}, {"start": 5785.28, "end": 5786.96, "text": " And let's just"}, {"start": 5786.96, "end": 5791.12, "text": " compute the backward pass here once we've defined o, and let's draw it"}, {"start": 5791.12, "end": 5797.68, "text": " So here's the gradients for all these leaf nodes for this two dimensional neuron that has a tanh that we've seen before"}, {"start": 5798.48, "end": 5803.84, "text": " So now what I'd like to do is I'd like to break up this tanh into this expression here"}, {"start": 5804.48, "end": 5806.64, "text": " So let me copy paste this here"}, {"start": 5807.5199999999995, "end": 5812.88, "text": " And now instead, we'll preserve the label and we will change how we define o"}, {"start": 5813.76, "end": 5816.24, "text": " So in particular we're going to implement this formula here"}, {"start": 5816.88, "end": 5820.4, "text": " So we need e to the two x minus one over e to the two x plus one"}, {"start": 5820.4, "end": 5825.839999999999, "text": " So e to the two x: we need to take two times n and we need to exp it"}, {"start": 5826.48, "end": 5829.36, "text": " That's e to the two x and then because we're using it twice"}, {"start": 5829.839999999999, "end": 5831.839999999999, "text": " Let's create an intermediate variable e"}, {"start": 5832.719999999999, "end": 5838.32, "text": " And then define o as e minus one over e plus one"}, {"start": 5839.36, "end": 5841.92, "text": " e minus one over e plus one"}, {"start": 5842.879999999999, "end": 5846.32, "text": " And that should be it, and then we should be able to draw dot of o"}, {"start": 5846.32, "end": 5850.24, "text": " So now before I run this what do we expect to see"}, {"start": 5850.96, "end": 5853.12, "text": " Number one we're expecting to see a much longer"}, {"start": 5853.84, "end": 5857.04, "text": " graph here because we've broken up tanh into a bunch of other operations"}, {"start": 5857.759999999999, "end": 5862.08, "text": " But those operations are mathematically equivalent and so what we're expecting to see is number one"}, {"start": 5862.5599999999995, "end": 5868.32, "text": " The same result here. So the forward pass works and number two because of that mathematical equivalence"}, {"start": 5868.719999999999, "end": 5874.5599999999995, "text": " We expect to see the same backward pass and the same gradients on these leaf nodes. So these gradients should be identical"}, {"start": 5874.56, "end": 5876.56, "text": " So let's run this"}, {"start": 5878.160000000001, "end": 5883.76, "text": " So number one let's verify that instead of a single tanh node we now have exp and we have"}, {"start": 5884.400000000001, "end": 5888.0, "text": " Plus, we have times negative one. 
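A minimal sketch of the subtraction-by-negation pieces and of the decomposed tanh expression being set up here (n is the pre-activation Value from the neuron example; the layout is an illustration, not the verbatim lecture code):

def __neg__(self):
    return self * -1

def __sub__(self, other):
    # subtraction expressed with what we already have: addition of a negation
    return self + (-other)

# the decomposed tanh, built from exp, division and subtraction:
# e = (2 * n).exp()        # e to the 2x
# o = (e - 1) / (e + 1)    # mathematically equivalent to n.tanh()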
This is the division"}, {"start": 5888.8, "end": 5895.4400000000005, "text": " And we end up with the same forward pass here and then the gradients we have to be careful because they're in slightly different order potentially"}, {"start": 5896.320000000001, "end": 5898.96, "text": " The gradients for w2 x2 should be 0 and 0.5"}, {"start": 5898.96, "end": 5904.56, "text": " W2 and x2 are 0 and 0.5 and w1 x1 are 1 and negative 1.5"}, {"start": 5905.36, "end": 5907.36, "text": " 1 and negative 1.5"}, {"start": 5907.36, "end": 5913.44, "text": " So that means that both our forward passes and backward passes were correct because this turned out to be equivalent to"}, {"start": 5914.16, "end": 5915.84, "text": " 10H before"}, {"start": 5915.84, "end": 5921.2, "text": " And so the reason I wanted to go through this exercises number one we got to practice a few more operations and"}, {"start": 5921.76, "end": 5926.4800000000005, "text": " Writing more backwards passes and number two. I wanted to illustrate the point that the"}, {"start": 5926.48, "end": 5927.839999999999, "text": " um"}, {"start": 5927.839999999999, "end": 5933.5199999999995, "text": " The level at which you implement your operations is totally up to you. You can implement backward passes for tiny"}, {"start": 5933.599999999999, "end": 5938.16, "text": " Expressions like a single individual plus or a single times or you can implement them for say"}, {"start": 5939.12, "end": 5940.08, "text": " 10H"}, {"start": 5940.08, "end": 5945.919999999999, "text": " Which is a kind of a potentially you can see it as a composite operation because it's made up of all these more atomic operations"}, {"start": 5946.4, "end": 5951.44, "text": " But really all of this is kind of like a fake concept all that matters is we have some kind of inputs and some kind of an output"}, {"start": 5951.44, "end": 5956.96, "text": " And this output is a function of the inputs in some way and as long as you can do forward pass and the backward pass of that"}, {"start": 5957.12, "end": 5958.4, "text": " little operation"}, {"start": 5958.4, "end": 5962.48, "text": " It doesn't matter what that operation is um and how composite it is"}, {"start": 5963.12, "end": 5967.2, "text": " If you can write the local gradients you can chain the gradient and you can continue back propagation"}, {"start": 5967.5199999999995, "end": 5970.96, "text": " So the design of what those functions are is completely up to you"}, {"start": 5972.08, "end": 5974.799999999999, "text": " So now I would like to show you how you can do the exact same thing"}, {"start": 5974.879999999999, "end": 5978.32, "text": " But using a modern deep neural network library like for example PyTorch"}, {"start": 5978.32, "end": 5981.04, "text": " Which I've roughly modeled micrograd"}, {"start": 5981.92, "end": 5988.0, "text": " By and so PyTorch is something you would use in production and I'll show you how you can do the exact same thing"}, {"start": 5988.0, "end": 5989.5199999999995, "text": " But in PyTorch API"}, {"start": 5989.679999999999, "end": 5993.5199999999995, "text": " So I'm just going to copy-paste it in and walk you through it a little bit. 
This is what it looks like"}, {"start": 5994.799999999999, "end": 5998.4, "text": " So we're going to import PyTorch and then we need to define these"}, {"start": 5999.679999999999, "end": 6004.0, "text": " Value objects like we have here, now micrograd is a scalar valued"}, {"start": 6004.0, "end": 6008.4, "text": " um engine so we only have scalar values like 2.0"}, {"start": 6008.96, "end": 6013.28, "text": " But in PyTorch everything is based around tensors and like I mentioned tensors are just"}, {"start": 6013.28, "end": 6015.28, "text": " n-dimensional arrays of scalars"}, {"start": 6015.84, "end": 6021.2, "text": " So that's why things get a little bit more complicated here. I just need a scalar valued tensor"}, {"start": 6021.28, "end": 6023.28, "text": " A tensor with just a single element"}, {"start": 6023.52, "end": 6026.24, "text": " But by default when you work with PyTorch you would use"}, {"start": 6026.24, "end": 6031.679999999999, "text": " um more complicated tensors like this, so if I import PyTorch"}, {"start": 6034.0, "end": 6039.2, "text": " Then I can create tensors like this and this tensor for example is a 2 by 3 array"}, {"start": 6039.84, "end": 6041.84, "text": " Of scalars"}, {"start": 6041.84, "end": 6048.719999999999, "text": " Um in a single compact representation. So you can check its shape. We see that it's a 2 by 3 array and so"}, {"start": 6048.72, "end": 6055.92, "text": " So this is usually what you would work with um in the actual libraries. So here I'm creating a tensor"}, {"start": 6056.4800000000005, "end": 6058.88, "text": " That has only a single element 2.0"}, {"start": 6060.56, "end": 6063.04, "text": " And then I'm casting it to be double"}, {"start": 6063.6, "end": 6067.92, "text": " Because Python is by default using double precision for its floating point numbers"}, {"start": 6068.0, "end": 6074.16, "text": " So I'd like everything to be identical. By default the data type of these tensors will be float 32"}, {"start": 6074.64, "end": 6076.64, "text": " So it's only using a single precision float"}, {"start": 6076.64, "end": 6078.64, "text": " So I'm casting it to double"}, {"start": 6079.12, "end": 6081.76, "text": " So that we have float 64 just like in Python"}, {"start": 6082.72, "end": 6087.4400000000005, "text": " So I'm casting to double and then we get something similar to value of 2"}, {"start": 6088.08, "end": 6093.52, "text": " The next thing I have to do is because these are leaf nodes, by default PyTorch assumes that they do not require gradients"}, {"start": 6093.92, "end": 6097.4400000000005, "text": " So I need to explicitly say that all of these nodes require gradients"}, {"start": 6098.08, "end": 6102.72, "text": " Okay, so this is going to construct scalar valued one element tensors"}, {"start": 6103.360000000001, "end": 6105.52, "text": " Make sure that PyTorch knows that they require gradients"}, {"start": 6105.52, "end": 6109.84, "text": " Now by default these are set to false by the way because of efficiency reasons"}, {"start": 6110.160000000001, "end": 6112.8, "text": " Because usually you would not want gradients for leaf nodes"}, {"start": 6113.52, "end": 6118.320000000001, "text": " Like the inputs to the network and this is just trying to be efficient in the most common cases"}, {"start": 6119.280000000001, "end": 6122.320000000001, "text": " So once we've defined all of our values in PyTorch land"}, {"start": 6122.64, "end": 6125.6, "text": " We can perform arithmetic just like we can here in micrograd land"}, 
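A short illustration of the tensor points made above (the concrete values below are arbitrary examples, not the lecture's):

import torch

t = torch.tensor([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])
print(t.shape)                      # torch.Size([2, 3]) -- a 2 by 3 array of scalars

x = torch.tensor([2.0]).double()    # a scalar-valued tensor, cast to float64
x.requires_grad = True              # leaf tensors don't require grad by default
print(x.dtype)                      # torch.float64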
{"start": 6125.92, "end": 6128.96, "text": " So this will just work and then there's a torshtot 10h also"}, {"start": 6129.76, "end": 6131.76, "text": " And when we get back is a tensor again"}, {"start": 6131.76, "end": 6137.68, "text": " And we can just like in micrograd it's got a data attribute and it's got grad attributes"}, {"start": 6138.320000000001, "end": 6142.16, "text": " So these tensor objects just like in micrograd have a dot data and a dot grad"}, {"start": 6142.8, "end": 6146.16, "text": " And the only difference here is that we need to call a dot item"}, {"start": 6146.72, "end": 6149.2, "text": " because otherwise um PyTorch"}, {"start": 6150.4800000000005, "end": 6156.8, "text": " Dot item basically takes a single tensor of one element and it just returns that element stripping out the tensor"}, {"start": 6156.8, "end": 6162.08, "text": " So let me just run this and hopefully we are going to get this is going to print the forward pass"}, {"start": 6162.56, "end": 6167.6, "text": " Which is 0.707 and this will be the gradients which hopefully are"}, {"start": 6168.4800000000005, "end": 6170.64, "text": " 0.5 to 0 negative 0.5 and 1"}, {"start": 6171.28, "end": 6173.28, "text": " So if we just run this"}, {"start": 6174.0, "end": 6175.04, "text": " There we go"}, {"start": 6175.04, "end": 6179.76, "text": " 0.7 so the forward pass agrees and then 0.5 0, 80, 0.5 and 1"}, {"start": 6180.8, "end": 6182.8, "text": " So PyTorch agrees with us"}, {"start": 6182.8, "end": 6184.320000000001, "text": " And just to show you here basically oh"}, {"start": 6184.32, "end": 6186.88, "text": " Here's a tensor with a single element"}, {"start": 6188.32, "end": 6193.5199999999995, "text": " And it's a double and we can call that item on it to just get the single number out"}, {"start": 6194.32, "end": 6201.04, "text": " So that's what item does and oh is a tensor object like I mentioned and it's got a backward function just like we've implemented"}, {"start": 6202.16, "end": 6207.12, "text": " And then all of these also have a dot grad so like x2 for example on the grad and it's a tensor"}, {"start": 6207.36, "end": 6209.92, "text": " And we can pop out the individual number with dot actum"}, {"start": 6209.92, "end": 6213.52, "text": " So basically torches"}, {"start": 6214.08, "end": 6219.92, "text": " Torch can do what we did in micrograd as a special case when your tensors are all single element tensors"}, {"start": 6220.4800000000005, "end": 6223.92, "text": " But the big deal with PyTorch is that everything is significantly more efficient"}, {"start": 6224.16, "end": 6229.92, "text": " Because we are working with these tensor objects and we can do lots of operations in parallel on all of these tensors"}, {"start": 6231.6, "end": 6235.12, "text": " But otherwise what we've built very much agrees with the API of PyTorch"}, {"start": 6235.12, "end": 6239.5199999999995, "text": " Okay, so now that we have some machinery to build out pretty complicated mathematical expressions"}, {"start": 6239.92, "end": 6246.16, "text": " We can also start building up neural nets and as I mentioned neural nets are just a specific class of mathematical expressions"}, {"start": 6247.12, "end": 6252.32, "text": " So we're going to start building out a neural net piece by piece and eventually we'll build out a two layer multi layer"}, {"start": 6252.32, "end": 6255.5199999999995, "text": " Layer perceptron as it's called and I'll show you exactly what that means"}, {"start": 6256.0, "end": 6258.0, "text": 
" Let's start with a single individual neuron"}, {"start": 6258.0, "end": 6259.68, "text": " We've implemented one here"}, {"start": 6259.68, "end": 6266.8, "text": " But here I'm going to implement one that also subscribes to the PyTorch API and how it designs its neural network modules"}, {"start": 6267.360000000001, "end": 6270.64, "text": " So just like we saw that we can like matched API of PyTorch"}, {"start": 6271.360000000001, "end": 6273.04, "text": " On the autograd side"}, {"start": 6273.04, "end": 6275.280000000001, "text": " We're going to try to do that on the neural network modules"}, {"start": 6275.92, "end": 6277.92, "text": " So here's class neuron"}, {"start": 6278.4800000000005, "end": 6280.88, "text": " And just for the sake of efficiency"}, {"start": 6280.88, "end": 6284.240000000001, "text": " I'm going to copy-base some sections that are relatively straightforward"}, {"start": 6284.24, "end": 6292.08, "text": " So the constructor will take a number of inputs to this neuron which is how many inputs come to a neuron"}, {"start": 6292.48, "end": 6294.48, "text": " So this one for example is three inputs"}, {"start": 6295.2, "end": 6300.719999999999, "text": " And then it's going to create a weight that is some random number between negative one and one for every one of those inputs"}, {"start": 6301.36, "end": 6305.12, "text": " And a bias that controls the overall trigger happiness of this neuron"}, {"start": 6306.639999999999, "end": 6309.84, "text": " And then we're going to implement a depth underscore underscore call"}, {"start": 6309.84, "end": 6313.6, "text": " Of self and x, sum input x"}, {"start": 6314.16, "end": 6316.8, "text": " And really what we're not going to do here is w times x plus b"}, {"start": 6317.52, "end": 6320.32, "text": " We're w times x here because they dot power specifically"}, {"start": 6321.4400000000005, "end": 6323.2, "text": " Now if you haven't seen call"}, {"start": 6324.16, "end": 6326.16, "text": " Let me just return 0.0 here from now"}, {"start": 6326.72, "end": 6330.400000000001, "text": " The way this works now is we can have an x which is say like 2.0 3.0"}, {"start": 6330.88, "end": 6333.28, "text": " Then we can initialize a neuron that is two-dimensional"}, {"start": 6334.0, "end": 6335.52, "text": " Because these are two numbers"}, {"start": 6335.52, "end": 6339.68, "text": " And then we can feed those two numbers into that neuron to again and output"}, {"start": 6339.92, "end": 6344.080000000001, "text": " And so when you use this notation n of x python will use call"}, {"start": 6345.120000000001, "end": 6347.040000000001, "text": " So currently call just returns 0.0"}, {"start": 6350.160000000001, "end": 6354.080000000001, "text": " Now we'd like to actually do the forward pass of this neuron instead"}, {"start": 6354.96, "end": 6362.64, "text": " So what we're going to do here first is we need to basically multiply all of the elements of w with all of the elements of x pairwise"}, {"start": 6362.64, "end": 6364.400000000001, "text": " We need to multiply them"}, {"start": 6364.4, "end": 6366.639999999999, "text": " So the first thing we're going to do is we're going to zip up"}, {"start": 6367.28, "end": 6368.48, "text": " sultan w and x"}, {"start": 6369.2, "end": 6372.08, "text": " And in python zip takes two iterators"}, {"start": 6372.719999999999, "end": 6377.12, "text": " And it creates a new iterator that iterates over the tuples of their corresponding entries"}, {"start": 6378.0, "end": 6381.44, "text": " So 
for example just to show you we can print this list"}, {"start": 6382.16, "end": 6383.92, "text": " And it still returns 0.0 here"}, {"start": 6390.879999999999, "end": 6392.879999999999, "text": " Sorry"}, {"start": 6392.88, "end": 6398.32, "text": " So we see that these w's are paired up with the x's, w with x"}, {"start": 6401.6, "end": 6402.8, "text": " And now what we're going to do is"}, {"start": 6406.96, "end": 6409.6, "text": " For wi, xi in"}, {"start": 6410.72, "end": 6414.08, "text": " We want to multiply wi times xi"}, {"start": 6414.96, "end": 6418.88, "text": " And then we want to sum all of that together to come up with an activation"}, {"start": 6419.68, "end": 6421.4400000000005, "text": " And also add self dot b on top"}, {"start": 6421.44, "end": 6423.44, "text": " So that's the real activation"}, {"start": 6423.919999999999, "end": 6426.4, "text": " And then of course we need to pass that through a non-linearity"}, {"start": 6426.719999999999, "end": 6429.12, "text": " So what we're going to be returning is act dot tanh"}, {"start": 6429.919999999999, "end": 6431.12, "text": " And here's out"}, {"start": 6432.32, "end": 6435.12, "text": " So now we see that we are getting some outputs"}, {"start": 6435.5199999999995, "end": 6440.4, "text": " And we get a different output from the neuron each time because we are initializing different weights and biases"}, {"start": 6441.28, "end": 6443.5199999999995, "text": " And then to be a bit more efficient here actually"}, {"start": 6443.5199999999995, "end": 6448.32, "text": " Sum, by the way, takes a second optional parameter which is the start"}, {"start": 6448.32, "end": 6451.36, "text": " And by default the start is 0"}, {"start": 6451.599999999999, "end": 6455.5199999999995, "text": " So these elements of this sum will be added on top of 0 to begin with"}, {"start": 6455.759999999999, "end": 6457.5199999999995, "text": " But actually we can just start with self dot b"}, {"start": 6458.5599999999995, "end": 6460.08, "text": " And then we just have an expression like this"}, {"start": 6465.5199999999995, "end": 6468.5599999999995, "text": " And then the generator expression here must be parenthesized, apparently"}, {"start": 6469.5199999999995, "end": 6471.5199999999995, "text": " There we go"}, {"start": 6473.84, "end": 6476.16, "text": " Yep, so now we can forward a single neuron"}, {"start": 6476.16, "end": 6479.12, "text": " And next up we're going to define a layer of neurons"}, {"start": 6479.36, "end": 6481.92, "text": " So here we have a schematic for an MLP"}, {"start": 6482.5599999999995, "end": 6486.16, "text": " So we see that in these MLPs each layer, this is one layer"}, {"start": 6486.4, "end": 6487.84, "text": " Has actually a number of neurons"}, {"start": 6487.84, "end": 6490.72, "text": " And they're not connected to each other but all of them are fully connected to the input"}, {"start": 6491.36, "end": 6493.12, "text": " So what is a layer of neurons?"}, {"start": 6493.12, "end": 6495.84, "text": " It's just a set of neurons evaluated independently"}, {"start": 6496.8, "end": 6501.76, "text": " So in the interest of time I'm going to do something fairly straightforward here"}, {"start": 6503.2, "end": 6505.2, "text": " It's um"}, {"start": 6505.2, "end": 6508.32, "text": " Literally a layer is just a list of neurons"}, {"start": 6509.12, "end": 6510.72, "text": " And then how many neurons do we have?"}, {"start": 6510.72, "end": 6512.639999999999, "text": " We take that as an input argument here"}, {"start": 6512.639999999999, 
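Putting the pieces of this forward pass together, here is a minimal sketch of the Neuron class as described, assuming the Value class (with its tanh method) from earlier; because the weights are random, the output differs on every run:

import random

class Neuron:
    def __init__(self, nin):
        # one random weight in [-1, 1] per input, plus a bias
        self.w = [Value(random.uniform(-1, 1)) for _ in range(nin)]
        self.b = Value(random.uniform(-1, 1))

    def __call__(self, x):
        # w . x + b, using sum's start parameter to fold in the bias
        act = sum((wi * xi for wi, xi in zip(self.w, x)), self.b)
        return act.tanh()

x = [2.0, 3.0]
n = Neuron(2)
print(n(x))   # a Value squashed into (-1, 1)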
"end": 6514.16, "text": " How many neurons do you want in your layer?"}, {"start": 6514.16, "end": 6515.76, "text": " A number of outputs in this layer"}, {"start": 6516.639999999999, "end": 6519.44, "text": " And so we just initialize completely independent neurons"}, {"start": 6519.44, "end": 6521.36, "text": " With this given dimensionality"}, {"start": 6521.36, "end": 6525.679999999999, "text": " And when we call on it we just independently evaluate them"}, {"start": 6526.4, "end": 6529.599999999999, "text": " So now instead of a neuron we can make a layer of neurons"}, {"start": 6529.599999999999, "end": 6531.76, "text": " They are two dimensional neurons and let's say three of them"}, {"start": 6531.76, "end": 6536.72, "text": " And now we see that we have three independent evaluations of three different neurons"}, {"start": 6536.72, "end": 6537.52, "text": " Right?"}, {"start": 6538.96, "end": 6540.96, "text": " Okay and finally let's complete this picture"}, {"start": 6540.96, "end": 6544.0, "text": " And define an entire multilayer perception or mlp"}, {"start": 6544.64, "end": 6546.4800000000005, "text": " And as we can see here in an mlp"}, {"start": 6546.4800000000005, "end": 6548.56, "text": " These layers just speed into each other sequentially"}, {"start": 6549.280000000001, "end": 6551.04, "text": " So let's come here and I'm just going to"}, {"start": 6551.68, "end": 6553.52, "text": " Copy the code here in interest of time"}, {"start": 6554.4800000000005, "end": 6556.16, "text": " So an mlp is very similar"}, {"start": 6556.8, "end": 6559.52, "text": " We're taking the number of inputs as before"}, {"start": 6559.52, "end": 6563.52, "text": " But now instead of taking a single n out which is number of neurons and a single layer"}, {"start": 6563.92, "end": 6566.0, "text": " We're going to take a list of n outs"}, {"start": 6566.0, "end": 6569.68, "text": " And this list defines the sizes of all the layers that we want in our mlp"}, {"start": 6570.400000000001, "end": 6572.0, "text": " So here we just put them all together"}, {"start": 6572.0, "end": 6574.0, "text": " And then iterate over consecutive pairs"}, {"start": 6574.4800000000005, "end": 6577.200000000001, "text": " Of these sizes and create a layer objects for them"}, {"start": 6577.92, "end": 6580.64, "text": " And then in the call function we are just calling them sequentially"}, {"start": 6580.64, "end": 6582.160000000001, "text": " So that's an mlp really"}, {"start": 6582.88, "end": 6584.64, "text": " And let's actually re-implement this picture"}, {"start": 6584.64, "end": 6586.64, "text": " So we want three input neurons"}, {"start": 6586.64, "end": 6589.68, "text": " And then two layers of four and an output unit"}, {"start": 6589.84, "end": 6591.200000000001, "text": " So we want"}, {"start": 6592.56, "end": 6593.68, "text": " Three dimensional input"}, {"start": 6593.68, "end": 6595.360000000001, "text": " Say this is an example input"}, {"start": 6595.360000000001, "end": 6600.56, "text": " We want three inputs into two layers of four and one output"}, {"start": 6600.56, "end": 6602.160000000001, "text": " And this of course is an mlp"}, {"start": 6603.76, "end": 6604.56, "text": " And there we go"}, {"start": 6604.56, "end": 6606.08, "text": " That's a forward passive in mlp"}, {"start": 6606.8, "end": 6608.240000000001, "text": " To make this a little bit nicer"}, {"start": 6608.240000000001, "end": 6609.84, "text": " You see how we have just a single element"}, {"start": 6609.84, "end": 6611.04, 
"text": " But it's wrapped in a list"}, {"start": 6611.04, "end": 6612.96, "text": " Because layer always returns lists"}, {"start": 6612.96, "end": 6616.96, "text": " So for convenience return out at zero if"}, {"start": 6616.96, "end": 6620.0, "text": " Lend out is exactly a single element"}, {"start": 6620.0, "end": 6622.0, "text": " Else return fullest"}, {"start": 6622.0, "end": 6624.96, "text": " And this will allow us to just get a single value out"}, {"start": 6624.96, "end": 6626.96, "text": " At the last layer that only has a single neuron"}, {"start": 6626.96, "end": 6630.96, "text": " And finally we should be able to prod out of n of x"}, {"start": 6630.96, "end": 6632.96, "text": " And as you might imagine"}, {"start": 6632.96, "end": 6634.96, "text": " These expressions are now getting"}, {"start": 6634.96, "end": 6636.96, "text": " Relatively involved"}, {"start": 6636.96, "end": 6640.96, "text": " So this is an entire mlp that we're defining now"}, {"start": 6640.96, "end": 6646.96, "text": " All the way until a single output"}, {"start": 6646.96, "end": 6648.96, "text": " Okay"}, {"start": 6648.96, "end": 6652.96, "text": " And so obviously you would never differentiate on pen and paper"}, {"start": 6652.96, "end": 6654.96, "text": " These expressions but with micrograd"}, {"start": 6654.96, "end": 6656.96, "text": " We will be able to back propagate all the way through this"}, {"start": 6656.96, "end": 6658.96, "text": " And back propagate"}, {"start": 6658.96, "end": 6662.96, "text": " Into these weights of all these neurons"}, {"start": 6662.96, "end": 6664.96, "text": " So let's see how that works"}, {"start": 6664.96, "end": 6666.96, "text": " Okay, so let's create ourselves a very simple"}, {"start": 6666.96, "end": 6668.96, "text": " Example data set here"}, {"start": 6668.96, "end": 6670.96, "text": " So this data set has four examples"}, {"start": 6670.96, "end": 6674.96, "text": " And so we have four possible inputs into the neural net"}, {"start": 6674.96, "end": 6676.96, "text": " And we have four desired targets"}, {"start": 6676.96, "end": 6678.96, "text": " So we'd like the neural net to"}, {"start": 6678.96, "end": 6680.96, "text": " Assign"}, {"start": 6680.96, "end": 6684.96, "text": " Or output 1.0 when it's fed this example"}, {"start": 6684.96, "end": 6686.96, "text": " Negative 1 when it's fed these examples"}, {"start": 6686.96, "end": 6688.96, "text": " And 1 when it's fed this example"}, {"start": 6688.96, "end": 6690.96, "text": " So it's a very simple binary classifier neural net"}, {"start": 6690.96, "end": 6692.96, "text": " Basically that we would like here"}, {"start": 6692.96, "end": 6694.96, "text": " Now let's think what the neural net currently thinks about these four examples"}, {"start": 6694.96, "end": 6696.96, "text": " We can just get their predictions"}, {"start": 6696.96, "end": 6700.96, "text": " Basically we can just call n of x for x in axis"}, {"start": 6700.96, "end": 6702.96, "text": " And then we can print"}, {"start": 6702.96, "end": 6706.96, "text": " So these are the outputs of the neural net on those four examples"}, {"start": 6706.96, "end": 6710.96, "text": " So the first one is 0.91"}, {"start": 6710.96, "end": 6712.96, "text": " But we'd like it to be 1"}, {"start": 6712.96, "end": 6714.96, "text": " So we should push this one higher"}, {"start": 6714.96, "end": 6716.96, "text": " This one we want to be higher"}, {"start": 6716.96, "end": 6718.96, "text": " This one says 0.88"}, {"start": 6718.96, "end": 6720.96, 
"text": " And we want this to be negative 1"}, {"start": 6720.96, "end": 6722.96, "text": " This is 0.88"}, {"start": 6722.96, "end": 6724.96, "text": " We want it to be negative 1"}, {"start": 6724.96, "end": 6726.96, "text": " And this one is 0.88 we want it to be 1"}, {"start": 6726.96, "end": 6728.96, "text": " So how do we make the neural net"}, {"start": 6728.96, "end": 6730.96, "text": " And how do we tune the weights"}, {"start": 6730.96, "end": 6734.96, "text": " To better predict the desired targets"}, {"start": 6734.96, "end": 6738.96, "text": " And the trick used in deep learning to achieve this"}, {"start": 6738.96, "end": 6740.96, "text": " Is to calculate a single number"}, {"start": 6740.96, "end": 6744.96, "text": " That somehow measures the total performance of your neural net"}, {"start": 6744.96, "end": 6746.96, "text": " And we call this single number the loss"}, {"start": 6746.96, "end": 6748.96, "text": " So the loss"}, {"start": 6748.96, "end": 6750.96, "text": " First is a single number"}, {"start": 6750.96, "end": 6752.96, "text": " That we're going to define"}, {"start": 6752.96, "end": 6754.96, "text": " That basically measures how well the neural net is performing"}, {"start": 6754.96, "end": 6756.96, "text": " Right now we have the intuitive sense"}, {"start": 6756.96, "end": 6758.96, "text": " That it's not performing very well"}, {"start": 6758.96, "end": 6760.96, "text": " Because we're not very much close to this"}, {"start": 6760.96, "end": 6762.96, "text": " So the loss will be high"}, {"start": 6762.96, "end": 6764.96, "text": " And we'll want to minimize the loss"}, {"start": 6764.96, "end": 6766.96, "text": " So in particular in this case what we're going to do"}, {"start": 6766.96, "end": 6768.96, "text": " Is we're going to implement the mean squared error loss"}, {"start": 6768.96, "end": 6770.96, "text": " So what this is doing"}, {"start": 6770.96, "end": 6772.96, "text": " Is we're going to basically iterate"}, {"start": 6772.96, "end": 6774.96, "text": " For y ground truth"}, {"start": 6774.96, "end": 6778.96, "text": " And y output can zip off"}, {"start": 6778.96, "end": 6780.96, "text": " Ys and y bread"}, {"start": 6780.96, "end": 6782.96, "text": " So we're going to pair up the"}, {"start": 6782.96, "end": 6784.96, "text": " Ground truth with the predictions"}, {"start": 6784.96, "end": 6786.96, "text": " And this zip iterates over tuples of them"}, {"start": 6786.96, "end": 6788.96, "text": " And for each"}, {"start": 6788.96, "end": 6792.96, "text": " Y ground truth and y output"}, {"start": 6792.96, "end": 6794.96, "text": " We're going to subtract them"}, {"start": 6794.96, "end": 6796.96, "text": " And it's squared"}, {"start": 6796.96, "end": 6798.96, "text": " So let's first see what these losses are"}, {"start": 6798.96, "end": 6800.96, "text": " These are individual loss components"}, {"start": 6800.96, "end": 6802.96, "text": " And so basically for each"}, {"start": 6802.96, "end": 6804.96, "text": " One of the four"}, {"start": 6804.96, "end": 6806.96, "text": " We are taking the prediction"}, {"start": 6806.96, "end": 6808.96, "text": " And the ground truth"}, {"start": 6808.96, "end": 6810.96, "text": " We are subtracting them"}, {"start": 6810.96, "end": 6812.96, "text": " And squaring them"}, {"start": 6812.96, "end": 6814.96, "text": " So because this one is so close to its target"}, {"start": 6814.96, "end": 6816.96, "text": " 0.91 is almost one"}, {"start": 6816.96, "end": 6818.96, "text": " Subtracting them gives a very 
small number"}, {"start": 6818.96, "end": 6820.96, "text": " So here we would get like a negative point one"}, {"start": 6820.96, "end": 6822.96, "text": " And then squaring it"}, {"start": 6822.96, "end": 6824.96, "text": " Just makes sure"}, {"start": 6824.96, "end": 6826.96, "text": " That regardless of whether we are more negative"}, {"start": 6826.96, "end": 6828.96, "text": " Or more positive"}, {"start": 6828.96, "end": 6830.96, "text": " We always get a positive number"}, {"start": 6830.96, "end": 6832.96, "text": " Instead of squaring the Schrold"}, {"start": 6832.96, "end": 6834.96, "text": " We could also take for example the absolute value"}, {"start": 6834.96, "end": 6836.96, "text": " We need to discard the sign"}, {"start": 6836.96, "end": 6838.96, "text": " And so you see that the expression is"}, {"start": 6838.96, "end": 6840.96, "text": " Ranged so that you only get 0 exactly"}, {"start": 6840.96, "end": 6842.96, "text": " When y out is equal to y ground truth"}, {"start": 6842.96, "end": 6844.96, "text": " When those two are equal"}, {"start": 6844.96, "end": 6846.96, "text": " So your prediction is exactly the target"}, {"start": 6846.96, "end": 6848.96, "text": " You are going to get 0"}, {"start": 6848.96, "end": 6850.96, "text": " And if your prediction is not the target"}, {"start": 6850.96, "end": 6852.96, "text": " You are going to get some other number"}, {"start": 6852.96, "end": 6854.96, "text": " So here for example we are way off"}, {"start": 6854.96, "end": 6856.96, "text": " And so that's why the loss is quite high"}, {"start": 6856.96, "end": 6858.96, "text": " And the more off we are"}, {"start": 6858.96, "end": 6860.96, "text": " The greater the loss will be"}, {"start": 6860.96, "end": 6862.96, "text": " So we don't want high loss"}, {"start": 6862.96, "end": 6864.96, "text": " We want low loss"}, {"start": 6864.96, "end": 6866.96, "text": " And so the final loss here"}, {"start": 6866.96, "end": 6868.96, "text": " Will be just the sum"}, {"start": 6868.96, "end": 6870.96, "text": " Of all of these numbers"}, {"start": 6870.96, "end": 6872.96, "text": " So you see that this should be"}, {"start": 6872.96, "end": 6874.96, "text": " 0 roughly plus 0 roughly"}, {"start": 6874.96, "end": 6876.96, "text": " But plus"}, {"start": 6876.96, "end": 6876.96, "text": " 7"}, {"start": 6876.96, "end": 6878.96, "text": " So loss should be about"}, {"start": 6878.96, "end": 6880.96, "text": " 7 here"}, {"start": 6880.96, "end": 6882.96, "text": " And now we want to minimize the loss"}, {"start": 6882.96, "end": 6884.96, "text": " We want the loss to be low"}, {"start": 6884.96, "end": 6886.96, "text": " Because the loss is low"}, {"start": 6886.96, "end": 6888.96, "text": " Then every one of the predictions"}, {"start": 6888.96, "end": 6890.96, "text": " Is equal to 0"}, {"start": 6890.96, "end": 6894.96, "text": " Then every one of the predictions is equal to its target"}, {"start": 6894.96, "end": 6898.96, "text": " So the loss, the loss, the loss, it can be 0"}, {"start": 6898.96, "end": 6900.96, "text": " And the greater it is"}, {"start": 6900.96, "end": 6902.96, "text": " The worse off the neural net is predicting"}, {"start": 6902.96, "end": 6904.96, "text": " So now of course if we do"}, {"start": 6904.96, "end": 6906.96, "text": " Loss that backward"}, {"start": 6906.96, "end": 6908.96, "text": " Something magical happened when I hit enter"}, {"start": 6908.96, "end": 6912.96, "text": " And the magical thing of course that happened"}, {"start": 6912.96, 
"end": 6914.96, "text": " Is that we can look at endout layers"}, {"start": 6914.96, "end": 6916.96, "text": " That neuron, endout layers"}, {"start": 6916.96, "end": 6918.96, "text": " At say like the first layer"}, {"start": 6918.96, "end": 6920.96, "text": " That neurons at 0"}, {"start": 6920.96, "end": 6922.96, "text": " Because remember that"}, {"start": 6922.96, "end": 6924.96, "text": " MLP has layers which is a list"}, {"start": 6924.96, "end": 6926.96, "text": " And each layer has a neurons which is a list"}, {"start": 6926.96, "end": 6928.96, "text": " And that gives us individual neuron"}, {"start": 6928.96, "end": 6930.96, "text": " And then it's got some weights"}, {"start": 6930.96, "end": 6932.96, "text": " And so we can for example"}, {"start": 6932.96, "end": 6934.96, "text": " Look at the weights at 0"}, {"start": 6934.96, "end": 6938.96, "text": " Oops, it's not cold weights"}, {"start": 6938.96, "end": 6940.96, "text": " It's called W"}, {"start": 6940.96, "end": 6942.96, "text": " And that's a value"}, {"start": 6942.96, "end": 6944.96, "text": " But now this value also has a graph"}, {"start": 6944.96, "end": 6946.96, "text": " Because of the backward values"}, {"start": 6946.96, "end": 6950.96, "text": " And so we see that because this gradient here"}, {"start": 6950.96, "end": 6952.96, "text": " On this particular weight of this particular neuron"}, {"start": 6952.96, "end": 6954.96, "text": " Of this particular layer is negative"}, {"start": 6954.96, "end": 6956.96, "text": " We see that its influence on the loss"}, {"start": 6956.96, "end": 6958.96, "text": " Is also negative"}, {"start": 6958.96, "end": 6960.96, "text": " So slightly increasing this particular weight"}, {"start": 6960.96, "end": 6962.96, "text": " Of this neuron of this layer"}, {"start": 6962.96, "end": 6964.96, "text": " Would make the loss go down"}, {"start": 6964.96, "end": 6966.96, "text": " And we actually have this information"}, {"start": 6966.96, "end": 6968.96, "text": " For every single one of our neurons"}, {"start": 6968.96, "end": 6970.96, "text": " And all of their parameters"}, {"start": 6970.96, "end": 6972.96, "text": " Actually it's worth looking at"}, {"start": 6972.96, "end": 6974.96, "text": " Also the draw dot loss"}, {"start": 6974.96, "end": 6976.96, "text": " So the draw dot loss by the way"}, {"start": 6976.96, "end": 6978.96, "text": " So previously we looked at the draw dot"}, {"start": 6978.96, "end": 6980.96, "text": " Of a single neuron"}, {"start": 6980.96, "end": 6982.96, "text": " Neuralin forward pass"}, {"start": 6982.96, "end": 6984.96, "text": " And that was already a large expression"}, {"start": 6984.96, "end": 6986.96, "text": " But what is this expression?"}, {"start": 6986.96, "end": 6988.96, "text": " We actually forwarded"}, {"start": 6988.96, "end": 6990.96, "text": " Every one of those four examples"}, {"start": 6990.96, "end": 6992.96, "text": " And then we have the loss in top with them"}, {"start": 6992.96, "end": 6994.96, "text": " With the mean squared error"}, {"start": 6994.96, "end": 6996.96, "text": " And so this is a really massive graph"}, {"start": 6996.96, "end": 6998.96, "text": " Because this graph that we built up now"}, {"start": 6998.96, "end": 7000.96, "text": " Oh my gosh"}, {"start": 7000.96, "end": 7002.96, "text": " This graph that we built up now"}, {"start": 7002.96, "end": 7004.96, "text": " Which is kind of excessive"}, {"start": 7004.96, "end": 7006.96, "text": " It's excessive because it has four forward passes"}, 
{"start": 7006.96, "end": 7008.96, "text": " Of a neural net for every one of the examples"}, {"start": 7008.96, "end": 7010.96, "text": " And then it has the loss on top"}, {"start": 7010.96, "end": 7012.96, "text": " And it ends with the value of the loss"}, {"start": 7012.96, "end": 7014.96, "text": " Which for 7.1.2"}, {"start": 7014.96, "end": 7016.96, "text": " And this loss will now back propagate"}, {"start": 7016.96, "end": 7018.96, "text": " Through all the forward passes"}, {"start": 7018.96, "end": 7020.96, "text": " All the way through just every single"}, {"start": 7020.96, "end": 7022.96, "text": " Intermediate value of the neural net"}, {"start": 7022.96, "end": 7024.96, "text": " All the way back to"}, {"start": 7024.96, "end": 7026.96, "text": " Of course the parameters of the weights"}, {"start": 7026.96, "end": 7028.96, "text": " Which are the input"}, {"start": 7028.96, "end": 7030.96, "text": " So these weight parameters here"}, {"start": 7030.96, "end": 7032.96, "text": " To this neural net"}, {"start": 7032.96, "end": 7034.96, "text": " And these numbers here"}, {"start": 7034.96, "end": 7036.96, "text": " These scalars are inputs to the neural net"}, {"start": 7036.96, "end": 7038.96, "text": " So if we went around here"}, {"start": 7038.96, "end": 7040.96, "text": " We will probably find"}, {"start": 7040.96, "end": 7042.96, "text": " Some of these examples"}, {"start": 7042.96, "end": 7044.96, "text": " This 1.0 potentially maybe this 1.0"}, {"start": 7044.96, "end": 7046.96, "text": " Or you know some of the others"}, {"start": 7046.96, "end": 7048.96, "text": " And you'll see that they all have gradients as well"}, {"start": 7048.96, "end": 7050.96, "text": " The thing is these gradients on the input data"}, {"start": 7050.96, "end": 7052.96, "text": " Are not that useful to us"}, {"start": 7052.96, "end": 7054.96, "text": " And that's because"}, {"start": 7054.96, "end": 7056.96, "text": " The input data seems to be"}, {"start": 7056.96, "end": 7058.96, "text": " Not changeable"}, {"start": 7058.96, "end": 7060.96, "text": " And it's not given to the problem"}, {"start": 7060.96, "end": 7062.96, "text": " And so it's a fixed input"}, {"start": 7062.96, "end": 7064.96, "text": " We're not going to be changing it or messing with it"}, {"start": 7064.96, "end": 7066.96, "text": " Even though we do have gradients for it"}, {"start": 7066.96, "end": 7068.96, "text": " But some of these gradients here"}, {"start": 7068.96, "end": 7070.96, "text": " Will be for the neural network parameters"}, {"start": 7070.96, "end": 7072.96, "text": " The W's and the B's"}, {"start": 7072.96, "end": 7074.96, "text": " And those we of course we want to change"}, {"start": 7074.96, "end": 7076.96, "text": " Okay so now we're going to"}, {"start": 7076.96, "end": 7078.96, "text": " Want some convenience code"}, {"start": 7078.96, "end": 7080.96, "text": " To gather up all of the parameters of the neural net"}, {"start": 7080.96, "end": 7084.96, "text": " So that we can operate on on all of them simultaneously"}, {"start": 7084.96, "end": 7086.96, "text": " And every one of them"}, {"start": 7086.96, "end": 7088.96, "text": " A tiny amount"}, {"start": 7088.96, "end": 7090.96, "text": " Based on the gradient depermission"}, {"start": 7090.96, "end": 7094.96, "text": " So let's collect the parameters of the neural net all in one array"}, {"start": 7094.96, "end": 7096.96, "text": " So let's create a parameters of self"}, {"start": 7096.96, "end": 7098.96, "text": " That just returns"}, 
{"start": 7098.96, "end": 7100.96, "text": " Salt that W which is a list"}, {"start": 7100.96, "end": 7102.96, "text": " Concatenated with"}, {"start": 7102.96, "end": 7104.96, "text": " A list of"}, {"start": 7104.96, "end": 7106.96, "text": " Salt that B"}, {"start": 7106.96, "end": 7108.96, "text": " So this will just return a list"}, {"start": 7108.96, "end": 7110.96, "text": " List plus list just"}, {"start": 7110.96, "end": 7112.96, "text": " You know gives you a list"}, {"start": 7112.96, "end": 7114.96, "text": " So that's parameters of neural"}, {"start": 7114.96, "end": 7116.96, "text": " And I'm calling it this way because also"}, {"start": 7116.96, "end": 7118.96, "text": " Pipe Torch has a parameters on every single"}, {"start": 7118.96, "end": 7120.96, "text": " And in module"}, {"start": 7120.96, "end": 7122.96, "text": " And it does exactly what we're doing here"}, {"start": 7122.96, "end": 7124.96, "text": " It just returns the"}, {"start": 7124.96, "end": 7126.96, "text": " Parameter tensors for us is the primary scalers"}, {"start": 7126.96, "end": 7130.96, "text": " Now layer is also a module"}, {"start": 7130.96, "end": 7132.96, "text": " So it will have parameters"}, {"start": 7132.96, "end": 7134.96, "text": " Self"}, {"start": 7134.96, "end": 7136.96, "text": " And basically what we want to do here is"}, {"start": 7136.96, "end": 7138.96, "text": " Something like this like"}, {"start": 7138.96, "end": 7140.96, "text": " Param's is here"}, {"start": 7140.96, "end": 7142.96, "text": " And then for"}, {"start": 7142.96, "end": 7144.96, "text": " Neuron in salt that neurons"}, {"start": 7144.96, "end": 7148.96, "text": " We want to get neuron that parameters"}, {"start": 7148.96, "end": 7150.96, "text": " And we want to"}, {"start": 7150.96, "end": 7152.96, "text": " Param's that extend"}, {"start": 7152.96, "end": 7154.96, "text": " Right so these are the"}, {"start": 7154.96, "end": 7156.96, "text": " Parameters of this neuron"}, {"start": 7156.96, "end": 7158.96, "text": " And then we want to put them on top of"}, {"start": 7158.96, "end": 7159.96, "text": " Param's so"}, {"start": 7159.96, "end": 7160.96, "text": " Param's that extend of"}, {"start": 7160.96, "end": 7162.96, "text": " Peace"}, {"start": 7162.96, "end": 7163.96, "text": " And then we want to return"}, {"start": 7163.96, "end": 7164.96, "text": " Param's"}, {"start": 7164.96, "end": 7166.96, "text": " So this is way too much code"}, {"start": 7166.96, "end": 7168.96, "text": " So actually there's a way to simplify this"}, {"start": 7168.96, "end": 7170.96, "text": " Which is"}, {"start": 7170.96, "end": 7172.96, "text": " Return"}, {"start": 7172.96, "end": 7174.96, "text": " P"}, {"start": 7174.96, "end": 7176.96, "text": " For neuron in self"}, {"start": 7176.96, "end": 7178.96, "text": " That neurons"}, {"start": 7178.96, "end": 7180.96, "text": " For"}, {"start": 7180.96, "end": 7184.96, "text": " P in neuron dot parameters"}, {"start": 7184.96, "end": 7186.96, "text": " So it's a single list comprehension"}, {"start": 7186.96, "end": 7188.96, "text": " In python you can sort of nest"}, {"start": 7188.96, "end": 7190.96, "text": " Then like this and you can"}, {"start": 7190.96, "end": 7192.96, "text": " Then create"}, {"start": 7192.96, "end": 7194.96, "text": " The desired array"}, {"start": 7194.96, "end": 7196.96, "text": " So these are identical"}, {"start": 7196.96, "end": 7198.96, "text": " We can take this out"}, {"start": 7198.96, "end": 7200.96, "text": " And then let's do the same 
here"}, {"start": 7200.96, "end": 7204.96, "text": " Deframeters"}, {"start": 7204.96, "end": 7206.96, "text": " Self"}, {"start": 7206.96, "end": 7208.96, "text": " And return"}, {"start": 7208.96, "end": 7210.96, "text": " A parameter for layer"}, {"start": 7210.96, "end": 7212.96, "text": " In self dot layers"}, {"start": 7212.96, "end": 7214.96, "text": " For"}, {"start": 7214.96, "end": 7216.96, "text": " P in layer dot parameters"}, {"start": 7216.96, "end": 7220.96, "text": " And that should be good"}, {"start": 7220.96, "end": 7224.96, "text": " Now let me pop out this"}, {"start": 7224.96, "end": 7226.96, "text": " So we don't re-initialize our network"}, {"start": 7226.96, "end": 7228.96, "text": " Because we need to re-initialize our"}, {"start": 7228.96, "end": 7230.96, "text": " Okay so unfortunately we"}, {"start": 7230.96, "end": 7232.96, "text": " Will have to probably re-initialize"}, {"start": 7232.96, "end": 7234.96, "text": " Network because we just had"}, {"start": 7234.96, "end": 7236.96, "text": " Functionality because this class"}, {"start": 7236.96, "end": 7238.96, "text": " Of course we i want to get"}, {"start": 7238.96, "end": 7240.96, "text": " All the end dot parameters"}, {"start": 7240.96, "end": 7242.96, "text": " But that's not going to work because this is the old class"}, {"start": 7242.96, "end": 7246.96, "text": " Okay so unfortunately we do have to re-initialize the network"}, {"start": 7246.96, "end": 7248.96, "text": " Which will change some of the numbers"}, {"start": 7248.96, "end": 7250.96, "text": " But let me do that so"}, {"start": 7250.96, "end": 7252.96, "text": " That we can do that"}, {"start": 7252.96, "end": 7256.96, "text": " So that we pick up the new API"}, {"start": 7256.96, "end": 7258.96, "text": " We can now do end dot parameters"}, {"start": 7258.96, "end": 7260.96, "text": " And these are all the weights and biases"}, {"start": 7260.96, "end": 7262.96, "text": " Inside the entire neural net"}, {"start": 7262.96, "end": 7268.96, "text": " So in total this MLP has 41 parameters"}, {"start": 7268.96, "end": 7272.96, "text": " And now we'll be able to change them"}, {"start": 7272.96, "end": 7276.96, "text": " If we recalculate the loss here"}, {"start": 7276.96, "end": 7278.96, "text": " We see that unfortunately we have slightly different"}, {"start": 7278.96, "end": 7280.96, "text": " Predictionality"}, {"start": 7280.96, "end": 7284.96, "text": " Predictions and slightly different loss"}, {"start": 7284.96, "end": 7286.96, "text": " But that's okay"}, {"start": 7286.96, "end": 7290.96, "text": " Okay so we see that this neuron's"}, {"start": 7290.96, "end": 7292.96, "text": " Gradient is slightly negative"}, {"start": 7292.96, "end": 7294.96, "text": " We can also look at its data right now"}, {"start": 7294.96, "end": 7296.96, "text": " Which is 0.85"}, {"start": 7296.96, "end": 7298.96, "text": " So this is the current value of this neuron"}, {"start": 7298.96, "end": 7300.96, "text": " And this is its gradient on the loss"}, {"start": 7300.96, "end": 7304.96, "text": " So what we want to do now is"}, {"start": 7304.96, "end": 7306.96, "text": " We want to iterate for every P"}, {"start": 7306.96, "end": 7308.96, "text": " N end dot parameters"}, {"start": 7308.96, "end": 7310.96, "text": " So for all the 41 parameters of this neuron net"}, {"start": 7310.96, "end": 7312.96, "text": " We actually want to change"}, {"start": 7312.96, "end": 7314.96, "text": " P data data"}, {"start": 7314.96, "end": 7318.96, "text": " Slightly 
according to the gradient information"}, {"start": 7318.96, "end": 7320.96, "text": " Okay so data to do here"}, {"start": 7320.96, "end": 7324.96, "text": " But this will be basically a tiny update"}, {"start": 7324.96, "end": 7328.96, "text": " In this gradient descent scheme"}, {"start": 7328.96, "end": 7330.96, "text": " And gradient descent we are thinking of the gradient"}, {"start": 7330.96, "end": 7336.96, "text": " As a vector pointing in the direction of increased loss"}, {"start": 7336.96, "end": 7340.96, "text": " And so in gradient descent"}, {"start": 7340.96, "end": 7342.96, "text": " We are modifying P dot data"}, {"start": 7342.96, "end": 7346.96, "text": " By a small step size in the direction of the gradient"}, {"start": 7346.96, "end": 7348.96, "text": " So the step size as an example could be"}, {"start": 7348.96, "end": 7350.96, "text": " Like a very small number of 0.01 is the step size"}, {"start": 7350.96, "end": 7352.96, "text": " Times P dot grad"}, {"start": 7352.96, "end": 7354.96, "text": " Right"}, {"start": 7354.96, "end": 7358.96, "text": " But we have to think through some of the signs here"}, {"start": 7358.96, "end": 7360.96, "text": " So in particular"}, {"start": 7360.96, "end": 7362.96, "text": " Working with this specific example here"}, {"start": 7362.96, "end": 7364.96, "text": " We see that if we just left it like this"}, {"start": 7364.96, "end": 7366.96, "text": " Then this neurons value"}, {"start": 7366.96, "end": 7370.96, "text": " Would be currently increased by a tiny amount of the gradient"}, {"start": 7370.96, "end": 7372.96, "text": " The gradient is negative"}, {"start": 7372.96, "end": 7376.96, "text": " So this value of this neuron would go slightly down"}, {"start": 7376.96, "end": 7378.96, "text": " It would become like 0.8"}, {"start": 7378.96, "end": 7380.96, "text": " You know, 4 or something like that"}, {"start": 7380.96, "end": 7384.96, "text": " But if this neurons value goes lower"}, {"start": 7384.96, "end": 7388.96, "text": " That would actually increase the loss"}, {"start": 7388.96, "end": 7392.96, "text": " That because the derivative of this neuron is negative"}, {"start": 7392.96, "end": 7396.96, "text": " So increasing this makes the loss go down"}, {"start": 7396.96, "end": 7400.96, "text": " So increasing it is what we want to do instead of decreasing it"}, {"start": 7400.96, "end": 7402.96, "text": " So basically what we are missing here is"}, {"start": 7402.96, "end": 7404.96, "text": " We are actually missing a negative sign"}, {"start": 7404.96, "end": 7406.96, "text": " And again this other interpretation"}, {"start": 7406.96, "end": 7408.96, "text": " And that's because we want to minimize the loss"}, {"start": 7408.96, "end": 7410.96, "text": " We don't want to maximize the loss"}, {"start": 7410.96, "end": 7412.96, "text": " We want to decrease it"}, {"start": 7412.96, "end": 7414.96, "text": " And the other interpretation as I mentioned"}, {"start": 7414.96, "end": 7416.96, "text": " Is you can think of the gradient vector"}, {"start": 7416.96, "end": 7418.96, "text": " So basically just the vector of all the gradients"}, {"start": 7418.96, "end": 7422.96, "text": " As pointing in the direction of increasing the loss"}, {"start": 7422.96, "end": 7424.96, "text": " But then we want to decrease it"}, {"start": 7424.96, "end": 7426.96, "text": " So we actually want to go in the opposite direction"}, {"start": 7426.96, "end": 7428.96, "text": " And so you can convince yourself that this"}, {"start": 7428.96, 
"end": 7430.96, "text": " Or like thus the right thing here with the negative"}, {"start": 7430.96, "end": 7432.96, "text": " Because we want to minimize the loss"}, {"start": 7432.96, "end": 7436.96, "text": " So if we notch all the parameters by a tiny amount"}, {"start": 7436.96, "end": 7442.96, "text": " Then we'll see that this data will change a little bit"}, {"start": 7442.96, "end": 7446.96, "text": " So now this neuron is a tiny amount creator"}, {"start": 7446.96, "end": 7448.96, "text": " A tiny amount creator"}, {"start": 7448.96, "end": 7450.96, "text": " Value"}, {"start": 7450.96, "end": 7452.96, "text": " So 0.854, once it's 0.857"}, {"start": 7452.96, "end": 7454.96, "text": " And that's a good thing"}, {"start": 7454.96, "end": 7456.96, "text": " Because slightly increasing this neuron"}, {"start": 7456.96, "end": 7460.96, "text": " Data makes the loss go down"}, {"start": 7460.96, "end": 7462.96, "text": " According to the gradient"}, {"start": 7462.96, "end": 7464.96, "text": " And so the correct thing has happened signwise"}, {"start": 7464.96, "end": 7468.96, "text": " And so now what we would expect of course is that"}, {"start": 7468.96, "end": 7470.96, "text": " Because we've changed all these parameters"}, {"start": 7470.96, "end": 7474.96, "text": " We expect that the loss should have gone down a bit"}, {"start": 7474.96, "end": 7476.96, "text": " So we want to re-evaluate the loss"}, {"start": 7476.96, "end": 7478.96, "text": " Let me basically"}, {"start": 7478.96, "end": 7482.96, "text": " This is just a data definition that hasn't changed"}, {"start": 7482.96, "end": 7484.96, "text": " But the forward pass here"}, {"start": 7484.96, "end": 7486.96, "text": " Of the network we can recalculate"}, {"start": 7488.96, "end": 7490.96, "text": " And actually let me do it outside here"}, {"start": 7490.96, "end": 7492.96, "text": " So that we can compare the two loss values"}, {"start": 7492.96, "end": 7496.96, "text": " So here if I recalculate the loss"}, {"start": 7496.96, "end": 7500.96, "text": " We'd expect the new loss now to be slightly lower than this number"}, {"start": 7500.96, "end": 7502.96, "text": " So hopefully what we're getting now"}, {"start": 7502.96, "end": 7504.96, "text": " Is a tiny bit lower than 4.854"}, {"start": 7506.96, "end": 7508.96, "text": " 4.36"}, {"start": 7508.96, "end": 7510.96, "text": " And remember the way we've arranged this"}, {"start": 7510.96, "end": 7514.96, "text": " Is that low loss means that our predictions are matching the targets"}, {"start": 7514.96, "end": 7518.96, "text": " So our predictions now are probably slightly closer to the targets"}, {"start": 7520.96, "end": 7524.96, "text": " And now all we have to do is we have to iterate this process"}, {"start": 7524.96, "end": 7526.96, "text": " So again we've done the forward pass"}, {"start": 7526.96, "end": 7528.96, "text": " And this is the loss"}, {"start": 7528.96, "end": 7530.96, "text": " Now we can lost that backward"}, {"start": 7530.96, "end": 7532.96, "text": " And we can do a step size"}, {"start": 7532.96, "end": 7536.96, "text": " And now we should have a slightly lower loss"}, {"start": 7536.96, "end": 7538.96, "text": " 4.36 goes to 3.9"}, {"start": 7538.96, "end": 7540.96, "text": " And okay so"}, {"start": 7540.96, "end": 7542.96, "text": " We've done the forward pass"}, {"start": 7542.96, "end": 7544.96, "text": " Here's the backward pass"}, {"start": 7544.96, "end": 7546.96, "text": " And now the loss is 3.66"}, {"start": 7546.96, "end": 
7550.96, "text": " 3.47"}, {"start": 7550.96, "end": 7552.96, "text": " And you get the idea"}, {"start": 7552.96, "end": 7554.96, "text": " We just continue doing this"}, {"start": 7554.96, "end": 7556.96, "text": " And this is gradient descent"}, {"start": 7556.96, "end": 7558.96, "text": " We're just iteratively doing forward pass"}, {"start": 7558.96, "end": 7560.96, "text": " Backward pass"}, {"start": 7560.96, "end": 7562.96, "text": " Updates"}, {"start": 7562.96, "end": 7564.96, "text": " Forward pass, backward pass, update"}, {"start": 7564.96, "end": 7566.96, "text": " And the neural net is improving its predictions"}, {"start": 7566.96, "end": 7568.96, "text": " So here if we look at wide-pred now"}, {"start": 7568.96, "end": 7570.96, "text": " Wide-pred"}, {"start": 7570.96, "end": 7572.96, "text": " We see that"}, {"start": 7572.96, "end": 7576.96, "text": " This value should be getting closer to 1"}, {"start": 7576.96, "end": 7578.96, "text": " So this value should be getting more positive"}, {"start": 7578.96, "end": 7580.96, "text": " These should be getting more negative"}, {"start": 7580.96, "end": 7582.96, "text": " And this one should be also getting more positive"}, {"start": 7582.96, "end": 7584.96, "text": " So if we just iterate this"}, {"start": 7584.96, "end": 7586.96, "text": " A few more times"}, {"start": 7586.96, "end": 7588.96, "text": " Actually we'll be able to afford to go a bit faster"}, {"start": 7588.96, "end": 7590.96, "text": " Let's try a slightly higher learning rate"}, {"start": 7590.96, "end": 7594.96, "text": " Whoops, okay"}, {"start": 7594.96, "end": 7596.96, "text": " There we go, so now we're at 0.31"}, {"start": 7596.96, "end": 7600.96, "text": " If you go too fast by the way"}, {"start": 7600.96, "end": 7602.96, "text": " If you try to make it too big of a step"}, {"start": 7602.96, "end": 7604.96, "text": " You may actually overstep"}, {"start": 7604.96, "end": 7608.96, "text": " It's overconfidence"}, {"start": 7608.96, "end": 7610.96, "text": " Because again remember we don't actually know exactly about the loss function"}, {"start": 7610.96, "end": 7612.96, "text": " The loss function has all kinds of structure"}, {"start": 7612.96, "end": 7614.96, "text": " And we only know about the very local"}, {"start": 7614.96, "end": 7616.96, "text": " Dependence of all these parameters on the loss"}, {"start": 7616.96, "end": 7618.96, "text": " But if we step too far"}, {"start": 7618.96, "end": 7620.96, "text": " We may step into you know"}, {"start": 7620.96, "end": 7622.96, "text": " A part of the loss that is completely different"}, {"start": 7622.96, "end": 7624.96, "text": " And that can destabilize training"}, {"start": 7624.96, "end": 7626.96, "text": " And make your loss actually blow up even"}, {"start": 7626.96, "end": 7628.96, "text": " So the loss is now 0.04"}, {"start": 7628.96, "end": 7632.96, "text": " So actually the predictions should be really quite close"}, {"start": 7632.96, "end": 7634.96, "text": " Let's take a look"}, {"start": 7634.96, "end": 7636.96, "text": " So you see how this is almost 1"}, {"start": 7636.96, "end": 7638.96, "text": " Almost negative 1, almost 1"}, {"start": 7638.96, "end": 7640.96, "text": " We can continue going"}, {"start": 7640.96, "end": 7642.96, "text": " So, yep, backward, update"}, {"start": 7642.96, "end": 7644.96, "text": " Up, there we go"}, {"start": 7644.96, "end": 7646.96, "text": " So we went way too fast"}, {"start": 7646.96, "end": 7648.96, "text": " And we actually overstepped"}, 
{"start": 7648.96, "end": 7650.96, "text": " So we got to 2, 2 eager"}, {"start": 7650.96, "end": 7652.96, "text": " Where are we now?"}, {"start": 7652.96, "end": 7654.96, "text": " Oops"}, {"start": 7654.96, "end": 7656.96, "text": " Okay, 7 in negative 9"}, {"start": 7656.96, "end": 7658.96, "text": " So this is very, very low loss"}, {"start": 7658.96, "end": 7662.96, "text": " And the predictions are basically perfect"}, {"start": 7662.96, "end": 7666.96, "text": " So somehow we were doing way too big updates"}, {"start": 7666.96, "end": 7668.96, "text": " And we briefly explored it"}, {"start": 7668.96, "end": 7670.96, "text": " But then somehow we ended up getting into a really good spot"}, {"start": 7670.96, "end": 7674.96, "text": " So usually this learning rate and the tuning of it is a subtle art"}, {"start": 7674.96, "end": 7676.96, "text": " You want to set your learning rate"}, {"start": 7676.96, "end": 7678.96, "text": " If it's too low, you're going to take way too long to converge"}, {"start": 7678.96, "end": 7680.96, "text": " But if it's too high, the whole thing gets unstable"}, {"start": 7680.96, "end": 7682.96, "text": " And you might actually even explode the loss"}, {"start": 7682.96, "end": 7684.96, "text": " Depending on your loss function"}, {"start": 7684.96, "end": 7686.96, "text": " So finding the step size to be just right"}, {"start": 7686.96, "end": 7688.96, "text": " It's a pretty subtle art sometimes"}, {"start": 7688.96, "end": 7690.96, "text": " When you're using sort of a null-accradient descent"}, {"start": 7690.96, "end": 7692.96, "text": " But we happen to get into a good spot"}, {"start": 7692.96, "end": 7694.96, "text": " We can look at end dot parameters"}, {"start": 7694.96, "end": 7696.96, "text": " And we can see that the result is really good"}, {"start": 7696.96, "end": 7700.96, "text": " So we can see that we have a very good spot"}, {"start": 7700.96, "end": 7702.96, "text": " We can look at end dot parameters"}, {"start": 7702.96, "end": 7706.96, "text": " So this is the setting of weights and biases"}, {"start": 7706.96, "end": 7712.96, "text": " That makes our network predict the desired targets very, very close"}, {"start": 7712.96, "end": 7718.96, "text": " And basically we successfully trained in neural nut"}, {"start": 7718.96, "end": 7720.96, "text": " Okay, let's make this a tiny bit more respectable"}, {"start": 7720.96, "end": 7722.96, "text": " And implement an actual training loop"}, {"start": 7722.96, "end": 7724.96, "text": " And what that looks like"}, {"start": 7724.96, "end": 7726.96, "text": " is the initialization of that state"}, {"start": 7726.96, "end": 7728.96, "text": " This is the forward pass"}, {"start": 7728.96, "end": 7730.96, "text": " So for K in range"}, {"start": 7730.96, "end": 7734.96, "text": " We're going to take a bunch of steps"}, {"start": 7736.96, "end": 7738.96, "text": " First, you do the forward pass"}, {"start": 7738.96, "end": 7740.96, "text": " We evaluate the loss"}, {"start": 7742.96, "end": 7744.96, "text": " Let's re-initialize the neural nut from scratch"}, {"start": 7744.96, "end": 7746.96, "text": " And here's the data"}, {"start": 7746.96, "end": 7750.96, "text": " And we first do forward pass"}, {"start": 7750.96, "end": 7752.96, "text": " Then we do the backward pass"}, {"start": 7752.96, "end": 7754.96, "text": " And then we do an update"}, {"start": 7754.96, "end": 7756.96, "text": " That's gradient descent"}, {"start": 7758.96, "end": 7760.96, "text": " And then we do an 
update"}, {"start": 7760.96, "end": 7762.96, "text": " That's gradient descent"}, {"start": 7766.96, "end": 7768.96, "text": " And then we should be able to iterate this"}, {"start": 7768.96, "end": 7770.96, "text": " And we should be able to print the current step"}, {"start": 7770.96, "end": 7772.96, "text": " The current loss"}, {"start": 7772.96, "end": 7774.96, "text": " Let's just print the sort of"}, {"start": 7774.96, "end": 7776.96, "text": " Number of the loss"}, {"start": 7776.96, "end": 7778.96, "text": " And that should be it"}, {"start": 7778.96, "end": 7780.96, "text": " And then the learning rate"}, {"start": 7780.96, "end": 7782.96, "text": " 0.01 is a little too small"}, {"start": 7782.96, "end": 7784.96, "text": " 0.1 we saw is like a little bit dangerous if you buy"}, {"start": 7784.96, "end": 7786.96, "text": " Let's go somewhere between"}, {"start": 7786.96, "end": 7788.96, "text": " And we'll optimize this for"}, {"start": 7788.96, "end": 7790.96, "text": " Not 10 steps"}, {"start": 7790.96, "end": 7792.96, "text": " But let's go for say 20 steps"}, {"start": 7792.96, "end": 7796.96, "text": " Let me erase all of this junk"}, {"start": 7796.96, "end": 7800.96, "text": " And let's run the optimization"}, {"start": 7800.96, "end": 7804.96, "text": " And you see how we've actually converged slower"}, {"start": 7804.96, "end": 7806.96, "text": " In a more controlled manner"}, {"start": 7806.96, "end": 7808.96, "text": " And got through a loss that is very low"}, {"start": 7808.96, "end": 7810.96, "text": " So I expect wide-pred to be quite good"}, {"start": 7810.96, "end": 7812.96, "text": " There we go"}, {"start": 7816.96, "end": 7818.96, "text": " And that's it"}, {"start": 7818.96, "end": 7820.96, "text": " Okay, so this is kind of embarrassing"}, {"start": 7820.96, "end": 7822.96, "text": " But we actually have a really terrible bug"}, {"start": 7822.96, "end": 7824.96, "text": " In here"}, {"start": 7824.96, "end": 7826.96, "text": " And it's a subtle bug"}, {"start": 7826.96, "end": 7828.96, "text": " And it's a very common bug"}, {"start": 7828.96, "end": 7830.96, "text": " And I can't believe I've done it"}, {"start": 7830.96, "end": 7832.96, "text": " For the 20th time in my life"}, {"start": 7832.96, "end": 7834.96, "text": " Especially on camera"}, {"start": 7834.96, "end": 7836.96, "text": " And I could have reshot the bug"}, {"start": 7836.96, "end": 7838.96, "text": " And I could have reshot the whole thing"}, {"start": 7838.96, "end": 7840.96, "text": " But I think it's pretty funny"}, {"start": 7840.96, "end": 7842.96, "text": " And you know, you get to appreciate a bit"}, {"start": 7842.96, "end": 7844.96, "text": " What working with neural nuts maybe"}, {"start": 7844.96, "end": 7846.96, "text": " Is like sometimes"}, {"start": 7846.96, "end": 7848.96, "text": " We are guilty of"}, {"start": 7848.96, "end": 7850.96, "text": " A common bug"}, {"start": 7850.96, "end": 7852.96, "text": " I've actually tweeted"}, {"start": 7852.96, "end": 7854.96, "text": " The most common neural nut mistakes"}, {"start": 7854.96, "end": 7856.96, "text": " A long time ago now"}, {"start": 7856.96, "end": 7858.96, "text": " And I'm not really"}, {"start": 7858.96, "end": 7860.96, "text": " Gonna explain any of these"}, {"start": 7860.96, "end": 7862.96, "text": " Except for we are guilty of number three"}, {"start": 7862.96, "end": 7864.96, "text": " You forgot to zero grad"}, {"start": 7864.96, "end": 7866.96, "text": " Before dot backward"}, {"start": 7866.96, "end": 
7868.96, "text": " What is that?"}, {"start": 7868.96, "end": 7870.96, "text": " Basically what's happening"}, {"start": 7870.96, "end": 7872.96, "text": " And it's a subtle bug and I'm not sure if you saw it"}, {"start": 7872.96, "end": 7874.96, "text": " Is that"}, {"start": 7874.96, "end": 7876.96, "text": " All of these"}, {"start": 7876.96, "end": 7878.96, "text": " Wates here have a dot data and a dot grad"}, {"start": 7878.96, "end": 7880.96, "text": " And dot grad starts at zero"}, {"start": 7880.96, "end": 7882.96, "text": " And then we do backward"}, {"start": 7882.96, "end": 7884.96, "text": " And we fill in the gradients"}, {"start": 7884.96, "end": 7886.96, "text": " And then we do an update on the data"}, {"start": 7886.96, "end": 7888.96, "text": " But we don't flush the grad"}, {"start": 7888.96, "end": 7890.96, "text": " It stays there"}, {"start": 7890.96, "end": 7892.96, "text": " So when we do the second"}, {"start": 7892.96, "end": 7894.96, "text": " Forward pass and we do backward again"}, {"start": 7894.96, "end": 7896.96, "text": " Remember that all the backward operations"}, {"start": 7896.96, "end": 7898.96, "text": " Do a plus equals on the grad"}, {"start": 7898.96, "end": 7900.96, "text": " And so these gradients"}, {"start": 7900.96, "end": 7904.96, "text": " Just add up and they never get reset to zero"}, {"start": 7904.96, "end": 7906.96, "text": " So basically we didn't zero grad"}, {"start": 7906.96, "end": 7908.96, "text": " So here's how we zero grad"}, {"start": 7908.96, "end": 7910.96, "text": " Before backward"}, {"start": 7910.96, "end": 7912.96, "text": " We need to iterate over all the parameters"}, {"start": 7912.96, "end": 7914.96, "text": " And we need to make sure that"}, {"start": 7914.96, "end": 7916.96, "text": " P dot grad is set to zero"}, {"start": 7916.96, "end": 7918.96, "text": " We need to reset it to zero"}, {"start": 7918.96, "end": 7920.96, "text": " Just like it is in the constructor"}, {"start": 7920.96, "end": 7922.96, "text": " So remember all the way here"}, {"start": 7922.96, "end": 7924.96, "text": " For all these value nodes"}, {"start": 7924.96, "end": 7926.96, "text": " Grad is reset to zero"}, {"start": 7926.96, "end": 7930.96, "text": " And then all these backward passes do a plus equals on that grad"}, {"start": 7930.96, "end": 7932.96, "text": " But we need to make sure that"}, {"start": 7932.96, "end": 7934.96, "text": " We reset these grads to zero"}, {"start": 7934.96, "end": 7936.96, "text": " So that when we do backward"}, {"start": 7936.96, "end": 7938.96, "text": " All of them start at zero"}, {"start": 7938.96, "end": 7940.96, "text": " And the actual backward pass accumulates"}, {"start": 7940.96, "end": 7944.96, "text": " The loss derivatives into the grads"}, {"start": 7944.96, "end": 7946.96, "text": " So this is zero grad in PyTorch"}, {"start": 7946.96, "end": 7948.96, "text": " And we will slightly"}, {"start": 7948.96, "end": 7952.96, "text": " We will get a slightly different optimization"}, {"start": 7952.96, "end": 7954.96, "text": " Let's reset the neural net"}, {"start": 7954.96, "end": 7956.96, "text": " The data is the same"}, {"start": 7956.96, "end": 7958.96, "text": " This is now I think correct"}, {"start": 7958.96, "end": 7960.96, "text": " And we get a much more"}, {"start": 7960.96, "end": 7962.96, "text": " You know we get a much more slower descent"}, {"start": 7962.96, "end": 7964.96, "text": " We still end up with pretty good results"}, {"start": 7964.96, "end": 7966.96, "text": " And 
we can continue this a bit more"}, {"start": 7966.96, "end": 7968.96, "text": " To get down lower"}, {"start": 7968.96, "end": 7970.96, "text": " And lower"}, {"start": 7970.96, "end": 7972.96, "text": " And lower"}, {"start": 7972.96, "end": 7974.96, "text": " Yeah"}, {"start": 7974.96, "end": 7976.96, "text": " So the only reason that the previous thing worked"}, {"start": 7976.96, "end": 7978.96, "text": " It's extremely buggy"}, {"start": 7978.96, "end": 7980.96, "text": " The only reason that worked"}, {"start": 7980.96, "end": 7982.96, "text": " Is that"}, {"start": 7982.96, "end": 7984.96, "text": " This is a very very simple problem"}, {"start": 7984.96, "end": 7988.96, "text": " And it's very easy for this neural net to fit this data"}, {"start": 7988.96, "end": 7990.96, "text": " And so the grads ended up accumulating"}, {"start": 7990.96, "end": 7994.96, "text": " And it effectively gave us a massive step size"}, {"start": 7994.96, "end": 7996.96, "text": " And it made us converge extremely fast"}, {"start": 7996.96, "end": 8000.96, "text": " But basically now we have to do more steps"}, {"start": 8000.96, "end": 8002.96, "text": " To get to very low values of loss"}, {"start": 8002.96, "end": 8004.96, "text": " And get why I pray to be really good"}, {"start": 8004.96, "end": 8006.96, "text": " We can try to step a bit greater"}, {"start": 8010.96, "end": 8012.96, "text": " Yeah"}, {"start": 8012.96, "end": 8016.96, "text": " We're going to get closer and closer to one minus one"}, {"start": 8016.96, "end": 8018.96, "text": " And one"}, {"start": 8018.96, "end": 8022.96, "text": " So we're going to do lots of sometimes tricky"}, {"start": 8022.96, "end": 8024.96, "text": " Because"}, {"start": 8024.96, "end": 8026.96, "text": " You may have lots of bugs in the code"}, {"start": 8026.96, "end": 8028.96, "text": " And your network might actually work"}, {"start": 8028.96, "end": 8030.96, "text": " Just like ours worked"}, {"start": 8030.96, "end": 8032.96, "text": " But chances are is that"}, {"start": 8032.96, "end": 8034.96, "text": " We had a more complex problem"}, {"start": 8034.96, "end": 8038.96, "text": " Than actually this bug would have made us not optimize the loss very well"}, {"start": 8038.96, "end": 8040.96, "text": " And we were only able to get away with it because"}, {"start": 8040.96, "end": 8042.96, "text": " The problem is very simple"}, {"start": 8042.96, "end": 8044.96, "text": " So let's now bring everything together"}, {"start": 8044.96, "end": 8046.96, "text": " And summarize what we learned"}, {"start": 8046.96, "end": 8048.96, "text": " What are neural nets?"}, {"start": 8048.96, "end": 8050.96, "text": " Neural nets are these mathematical expressions"}, {"start": 8050.96, "end": 8052.96, "text": " They're really simple mathematical expressions"}, {"start": 8052.96, "end": 8054.96, "text": " In case of multi-layer perceptron"}, {"start": 8054.96, "end": 8056.96, "text": " That take"}, {"start": 8056.96, "end": 8058.96, "text": " Input as the data"}, {"start": 8058.96, "end": 8060.96, "text": " And they take input the weights and the parameters of the neural net"}, {"start": 8060.96, "end": 8062.96, "text": " Mathematical expression for the forward pass"}, {"start": 8062.96, "end": 8064.96, "text": " Followed by a loss function"}, {"start": 8064.96, "end": 8068.96, "text": " And the loss function tries to measure the accuracy of the predictions"}, {"start": 8068.96, "end": 8070.96, "text": " And usually the loss will be low"}, {"start": 8070.96, "end": 
8072.96, "text": " When your predictions are matching your targets"}, {"start": 8072.96, "end": 8074.96, "text": " Or where the network is basically behaving well"}, {"start": 8074.96, "end": 8078.96, "text": " So we manipulate the loss function so that when the loss is low"}, {"start": 8078.96, "end": 8082.96, "text": " The network is doing what you wanted to do on your problem"}, {"start": 8082.96, "end": 8084.96, "text": " And then we backward the loss"}, {"start": 8084.96, "end": 8086.96, "text": " Use back propagation to get the gradient"}, {"start": 8086.96, "end": 8088.96, "text": " And then we know how to tune all the parameters to"}, {"start": 8088.96, "end": 8090.96, "text": " Decrease the loss locally"}, {"start": 8090.96, "end": 8092.96, "text": " But then we have to iterate that process many times"}, {"start": 8092.96, "end": 8094.96, "text": " In what's called the gradient descent"}, {"start": 8094.96, "end": 8096.96, "text": " So we simply follow the gradient information"}, {"start": 8096.96, "end": 8098.96, "text": " And that minimizes the loss"}, {"start": 8098.96, "end": 8100.96, "text": " And the loss is arranged so that when the loss is minimized"}, {"start": 8100.96, "end": 8102.96, "text": " The network is doing what you want it to do"}, {"start": 8102.96, "end": 8106.96, "text": " And yeah, so we just have a blob of neural stuff"}, {"start": 8106.96, "end": 8108.96, "text": " And we can make it do arbitrary things"}, {"start": 8108.96, "end": 8110.96, "text": " And that's what gives neural nets their power"}, {"start": 8110.96, "end": 8114.96, "text": " It's, you know, this is a very tiny network with 41 parameters"}, {"start": 8114.96, "end": 8118.96, "text": " But you can build significantly more complicated neural nets"}, {"start": 8118.96, "end": 8122.96, "text": " With billions at this point, almost trillions of parameters"}, {"start": 8122.96, "end": 8124.96, "text": " And it's a massive blob of neural tissue"}, {"start": 8124.96, "end": 8126.96, "text": " Simulated neural tissue"}, {"start": 8126.96, "end": 8128.96, "text": " Roughly speaking"}, {"start": 8128.96, "end": 8132.96, "text": " And you can make it do extremely complex problems"}, {"start": 8132.96, "end": 8134.96, "text": " And these neural nets then have all kinds of very fascinating"}, {"start": 8134.96, "end": 8136.96, "text": " Emergent properties"}, {"start": 8136.96, "end": 8140.96, "text": " In when you try to make them do significantly hard problems"}, {"start": 8140.96, "end": 8142.96, "text": " So you can make it more difficult"}, {"start": 8142.96, "end": 8144.96, "text": " To do significantly hard problems"}, {"start": 8144.96, "end": 8146.96, "text": " As in the case of GPT for example"}, {"start": 8146.96, "end": 8150.96, "text": " We have massive amounts of text from the internet"}, {"start": 8150.96, "end": 8152.96, "text": " And we're trying to get a neural net to predict"}, {"start": 8152.96, "end": 8154.96, "text": " To take like a few words"}, {"start": 8154.96, "end": 8156.96, "text": " And try to predict the next word in a sequence"}, {"start": 8156.96, "end": 8158.96, "text": " That's the learning problem"}, {"start": 8158.96, "end": 8160.96, "text": " And it turns out that when you train this on all of the internet"}, {"start": 8160.96, "end": 8162.96, "text": " The neural net actually has like really remarkable"}, {"start": 8162.96, "end": 8164.96, "text": " Emergent properties"}, {"start": 8164.96, "end": 8166.96, "text": " But that neural net would have hundreds of 
billions of parameters"}, {"start": 8166.96, "end": 8170.96, "text": " But it works on fundamentally these exact same principles"}, {"start": 8170.96, "end": 8172.96, "text": " And the neural net of course will be a bit more complex"}, {"start": 8172.96, "end": 8174.96, "text": " But otherwise the"}, {"start": 8174.96, "end": 8176.96, "text": " Evaluating the gradient is there"}, {"start": 8176.96, "end": 8178.96, "text": " And it would be identical"}, {"start": 8178.96, "end": 8180.96, "text": " And the gradient descent would be there"}, {"start": 8180.96, "end": 8182.96, "text": " And would be basically identical"}, {"start": 8182.96, "end": 8184.96, "text": " But people usually use slightly different updates"}, {"start": 8184.96, "end": 8188.96, "text": " This is a very simple stochastic gradient sent update"}, {"start": 8188.96, "end": 8190.96, "text": " And the last function would not be an eSquared error"}, {"start": 8190.96, "end": 8192.96, "text": " They would be using something called the cross-entropy loss"}, {"start": 8192.96, "end": 8194.96, "text": " For predicting the next token"}, {"start": 8194.96, "end": 8196.96, "text": " So there's a few more details"}, {"start": 8196.96, "end": 8198.96, "text": " But fundamentally the neural network setup"}, {"start": 8198.96, "end": 8200.96, "text": " Or training is identical and pervasive"}, {"start": 8200.96, "end": 8202.96, "text": " And now you understand intuitively"}, {"start": 8202.96, "end": 8204.96, "text": " How that works under the hood"}, {"start": 8204.96, "end": 8206.96, "text": " In the beginning of this video"}, {"start": 8206.96, "end": 8208.96, "text": " I told you that by the end of it"}, {"start": 8208.96, "end": 8210.96, "text": " You would understand everything in micrograd"}, {"start": 8210.96, "end": 8212.96, "text": " And then we'd slowly build it up"}, {"start": 8212.96, "end": 8214.96, "text": " Let me briefly prove that to you"}, {"start": 8214.96, "end": 8216.96, "text": " So I'm going to start through all the code that is in micrograd"}, {"start": 8216.96, "end": 8218.96, "text": " As of today"}, {"start": 8218.96, "end": 8220.96, "text": " Actually, potentially some of the code will change"}, {"start": 8220.96, "end": 8222.96, "text": " By the time you watch this video"}, {"start": 8222.96, "end": 8224.96, "text": " Because I intend to continue developing micrograd"}, {"start": 8224.96, "end": 8226.96, "text": " But let's look at what we have so far at least"}, {"start": 8226.96, "end": 8228.96, "text": " When you go to engine.py that has the value"}, {"start": 8228.96, "end": 8230.96, "text": " Everything here you should mostly recognize"}, {"start": 8230.96, "end": 8232.96, "text": " So we have the dead.data.grad attributes"}, {"start": 8232.96, "end": 8234.96, "text": " We have the backward function"}, {"start": 8234.96, "end": 8236.96, "text": " We have the previous set of children"}, {"start": 8236.96, "end": 8238.96, "text": " And the operation that produced this value"}, {"start": 8238.96, "end": 8242.96, "text": " We have addition, multiplication, and raising to a scalar power"}, {"start": 8242.96, "end": 8244.96, "text": " We have the Rellou nonlinearity"}, {"start": 8244.96, "end": 8246.96, "text": " Which is slightly different type of nonlinearity than 10H"}, {"start": 8246.96, "end": 8248.96, "text": " That we used in this video"}, {"start": 8248.96, "end": 8250.96, "text": " Both of them are nonlinearties"}, {"start": 8250.96, "end": 8252.96, "text": " And notably 10H is not actually present 
in micrograd"}, {"start": 8252.96, "end": 8254.96, "text": " As of right now"}, {"start": 8254.96, "end": 8256.96, "text": " But I intend to add it later"}, {"start": 8256.96, "end": 8258.96, "text": " With the backward, which is identical"}, {"start": 8258.96, "end": 8260.96, "text": " And then all of these other operations"}, {"start": 8260.96, "end": 8262.96, "text": " Which are built up on top of operations here"}, {"start": 8262.96, "end": 8264.96, "text": " So value should be very recognizable"}, {"start": 8264.96, "end": 8266.96, "text": " Except for the nonlinearity used in this video"}, {"start": 8266.96, "end": 8268.96, "text": " There's no massive difference between Rellou and 10H"}, {"start": 8268.96, "end": 8270.96, "text": " And sigmoid and these other nonlinearties"}, {"start": 8270.96, "end": 8272.96, "text": " They're all roughly equivalent and can be used in MLPs"}, {"start": 8272.96, "end": 8274.96, "text": " So I use 10H because it's a bit smoother"}, {"start": 8274.96, "end": 8276.96, "text": " And because it's a little bit more complicated than Rellou"}, {"start": 8276.96, "end": 8278.96, "text": " And therefore it's stressed a little bit more"}, {"start": 8278.96, "end": 8280.96, "text": " And therefore it's stressed a little bit more"}, {"start": 8280.96, "end": 8282.96, "text": " And therefore it's stressed a little bit more"}, {"start": 8282.96, "end": 8284.96, "text": " The local gradients and working with those derivatives"}, {"start": 8284.96, "end": 8286.96, "text": " Which I probably would be useful"}, {"start": 8286.96, "end": 8290.96, "text": " And then the pi is the neural networks library as I mentioned"}, {"start": 8290.96, "end": 8292.96, "text": " So you should recognize identical implementation of your own"}, {"start": 8292.96, "end": 8294.96, "text": " Layer and MLP"}, {"start": 8294.96, "end": 8296.96, "text": " Notably, for not so much"}, {"start": 8296.96, "end": 8298.96, "text": " We have a class module here"}, {"start": 8298.96, "end": 8300.96, "text": " There's a parent class of all these modules"}, {"start": 8300.96, "end": 8304.96, "text": " I did that because there's an end-up module class in PyTorch"}, {"start": 8304.96, "end": 8306.96, "text": " And so this exactly matches that API"}, {"start": 8306.96, "end": 8310.96, "text": " And end-up module in PyTorch has also a zero-brad"}, {"start": 8310.96, "end": 8312.96, "text": " Which I refactored out here"}, {"start": 8312.96, "end": 8314.96, "text": " So that's the end of micro-grad really"}, {"start": 8314.96, "end": 8316.96, "text": " Then there's a test"}, {"start": 8316.96, "end": 8318.96, "text": " Which you'll see basically creates two chunks of code"}, {"start": 8318.96, "end": 8320.96, "text": " One in micro-grad and one in PyTorch"}, {"start": 8320.96, "end": 8324.96, "text": " And we'll make sure that the forward and the backward pass agree identically"}, {"start": 8324.96, "end": 8326.96, "text": " For a slightly less complicated expression"}, {"start": 8326.96, "end": 8328.96, "text": " A slightly more complicated expression"}, {"start": 8328.96, "end": 8330.96, "text": " Everything agrees"}, {"start": 8330.96, "end": 8332.96, "text": " So we agree with PyTorch on all of these operations"}, {"start": 8332.96, "end": 8334.96, "text": " And finally there's a demo that IpyY"}, {"start": 8334.96, "end": 8336.96, "text": " And finally there's a demo that IpyY"}, {"start": 8336.96, "end": 8338.96, "text": " And finally there's a demo that IpyY"}, {"start": 8338.96, "end": 8340.96, 
"text": " And finally there's a demo that IpyY and B here"}, {"start": 8340.96, "end": 8344.96, "text": " And it's a bit more complicated binary classification demo than the one I covered in this lecture"}, {"start": 8344.96, "end": 8348.96, "text": " So we only had a tiny data set of for examples"}, {"start": 8348.96, "end": 8350.96, "text": " Here we have a bit more complicated example"}, {"start": 8350.96, "end": 8352.96, "text": " With lots of blue points and lots of red points"}, {"start": 8352.96, "end": 8356.96, "text": " And we're trying to again build a binary classifier to distinguish"}, {"start": 8356.96, "end": 8358.96, "text": " Two dimensional points as red or blue"}, {"start": 8358.96, "end": 8362.96, "text": " It's a bit more complicated MLP here with it's a bigger MLP"}, {"start": 8362.96, "end": 8364.96, "text": " The loss is a bit more complicated because"}, {"start": 8364.96, "end": 8366.96, "text": " It supports batches"}, {"start": 8366.96, "end": 8368.96, "text": " It supports batches"}, {"start": 8368.96, "end": 8370.96, "text": " So because our data set was so tiny"}, {"start": 8370.96, "end": 8374.96, "text": " We always did a forward pass on the entire data set of for examples"}, {"start": 8374.96, "end": 8376.96, "text": " But when your data set is like a million examples"}, {"start": 8376.96, "end": 8378.96, "text": " What we usually do in practice is we"}, {"start": 8378.96, "end": 8380.96, "text": " We basically pick out some random subset"}, {"start": 8380.96, "end": 8382.96, "text": " We call that a batch"}, {"start": 8382.96, "end": 8384.96, "text": " And then we only process the batch"}, {"start": 8384.96, "end": 8386.96, "text": " Forward, backward, and update"}, {"start": 8386.96, "end": 8388.96, "text": " So we don't have to forward the entire training set"}, {"start": 8388.96, "end": 8390.96, "text": " So this supports batching"}, {"start": 8390.96, "end": 8392.96, "text": " Because there's a lot more examples here"}, {"start": 8392.96, "end": 8394.96, "text": " We do a forward pass"}, {"start": 8394.96, "end": 8396.96, "text": " The loss is slightly more different"}, {"start": 8396.96, "end": 8398.96, "text": " This is a max margin loss that I implement here"}, {"start": 8398.96, "end": 8402.96, "text": " The one that we used was the mean squared error loss"}, {"start": 8402.96, "end": 8404.96, "text": " Because it's the simplest one"}, {"start": 8404.96, "end": 8406.96, "text": " There's also the binary cross-entropy loss"}, {"start": 8406.96, "end": 8408.96, "text": " All of them can be used for binary classification"}, {"start": 8408.96, "end": 8410.96, "text": " And don't make too much of a difference"}, {"start": 8410.96, "end": 8412.96, "text": " In the simple examples that we looked at so far"}, {"start": 8412.96, "end": 8414.96, "text": " There's something called"}, {"start": 8414.96, "end": 8416.96, "text": " Alt-to-regularization used here"}, {"start": 8416.96, "end": 8418.96, "text": " This has to do with generalization of the neural net"}, {"start": 8418.96, "end": 8422.96, "text": " And controls the overfitting in machine learning setting"}, {"start": 8422.96, "end": 8424.96, "text": " So I did not cover these concepts in this video"}, {"start": 8424.96, "end": 8426.96, "text": " Potentially later"}, {"start": 8426.96, "end": 8428.96, "text": " And the training loop you should recognize"}, {"start": 8428.96, "end": 8430.96, "text": " So forward, backward, with, zero grad"}, {"start": 8430.96, "end": 8432.96, "text": " And update, and so 
on"}, {"start": 8432.96, "end": 8434.96, "text": " You'll notice that in the update here"}, {"start": 8434.96, "end": 8438.96, "text": " The learning rate is scaled as a function of number of iterations"}, {"start": 8438.96, "end": 8440.96, "text": " And it shrinks"}, {"start": 8440.96, "end": 8442.96, "text": " And this is something called learning rate decay"}, {"start": 8442.96, "end": 8444.96, "text": " So in the beginning you have a high learning rate"}, {"start": 8444.96, "end": 8448.96, "text": " And as the network sort of stabilizes near the end"}, {"start": 8448.96, "end": 8450.96, "text": " You bring down the learning rate to get some of the fine details in the end"}, {"start": 8450.96, "end": 8454.96, "text": " And in the end we see the decision surface of the neural net"}, {"start": 8454.96, "end": 8458.96, "text": " And we see that it learns to separate out the red and the blue area"}, {"start": 8458.96, "end": 8460.96, "text": " Based on the data points"}, {"start": 8460.96, "end": 8462.96, "text": " So that's a slightly more complicated example"}, {"start": 8462.96, "end": 8464.96, "text": " And the demo that I buy by YMB"}, {"start": 8464.96, "end": 8466.96, "text": " That you're free to go over"}, {"start": 8466.96, "end": 8468.96, "text": " But yeah, as of today, that is micro-grad"}, {"start": 8468.96, "end": 8470.96, "text": " I also wanted to show you a little bit of real stuff"}, {"start": 8470.96, "end": 8472.96, "text": " So that you get to see how this is actually implemented"}, {"start": 8472.96, "end": 8474.96, "text": " In the production grade library like by torch"}, {"start": 8474.96, "end": 8476.96, "text": " So in particular, I wanted to show"}, {"start": 8476.96, "end": 8478.96, "text": " I wanted to find and show you"}, {"start": 8478.96, "end": 8480.96, "text": " The backward password 10H in PyTorch"}, {"start": 8480.96, "end": 8482.96, "text": " So here in micro-grad"}, {"start": 8482.96, "end": 8484.96, "text": " We see that the backward password 10H is"}, {"start": 8484.96, "end": 8486.96, "text": " 1-t square"}, {"start": 8486.96, "end": 8488.96, "text": " Where t is the output of the 10H of X"}, {"start": 8488.96, "end": 8490.96, "text": " Times of that grad which is the chain rule"}, {"start": 8490.96, "end": 8494.96, "text": " So we're looking for something that looks like this"}, {"start": 8494.96, "end": 8498.96, "text": " Now I went to PyTorch"}, {"start": 8498.96, "end": 8500.96, "text": " Which has open source GitHub code base"}, {"start": 8500.96, "end": 8502.96, "text": " And I looked through a lot of its code"}, {"start": 8502.96, "end": 8504.96, "text": " And honestly, I wanted to show you"}, {"start": 8504.96, "end": 8508.96, "text": " I looked through a lot of its code and honestly"}, {"start": 8508.96, "end": 8510.96, "text": " I spent about 15 minutes and I couldn't find 10H"}, {"start": 8510.96, "end": 8512.96, "text": " And that's because these libraries"}, {"start": 8512.96, "end": 8514.96, "text": " Unfortunately, they grow in size and entropy"}, {"start": 8514.96, "end": 8516.96, "text": " And if you just search for 10H"}, {"start": 8516.96, "end": 8518.96, "text": " You get apparently 2,800 results"}, {"start": 8518.96, "end": 8520.96, "text": " And 400 and 406 files"}, {"start": 8520.96, "end": 8524.96, "text": " So I don't know what these files are doing honestly"}, {"start": 8524.96, "end": 8528.96, "text": " And why there are so many mentions of 10H"}, {"start": 8528.96, "end": 8530.96, "text": " But unfortunately, these 
libraries are quite complex"}, {"start": 8530.96, "end": 8532.96, "text": " They're meant to be used"}, {"start": 8532.96, "end": 8534.96, "text": " Not really inspected"}, {"start": 8534.96, "end": 8536.96, "text": " Eventually, I did stumble on someone"}, {"start": 8536.96, "end": 8540.96, "text": " Who tries to change the 10H"}, {"start": 8540.96, "end": 8542.96, "text": " Backward code for some reason"}, {"start": 8542.96, "end": 8544.96, "text": " And someone here pointed to the CPU kernel"}, {"start": 8544.96, "end": 8546.96, "text": " And the CUDA kernel for 10H backward"}, {"start": 8546.96, "end": 8550.96, "text": " So this basically depends on if you're using PyTorch"}, {"start": 8550.96, "end": 8552.96, "text": " On the CPU device or on the GPU"}, {"start": 8552.96, "end": 8554.96, "text": " Which these are different devices and I haven't covered this"}, {"start": 8554.96, "end": 8556.96, "text": " But this is the 10H backward kernel"}, {"start": 8556.96, "end": 8558.96, "text": " For CPU"}, {"start": 8558.96, "end": 8560.96, "text": " And the reason it's so large"}, {"start": 8560.96, "end": 8562.96, "text": " Is that a number one"}, {"start": 8562.96, "end": 8564.96, "text": " This is like if you're using a complex type"}, {"start": 8564.96, "end": 8566.96, "text": " Which we haven't even talked about"}, {"start": 8566.96, "end": 8568.96, "text": " If you're using a specific data type of B-float 16"}, {"start": 8568.96, "end": 8570.96, "text": " Which we haven't talked about"}, {"start": 8570.96, "end": 8572.96, "text": " And then if you're not"}, {"start": 8572.96, "end": 8574.96, "text": " Then this is the kernel and deep here"}, {"start": 8574.96, "end": 8576.96, "text": " We see something that resembles our backward pass"}, {"start": 8576.96, "end": 8580.96, "text": " So they have 8 times 1 minus B-square"}, {"start": 8580.96, "end": 8582.96, "text": " So this B here"}, {"start": 8582.96, "end": 8584.96, "text": " Must be the output of the 10H"}, {"start": 8584.96, "end": 8586.96, "text": " And this is the out.grad"}, {"start": 8586.96, "end": 8588.96, "text": " So here we found it"}, {"start": 8588.96, "end": 8590.96, "text": " Deep inside"}, {"start": 8590.96, "end": 8592.96, "text": " PyTorch and this location"}, {"start": 8592.96, "end": 8594.96, "text": " For some reason inside binary ops kernel"}, {"start": 8594.96, "end": 8596.96, "text": " When 10H is not actually binary op"}, {"start": 8596.96, "end": 8600.96, "text": " And then this is the GPU kernel"}, {"start": 8600.96, "end": 8602.96, "text": " We're not complex"}, {"start": 8602.96, "end": 8604.96, "text": " We're here"}, {"start": 8604.96, "end": 8606.96, "text": " And here we go with online opcode"}, {"start": 8606.96, "end": 8608.96, "text": " So we did find it"}, {"start": 8608.96, "end": 8610.96, "text": " But basically unfortunately"}, {"start": 8610.96, "end": 8612.96, "text": " These code bases are very large"}, {"start": 8612.96, "end": 8614.96, "text": " And micrograd is very very simple"}, {"start": 8614.96, "end": 8616.96, "text": " But if you actually"}, {"start": 8616.96, "end": 8618.96, "text": " Want to use real stuff"}, {"start": 8618.96, "end": 8620.96, "text": " Finding the code for it"}, {"start": 8620.96, "end": 8622.96, "text": " You'll actually find that difficult"}, {"start": 8622.96, "end": 8624.96, "text": " I also wanted to show you"}, {"start": 8624.96, "end": 8626.96, "text": " The whole example here"}, {"start": 8626.96, "end": 8628.96, "text": " Where PyTorch is showing you how 
you can"}, {"start": 8628.96, "end": 8630.96, "text": " Register a new type of function"}, {"start": 8630.96, "end": 8632.96, "text": " That you want to add to PyTorch"}, {"start": 8632.96, "end": 8634.96, "text": " As a LEGO building block"}, {"start": 8634.96, "end": 8636.96, "text": " So here if you want to for example add"}, {"start": 8636.96, "end": 8638.96, "text": " A like jonder polynomial"}, {"start": 8638.96, "end": 8640.96, "text": " 3"}, {"start": 8640.96, "end": 8642.96, "text": " Here's how you can do it"}, {"start": 8642.96, "end": 8644.96, "text": " You will register it"}, {"start": 8644.96, "end": 8646.96, "text": " And then you have to tell PyTorch"}, {"start": 8646.96, "end": 8648.96, "text": " How to forward your new function"}, {"start": 8648.96, "end": 8650.96, "text": " And how to backward through it"}, {"start": 8650.96, "end": 8652.96, "text": " So as long as you can do the forward pass"}, {"start": 8652.96, "end": 8654.96, "text": " Of this little function piece that you want to add"}, {"start": 8654.96, "end": 8656.96, "text": " And as long as you know"}, {"start": 8656.96, "end": 8658.96, "text": " The local derivatives"}, {"start": 8658.96, "end": 8660.96, "text": " Local gradients which are implemented in the"}, {"start": 8660.96, "end": 8662.96, "text": " Backward PyTorch will be able to"}, {"start": 8662.96, "end": 8664.96, "text": " Backpropagate through your function"}, {"start": 8664.96, "end": 8666.96, "text": " And then you can use this as a LEGO block"}, {"start": 8666.96, "end": 8668.96, "text": " In a larger LEGO castle"}, {"start": 8668.96, "end": 8670.96, "text": " Of all the different LEGO blocks that PyTorch already has"}, {"start": 8670.96, "end": 8672.96, "text": " And so that's the only thing you have to tell PyTorch"}, {"start": 8672.96, "end": 8674.96, "text": " And everything would just work"}, {"start": 8674.96, "end": 8676.96, "text": " And you can register new types of functions"}, {"start": 8676.96, "end": 8678.96, "text": " In this way following this example"}, {"start": 8678.96, "end": 8680.96, "text": " And that is everything that I wanted to cover in this lecture"}, {"start": 8680.96, "end": 8682.96, "text": " So I hope you enjoyed building out micro-grad"}, {"start": 8682.96, "end": 8684.96, "text": " With me, I hope you find it interesting"}, {"start": 8684.96, "end": 8686.96, "text": " And insightful"}, {"start": 8686.96, "end": 8688.96, "text": " And yeah, I will post a lot of the links"}, {"start": 8688.96, "end": 8690.96, "text": " That are related to this video"}, {"start": 8690.96, "end": 8692.96, "text": " In the video description below"}, {"start": 8692.96, "end": 8694.96, "text": " I will also probably post a link to a discussion forum"}, {"start": 8694.96, "end": 8696.96, "text": " Or discussion group"}, {"start": 8696.96, "end": 8698.96, "text": " Where you can ask questions related to this video"}, {"start": 8698.96, "end": 8700.96, "text": " And then I can answer"}, {"start": 8700.96, "end": 8702.96, "text": " Or someone else can answer your questions"}, {"start": 8702.96, "end": 8704.96, "text": " And I may also do a follow-up video"}, {"start": 8704.96, "end": 8706.96, "text": " That answers some of the most common questions"}, {"start": 8706.96, "end": 8708.96, "text": " But for now, that's it"}, {"start": 8708.96, "end": 8710.96, "text": " I hope you enjoyed it"}, {"start": 8710.96, "end": 8712.96, "text": " If you did, then please like and subscribe"}, {"start": 8712.96, "end": 8714.96, "text": " So that YouTube 
knows to feature this video to more people"}, {"start": 8714.96, "end": 8716.96, "text": " And that's it for now"}, {"start": 8716.96, "end": 8718.96, "text": " I'll see you later"}, {"start": 8722.96, "end": 8724.96, "text": " Now here's the problem"}, {"start": 8724.96, "end": 8726.96, "text": " We know DL by"}, {"start": 8726.96, "end": 8728.96, "text": " Wait, what is the problem"}, {"start": 8728.96, "end": 8730.96, "text": " And that's everything I wanted to cover in this lecture"}, {"start": 8730.96, "end": 8732.96, "text": " So I hope you enjoyed"}, {"start": 8732.96, "end": 8734.96, "text": " Us building out micro-grad"}, {"start": 8734.96, "end": 8736.96, "text": " Micro-grad"}, {"start": 8736.96, "end": 8738.96, "text": " Okay now let's do the exact same thing for multiple"}, {"start": 8738.96, "end": 8740.96, "text": " Because we can't do something like"}, {"start": 8740.96, "end": 8742.96, "text": " Eight times two"}, {"start": 8742.96, "end": 8744.96, "text": " Oops"}, {"start": 8744.96, "end": 8746.96, "text": " I know what happened there"}, {"start": 8746.96, "end": 8748.96, "text": " I know what happened there"}, {"start": 8748.96, "end": 8750.96, "text": " I know what happened there"}, {"start": 8750.96, "end": 8760.96, "text": " I know what happened there"}]
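The closing segments of this transcript describe PyTorch's extension point for new autograd operations: if you can compute the forward pass of your new function and supply its local derivative in a backward pass, autograd can backpropagate through it like any built-in block. Below is a minimal sketch of that idea using torch.autograd.Function, in the spirit of the Legendre polynomial (P3) example the transcript mentions; the class name and the test values are illustrative and not taken from the lecture notebook.

```python
# Minimal sketch of registering a custom differentiable function with torch.autograd.Function.
# Forward computes the output; backward supplies the local gradient so autograd can chain it.
import torch


class LegendrePolynomial3(torch.autograd.Function):
    """P3(x) = 0.5 * (5x^3 - 3x), with a hand-written backward pass."""

    @staticmethod
    def forward(ctx, x):
        # Compute the output and stash whatever backward() will need.
        ctx.save_for_backward(x)
        return 0.5 * (5 * x ** 3 - 3 * x)

    @staticmethod
    def backward(ctx, grad_output):
        # Local derivative dP3/dx = 1.5 * (5x^2 - 1), chained with the incoming gradient.
        (x,) = ctx.saved_tensors
        return grad_output * 1.5 * (5 * x ** 2 - 1)


# Usage: the registered function behaves like any other building block in the graph.
x = torch.linspace(-1.0, 1.0, steps=5, requires_grad=True)
y = LegendrePolynomial3.apply(x)
y.sum().backward()
print(x.grad)  # gradients produced by our custom backward()
```

Calling LegendrePolynomial3.apply(x) inserts the custom node into the autograd graph, so x.grad is filled in by the backward() defined above rather than by an autograd-derived rule.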
Diana Uribe
https://www.youtube.com/watch?v=dayDTsaM1Gc
Drive My Car
#miercolesdecine #drivemycar #mubi Imagínense que nuestro patrocinador de #Miércolesdecine que ahora es @MUBI, nos regaló 30 días gratis de cine en su plataforma para todos ustedes, solo debemos ingresar a mubi.com/dianauribe en el siguiente link y ya, podrán disfrutar de una película nueva todos los días. →https://mubi.com/dianauribe?utm_source=social%20channels&utm_medium=influencer&utm_campaign=comiercolesdecine Drive My Car Adaptación del relato corto de Haruki Murakami, Drive My Car sigue a Yusuke Kafuku (HidetoshiNishijima), un actor y director de teatro, quien está felizmente casado con la guionista Oto (ReikaKirishima). Sin embargo, Oto muere repentinamente tras dejar atrás un secreto. Dos años más tarde,Kafuku, aún incapaz de superar del todo la pérdida de su esposa, recibe una oferta para dirigir una obraen un festival de teatro y se dirige a Hiroshima en su coche. ¡Síguenos en nuestras Redes Sociales! Facebook: https://www.facebook.com/DianaUribe.fm/ Instagram: https://www.instagram.com/dianauribefm/?hl=es-la Twitter: https://twitter.com/dianauribefm?lang=es Pagina web: https://www.dianauribe.fm
Buenas, les cuento una historia hoy en mi el cole de cine después de 4 años tenemos un patrocinador Mubi hemos tenido apoyos pero ahora Mubi nos va a patrocinar en este en esta película y ustedes van a poder ver durante un mes películas de la vanguardia más moderna de las clásicas más fantásticas lo pueden ver es muy y de leitarse con un excelente cine cine arte es lo que hay en muy de todos los películas maravillos entonces imagínese que es muy una película que me fue sorprendiendo me fue sorprendiendo me fue sorprendiendo se llama dry my car película también nominada los car en el ahorita en el 2021 basada en un cuento de Murakami de la autor japonés la película va tomando un giro cada vez más fuerte y más poderoso es una historia que como comienza no apuntas a la profundidad de lo que nos va a tocar K Y a la LL las obras de Teatro en Casetz, lo cual es un detalle de fina coquetería de la película de una sociedad tan supremamente avanzada, tecnógicamente como la sociedad japonesa y transcurre entre ellos el tiempo y se van reconfigurando como quiera que eso puede hacer y él suele manejar y disfruta mucho manejar que le permite la intimidad, el acceso a las voces que ella le graba permanentemente de la obra de Shekhov que le está siempre montando y pronto la muerte de ella lo pone en una situación totalmente compleja y de todas maneras acepta ser director invitado en Hiroshima y allá presentar la obra cuando llega Hiroshima hay una mujer que va a conducir para él porque está prohibido que los directores y que los actores conduzcan su propio vehículo por un tema de seguridad porque han vivido situaciones muy difíciles y es una norma de la compañía de Teatro hasta aquí es una historia cotidiana con dolores profundos pero es una historia cotidiana, entonces él va a poner en escena como director invitado va a ser todo el casting para crear la obra de Shekhov del Tio Vania y esta mujer muy introvertida casi invisible maneja para él hasta ahí va planteada la historia digamos como parte de lo que es la vida pero esto va cogiendo un nivel de profundidad los personajes entran en unos bórtices psicológicos afectivos tan potentes que la película de pronto te lleva para la os desconocidos de inimaginables de lo profundo de los avismos de la naturaleza humana y ahí es donde se pone realmente poderosa la película porque cada uno de los personajes incluida la chica que maneja y algunos de los actores con los que estará invogado por situaciones anteriores del pasado lo van a llevar a la profundidad del abismo del alma de ella porque en el final el secreto es el corazón indecifarable de otó su mujer y hay una frase que es absolutamente maravillosa que dice nunca podríamos llegar a conocer el corazón de la persona que amamos por mucho que lleguemos a marla jamás llegaremos a conocer su corazón lo único que podremos conocer es el nuestro propio es lo único que nos he estado conocer nuestro propio corazón esa es la frase a veces ni siquiera eso pero en la en la frase dice lo único que si nos he estado es conocer nuestro propio corazón y empieza a presentarse una vez muy sondable de secretos, de emociones, de de profundos perfiles psicológicos de los personajes con los que él está haciendo la obra de la la búsqueda y el secreto del corazón de la mujer con la que compartió tantos años de la obra de Shekhov que en sí misma invita a todos estos abismos de la alma humana desde la profundidad de la pluma y del teatro ruso y desde la mirada de él que reconociendo toda esta diversidad de profundidades se reconstruye 
desde la solidaridad con una nueva mirada del mundo gracias a su compañía con la mujer que camina que la que me le maneja el cargo a él hay un personaje sobre el mundo, ese personaje sobre el mundo que es parte del casting va a tener un toque, poético, profundo, subtil, poderoso dentro de la engarana de la obra y la película porque estamos narrando la obra, estamos narrando la película, estamos narrando la biografía, estamos narrando su corazón, estamos manejando un carro y estamos mirando las profundidades de las almas de las personas que están involucradas en este esquícito, este delicioso profundo y poderoso relato de Murakami entonces drive my car, maneja mi carro, resulta siendo una película complejísima, es de tres horas al principio parece que se acabara y no se va a acabar, vais a empezar y de ahí en adelante uno sigue manejando y eso aporiendo muy tenaz y se aporiendo muy tenaz hasta que termine enfrentándolo a uno a los más grandes secretos insondables de la alma humana que nunca son ajenos a la alma propia, cuando se ve reflejado en estos puntos en donde se toma la naturaleza humana en su complejidad profunda es una película interior, es una película que nos muestra este Japón perfecto en toda su dimensión así donde los diseños son maravillosos y cinematográficamente es perfecta los espacios, los diseños, todo es de una estética minimalista perfectamente diseñada como en estas películas nada parece casual, todo es realmente programado y viajado infotográficamente es muy bella y es cotidiana y la cotidiana y va tomando una intensidad y la intensidad va tomando una profundidad y la profundidad va tomando una lectura del corazón humano y una película totalmente diferente a aquella en la que te embarcaro es en la que vas a terminar metido tú y tu propio corazón hoy en mi alcool es decir, drive my car en un video una película profunda que te mueve el alma
[{"start": 0.0, "end": 20.72, "text": " Buenas, les cuento una historia hoy en mi el cole de cine despu\u00e9s de 4 a\u00f1os tenemos un patrocinador"}, {"start": 20.72, "end": 31.52, "text": " Mubi hemos tenido apoyos pero ahora Mubi nos va a patrocinar en este en esta pel\u00edcula y ustedes van a poder ver"}, {"start": 31.52, "end": 42.56, "text": " durante un mes pel\u00edculas de la vanguardia m\u00e1s moderna de las cl\u00e1sicas m\u00e1s fant\u00e1sticas lo pueden"}, {"start": 42.56, "end": 50.52, "text": " ver es muy y de leitarse con un excelente cine cine arte es lo que hay en muy de todos los"}, {"start": 50.52, "end": 59.800000000000004, "text": " pel\u00edculas maravillos"}, {"start": 59.800000000000004, "end": 66.0, "text": " entonces imag\u00ednese que es muy una pel\u00edcula que me fue sorprendiendo me fue sorprendiendo me fue"}, {"start": 66.0, "end": 74.96000000000001, "text": " sorprendiendo se llama dry my car pel\u00edcula tambi\u00e9n nominada los car en el ahorita en el 2021 basada en un"}, {"start": 74.96000000000001, "end": 83.84, "text": " cuento de Murakami de la autor japon\u00e9s la pel\u00edcula va tomando un giro cada vez m\u00e1s fuerte y m\u00e1s"}, {"start": 83.84, "end": 99.76, "text": " poderoso es una historia que como comienza no apuntas a la profundidad de lo que nos va a tocar"}, {"start": 114.8, "end": 137.64000000000001, "text": " K"}, {"start": 137.64, "end": 148.64, "text": " Y a la LL las obras de Teatro en Casetz, lo cual es un detalle de fina coqueter\u00eda de la pel\u00edcula de una sociedad tan supremamente avanzada,"}, {"start": 148.64, "end": 163.64, "text": " tecn\u00f3gicamente como la sociedad japonesa y transcurre entre ellos el tiempo y se van reconfigurando como quiera que eso puede hacer y \u00e9l suele manejar y disfruta mucho manejar"}, {"start": 163.64, "end": 182.64, "text": " que le permite la intimidad, el acceso a las voces que ella le graba permanentemente de la obra de Shekhov que le est\u00e1 siempre montando y pronto la muerte de ella lo pone en una situaci\u00f3n totalmente compleja"}, {"start": 182.64, "end": 210.64, "text": " y de todas maneras acepta ser director invitado en Hiroshima y all\u00e1 presentar la obra cuando llega Hiroshima hay una mujer que va a conducir para \u00e9l porque est\u00e1 prohibido que los directores y que los actores conduzcan su propio veh\u00edculo por un tema de seguridad porque han vivido situaciones muy dif\u00edciles y es una norma de la compa\u00f1\u00eda de Teatro"}, {"start": 210.64, "end": 226.64, "text": " hasta aqu\u00ed es una historia cotidiana con dolores profundos pero es una historia cotidiana, entonces \u00e9l va a poner en escena como director invitado va a ser todo el casting para crear la obra de Shekhov del Tio Vania"}, {"start": 226.64, "end": 243.64, "text": " y esta mujer muy introvertida casi invisible maneja para \u00e9l hasta ah\u00ed va planteada la historia digamos como parte de lo que es la vida pero esto va cogiendo un nivel de profundidad"}, {"start": 243.64, "end": 261.64, "text": " los personajes entran en unos b\u00f3rtices psicol\u00f3gicos afectivos tan potentes que la pel\u00edcula de pronto te lleva para la os desconocidos de inimaginables de lo profundo de los avismos de la naturaleza humana"}, {"start": 261.64, "end": 277.64, "text": " y ah\u00ed es donde se pone realmente poderosa la pel\u00edcula porque cada uno de los personajes incluida la chica que maneja y algunos de los actores con los que estar\u00e1 invogado por situaciones anteriores del 
pasado"}, {"start": 277.64, "end": 293.64, "text": " lo van a llevar a la profundidad del abismo del alma de ella porque en el final el secreto es el coraz\u00f3n indecifarable de ot\u00f3 su mujer y hay una frase que es absolutamente maravillosa que dice"}, {"start": 293.64, "end": 308.64, "text": " nunca podr\u00edamos llegar a conocer el coraz\u00f3n de la persona que amamos por mucho que lleguemos a marla jam\u00e1s llegaremos a conocer su coraz\u00f3n lo \u00fanico que podremos conocer es el nuestro propio"}, {"start": 308.64, "end": 320.64, "text": " es lo \u00fanico que nos he estado conocer nuestro propio coraz\u00f3n esa es la frase a veces ni siquiera eso pero en la en la frase dice lo \u00fanico que si nos he estado es conocer nuestro propio coraz\u00f3n"}, {"start": 320.64, "end": 337.64, "text": " y empieza a presentarse una vez muy sondable de secretos, de emociones, de de profundos perfiles psicol\u00f3gicos de los personajes con los que \u00e9l est\u00e1 haciendo la obra"}, {"start": 337.64, "end": 357.64, "text": " de la la b\u00fasqueda y el secreto del coraz\u00f3n de la mujer con la que comparti\u00f3 tantos a\u00f1os de la obra de Shekhov que en s\u00ed misma invita a todos estos abismos de la alma humana desde la profundidad de la pluma y del teatro ruso"}, {"start": 357.64, "end": 379.64, "text": " y desde la mirada de \u00e9l que reconociendo toda esta diversidad de profundidades se reconstruye desde la solidaridad con una nueva mirada del mundo gracias a su compa\u00f1\u00eda con la mujer que camina que la que me le maneja el cargo a \u00e9l"}, {"start": 379.64, "end": 397.64, "text": " hay un personaje sobre el mundo, ese personaje sobre el mundo que es parte del casting va a tener un toque, po\u00e9tico, profundo, subtil, poderoso dentro de la engarana de la obra y la pel\u00edcula"}, {"start": 397.64, "end": 418.64, "text": " porque estamos narrando la obra, estamos narrando la pel\u00edcula, estamos narrando la biograf\u00eda, estamos narrando su coraz\u00f3n, estamos manejando un carro y estamos mirando las profundidades de las almas de las personas que est\u00e1n involucradas en este esqu\u00edcito, este delicioso profundo y poderoso relato de Murakami"}, {"start": 418.64, "end": 434.64, "text": " entonces drive my car, maneja mi carro, resulta siendo una pel\u00edcula complej\u00edsima, es de tres horas al principio parece que se acabara y no se va a acabar, vais a empezar y de ah\u00ed en adelante uno sigue manejando"}, {"start": 434.64, "end": 459.64, "text": " y eso aporiendo muy tenaz y se aporiendo muy tenaz hasta que termine enfrent\u00e1ndolo a uno a los m\u00e1s grandes secretos insondables de la alma humana que nunca son ajenos a la alma propia, cuando se ve reflejado en estos puntos en donde se toma la naturaleza humana en su complejidad profunda"}, {"start": 459.64, "end": 488.64, "text": " es una pel\u00edcula interior, es una pel\u00edcula que nos muestra este Jap\u00f3n perfecto en toda su dimensi\u00f3n as\u00ed donde los dise\u00f1os son maravillosos y cinematogr\u00e1ficamente es perfecta los espacios, los dise\u00f1os, todo es de una est\u00e9tica minimalista perfectamente dise\u00f1ada como en estas pel\u00edculas nada parece casual, todo es realmente programado y viajado"}, {"start": 488.64, "end": 514.64, "text": " infotogr\u00e1ficamente es muy bella y es cotidiana y la cotidiana y va tomando una intensidad y la intensidad va tomando una profundidad y la profundidad va tomando una lectura del coraz\u00f3n humano y una pel\u00edcula 
totalmente diferente a aquella en la que te embarcaro es en la que vas a terminar metido t\u00fa y tu propio coraz\u00f3n"}, {"start": 514.64, "end": 521.64, "text": " hoy en mi alcool es decir, drive my car en un video una pel\u00edcula profunda que te mueve el alma"}]
Diana Uribe
https://www.youtube.com/watch?v=MZV9fQB0hnM
Feria de Manizales
#podcastdianauribe #dianauribefm #feriademanizales Esta vez el turno es para una de las festividades más tradicionales y reconocidas de nuestro país: La Feria de Manizales. Hablaremos de una ciudad que surgió sobre la cultura del café, del cable aéreo, del rescate de las tradiciones, del Nevado del Ruiz, de la relación con la Feria de Sevilla, de las carreras de carritos de balineras, de los arrieros, trovadores y del sentido de pertenencia a una ciudad incrustada en las faldas de la cordillera de los Andes. Notas del episodio El origen de Manizales, la capital del departamento de Caldas →https://www.banrepcultural.org/biblioteca-virtual/credencial-historia/numero-236/manizales-la-ciudad-homerica Las rutas de los arrieros en Caldas →https://destinocaldas.co/rutas_caldas/ruta-de-la-arrieria/ La historia del cable aéreo Manizales-Mariquita, una proeza de la ingeniería en medio de las montañas colombianas →https://www.radionacional.co/cultura/historia-colombiana/cable-aereo-manizales-mariquita-100-anos-de-historia Aquí les dejamos una reseña de la Feria de Abril de Sevilla, fuente de inspiración de la Feria de Manizales →https://www.visitasevilla.es/historia/la-feria-de-abril Señal Memoria nos cuenta los cambios que ha tenido la Feria de Manizales a lo largo del tiempo →https://www.senalmemoria.co/articulos/feria-de-manizales Y aquí un relato de una tradición propia del relieve de Manizales: los carritos de balineras →https://www.radionacional.co/noticias-colombia/carrera-de-carritos-de-balineras-en-la-feria-de-manizales Página oficial de la Feria de Manizales →https://feriademanizales.gov.co/ ¡Síguenos en nuestras Redes Sociales! Facebook: https://www.facebook.com/DianaUribe.fm/ Instagram: https://www.instagram.com/dianauribefm/?hl=es-la Twitter: https://twitter.com/dianauribefm?lang=es Pagina web: https
buenas siguiendo las tradiciones de las ferias y fiestas en las que estamos montados encontrando estas muchísimas formas en que la diversidad de todas las culturas de Colombia nos habitan en la gran celebración vamos a meter con una de las ferias más importantes de Colombia y probablemente el continente también porque tiene un alcance muy grande es la tremenda y la poderosa feria de Manizales para esto primero vamos a hablar un poco de la región las ferias en general en todas las ferias que hemos visto hay una reivindicación de las tres raíces que nos habita la raíz indígena la raíz afro y la raíz ispanica esta en particular va a reimindicar la raíz ispanica cada una de ellas nos define en uno de los componentes culturales que nos habitan esta va a ser una una manera de arraigarse en la identidad de lo ispanico lo mismo que del carnaval de negros y blancos se ha arraigado en la indígena y el carnaval de barrenquilla en lo afro esta se arraiga en lo ispanico las tres son vertientes históricas y culturales que nos habitan a nosotros entonces vamos a empezar por la región porque esta es una región particular nodal neurá lgica en toda la construcción de la modernidad de Colombia como país entonces es importante que vengamos de donde ha surgido todo esto esta es una ciudad reciente a diferencia de otras ciudades que tienen fundaciones coloniales y que vienen de tiempos muy anteriores esta ciudad aparece en el siglo XIX en el contexto de las colonizaciones que en ese momento se estaban dando en el país y es que tiene una historia muy distinta porque las poblaciones indígenas de las montañas y las vertientes de la cordillera central y occidental sufrieron el impacto de la invasión española de una manera tan dramática que ahí si hubo digamos como una tragedia demográfica enorme entre los siglos 16 y 16 17 prácticamente desaparecieron o sea la tragedia demográfica de la perió de la conquista se siente más en unas regiones que en otras aquí fue muy grave entonces esta zona quedó fuera de los poblamientos durante mucho tiempo eso permitió que toda la naturaleza los bosques y las selvas se fueran recuperando hasta convertirse en una zona boscosa denza profunda desde el punto de vista de la naturaleza entonces en el siglo XIX que es un siglo que está cargado de una manera de mirar la historia van a llegar una gran cantidad de poblaciones humanas para buscar un hogar en estas tierras que estaban totalmente pobladas por bosques entonces manizales es fundada en 1849 por colonos y por arrieros que se abrieron en el monte a su paso aquí hay una construcción de un imaginario y es crearse en un paso de ciudad y de población a través de desmontar una región eso es un imaginario muy importante para ellos porque es abrir se paso por entre lo más escarpado y espeso de la naturaleza dentro de toda la idea del siglo XIX de que el progreso era abrirse paso por el monte entonces la palabra manizales viene de unas piedras volcánicas que se llaman mani que de las cuales encuentran muchísimas en el sitio de manizales pues porque es una zona profundamente volcánica entonces de ahí viene la palabra de mani de estas piedras manizales y esta es una región que es muy importante porque es una región que nos da un espejo de toda la diversidad geográfica que nosotros somos capaces de tener porque tiene volcánes nevados después vamos a ver lo que eso significa en tanto identidad y en tanto sobresalto pero también eso nos ha traído historias muy bravas también entonces esta condición montañosa esta condición selbática y esta 
condición de nevados que al mismo tiempo son volcánes y volcánes que al mismo tiempo son nevados cosa que es difícil de entender desde otras geografías y desde otras desde otras regiones donde tales fenómenos no existen pero la geografía en Colombia tiene cosas realmente tremendas, esplendorosas y un esés en otro sitio donde yo he visto es en islandia eso que hay glaciares y volcánes de bajo pero no es tan común en la tierra y en cambio manizales está hay en el sitio donde estaban las elba donde escan los estando nevados entonces tiene como el tema de las capas térmicas como definición de las temperaturas en Colombia es particularmente importante en manizales porque ahí hay una cantidad de recursos geográficos de variantes geográficas que confluyen en esa ciudad única que está que va a ser fundada y va a estar y va a funcionar en la cuchilla de una montaña eso es una ciudad bastante difícil de imaginar para que no haya ido a manizales entonces dos décadas después de la fundación la ciudad de manizales va a tener el impulso más importante que determinó nuestro historia en el siglo XIX y en muy buena parte del siglo XX y que determina nuestra identidad ante el mundo en una gran medida el café y es que en ese momento esta parte de Colombia va a tener como toda la luz de la historia porque va a ser el eje cafetero y hoy sigue siendo una parte nodal de nuestra historia todo el país va a vivir en la década en 1870 el café como nuestro producto insigne de exportación ese café permitió que nosotros centraramos a la moderna porque eso ha sido de tales proporciones que nos dio un lugar en el mundo nos dio un lugar en las exportaciones nos dio un lugar en la identidad planetaria y nos da la sabrosura del café además de todas las ventadas históricas es que rico el café yo que soy particularmente tintera pues me me me regocijo con esta historia porque pues la vivo diariamente entonces manizales se vuelve la capital del eje cafetero y eso le da un lugar en la historia muy preponderante entonces eso le genera un crecimiento la cosa más impresionante pero un crecimiento verti y no so verdaderamente grande y eso hace que rápidamente vaya a tener una población mucho más grande que otras ciudades que tienen muchos años anteriores de fundación eso le va a dar un carácter en que irrumpe manizales en la historia a partir de todo el empuje y el impulso del café que le va a dar toda esta identidad Colombia en su construcción económica de independencia va a buscar en el café la base económica y va a ser uno de los países exportadores más grandes y deba crear uno de los mercados más grandes nosotros tenemos representación en Londres bueno eso ha sido toda una construcción de país a partir del café y digamos la agricultura nos ha generado muchos otros otros productos pero el café siempre ha sido la bancuardia de la modernidad y de la entrada de Colombia al siglo XX y de la entrada de Colombia como a los mercados del mundo al escenario internacional y a la identidad que todo eso va a generar nosotros los colombianos socializamos a través del café o sea eso es a través de lo cual nosotros nos sentamos y no seamos amigos tomá menos un tinto si es la manera tinto llamamos nosotros para los que nos escuchan de otras latitudes a un café que uno se toma que se llama tinto y es la base de toda la socialización en Colombia porque uno empieza a portomarse un tinto para cualquier forma de relacionarse labora la efectiva amistosa parrande todo empieza por un tinto o cuando uno llega a una casa y no le ofrecieron ni un tinto 
aquello fue la miseria absoluta ni un tinto me ofrecieron o sea hay que entender la importancia del café en nuestra cultura para entender la importancia de manizales en nuestro relato entonces somos un país totalmente permeado por el café desde las viejas historias de todo ese café que en Europa va a llegar cuando los otomanos fueron derrotados y dejar una cantidad de sacos de café el que había tenido su origen en etiopía que ha atravesado el imperio tomano que cuando los otomanos sufren esa derrota a manos del imperio osra ungo de los bieneses hasta el punto donde ya no pueden avanzar más en esas colinas a la salida de biena dejan una cantidad de saco de café esa cantidad de saco de café van a ser tomados por los bieneses van a ser colados se van a volver más suaves y van a empezar a crear los cafés bieneses y ese café va a llegar también por la vía de los cafés pariscinos que es donde se va a forgar una buena parte de la ilustración y la revolución francesa y van a llegar después a nuestra tierra por la vía de Santander por los Santanderes y hoy acá que va a llegar por primera vez y va a constituir la identidad histórica de manizales en la men que ellos van a ser el corazón del relato de la historia más importante que nosotros tenemos que la historia de nuestra relación con el café entonces ahí ya nos vamos situando en donde estamos entonces después de que el café llegó por los santanderes y por Kundina marca va a empezar a distribuirse por todo el país pero su epicentro va a ser manizales y toda esta región lo que después se nían los departamentos del quintillo de Rizaraldas de Caldas tanto el viejo Caldas que los abarcaba a todos como después la reciente división de cada uno de estos departamentos en el quintillo en Rizaraldas y en Caldas va a generar una serie de capitales pues que van a ser Pereira que van a ser Armenia y manizales entonces en lo que tiene que ver con todo el epicentro del café va a ser manizales la capital de ejecafetero entonces eso le va a dar a la región un empuje económico impresionante le va a dar prosperidad desarrollo de pequeñas industrias una cantidad de pequeños negocios y después vamos a tener una circunstancia histórica que es que en 1920 con la separación de Panamá los Estados Unidos va a dar una indemnización a Colombia por la pérdida de Panamá para nosotros esto es la pérdida de Panamá para Panamá y su surgimiento como Estado nacional y es su independencia y para los Estados Unidos es una enorme movida de destino manifiesto para unir los dos oceanos en un proyecto de expansión imperial enorme el asunto es que por eso nos van a dar un vietel argo y ese vietel argo es la indemnización de la pérdida de Panamá ese vietel se va a invertir en café y ese café se hacen manizales y eso le va a dar a manizales un apoyo y un empuje económico urbanístico cultural va a ser como la guián beneficiaria de la indemnización de la separación de Panamá entonces esta ciudad brota así brota inclusive ellos lo dicen con mucho orgullo que ellos brotan como brota de la tierra nosotros somos muy fértiles en este país tenemos regiones de una fertilidad increíble entonces así como brota lo que se cae al piso porque aquí hay cosas que se caen y brotan y de ahí salen árboles y de ahí salen matas se pasa con el algodón eso pasa con muchísimos de nuestros productos que sólo conestar en el piso brotan manizales brota brota como que irrumpe como que florece en la mitad de estas afortunadísimas coyunturas históricas para la formación de la ciudad entonces bueno pero aquí tenemos un 
problema hay que transportar el café y nosotros tenemos unas montañas las más tremendas porque este sistema de tres cordilleras que se levantan a unas alturas tan grandes hace que el transporte entre ellas siempre haya sido complicado y la vuelta de encarreteras es es es es carpada es torposa y hacer un ferrocalíde en ese momento ahí estaba muy difícil porque era una tarea titánica y necesitamos llegar al Magdalena porque el Magdalena es el río que realmente nos va a articular como país es el alma de este pueblo nuestro aquí llegar al Magdalena como vamos a llegar al Magdalena por entre estas montañas entonces va a surgir una idea buenísima y es la de hacer un cable un cable aéreo que comunique a manizales con mariquita en el tolima porque de ahí salga para el Magdalena entonces está la manera de transportar el café que en la medida en que crece están necesitando muchísimo más soluciones de transporte que no se podía andar de otra manera sin que fueran increíblemente tuertuosas y prolongadas entonces hay que atravesar ríos cordilleras que imaginarse porque este es un punto en que nuestro geografía es increíblemente diversa descomunal como se va a pasar por ahí entonces se hace el cable un cable que atraviese todo el cable más largo del mundo el cable aéreo más impresionante esto es una obra enorme empieza en 1912 con el apoyo técnico y económico de una firma inglesa pero adivinen que el viejo truco la primera guerra mundial entonces todo lo que viene de Europa con transporte material o tecnología de Europa queda en el colapso del suicidio de la razón que es la primera guerra mundial en donde los europeos experimentan un entro piano un cante es vista pues bueno esto es lo que venga ya va a quedar va a quedar en veremos porque pues ellos están en el en el tema de destruirse y echarse 20 millones de muertos por un conflicto de mensual y sin sentido que quedaría mal terminado y llevaría a otro que sería la segunda entonces mientras ellos están en la destrucción de la guerra nosotros nos quedamos así como muy llora que vamos a hacer con esto la obra se finalizan 1923 y entonces con las regalías que están entrando en ese momento de panamá ya se logró hacer con 20 estaciones 70 kilómetros de largo se convierte en el cable más largo del mundo o sea aquí imaginarse esto porque estamos por entre unas montañas que son particularmente abruptas escarpadas enormes en esa zona y esto va a conectar a esta región del país y al café con el resto del mundo se cabe va a hacer la salida al magdalena y la salida al magdalena en la salida al mundo porque por el magdalena nosotros nos vamos a poder comunicar con el mundo hay que entender la maravilla de nuestra geografía para poder dar lepiso a estos relatos y entender lo que significan este cable tiene 71.823 metros de longitud y tiene 375 torres de acero o sea agameso construyalo a excepción del ator de del herveo que está hecho en madera y algunas de las alturas están entre 4 y 55 metros distribuidas en 15 secciones o sea eso es una obra monumental entonces esto tenía vagonetas tiene vagonetas por dentro se impulsaba carga y esas vagonetas van con 8 motores de 140 caballos de fuerza todo recorrido tomado 10 horas y el hecho de que el recorrido tomara 10 horas nos va a dar un gran progreso sobre lo que significaba cargarlas en mula que eran 10 días y aquí con el cable aéreo pues va a ser en 10 horas eso hace una diferencia en la productividad en el crecimiento en el desarrollo y en la pujanza pues del cielo a la tierra muy grande entonces con este cable nos 
vamos a comunicar con el mundo desde Manizales aquí empieza el espacio comercial la feria de Manizales una de las celebraciones más tradicionales dentro de nuestro país y un referente para todos los que quieren venir a vivir y a sentir la la agría de Colombia los invito a que escuchen el origen la trayectoria los protagonistas de la feria de Manizales en mi podcast las historias de Diana Uribe a través de radio nacional de Colombia y luego escuchalo cuando quieras en rtbcplay.com bueno ya puestas todas las condiciones para que haya una ciudad que haya emergido de esta manera que se convierta en un epicentro tan importante como la historia cafetera de los colombianos ahora es tiempo de una feria ya lo vimos lo vimos en pasto lo vimos en barranquilla vimos como cuando se crea la feria la ciudad entra en una como en una armonía diferente porque irrumpe en su cultura aquello que va a determinar la vida y el amor de sus habitantes que son las ferias las ciudades que tienen ferias pues son ciudades felices porque la gente se va a pasar el resto de la vida todo el año preparando esta feria y eso como hemos visto es una de las ocupaciones más sabrosas y maravillosas de la vida que es tener una feria en casa bueno entonces vamos a hacer esta feria como como les digo al principio esta feria va a reivindicar un origen espánico al igual que barranquilla en un origen afro o pasto un origen indígena un origen profundamente afro cada una de nuestras raíces se va a ver representada en mayor o menor grado en cada una de estas ferias la mezcla de todas ellas nos define y la vertiente de cada una de ellas nos habita no se hazómos todo eso también y es parte de la diversidad impresionante que somos los colombianos entonces un momento en que querían reactivar la economía porque había habido unos declives por cuenta de grandes incendios que se también nos pasó en cali cuando uosemejante está llido tan impresionante que se acabó el centro de cali aquí un incendios incendios poderosos vueltere motos porque esta es una zona geolóicamente inestable aquí pasan terremotos muy terribles pasan cosas graves también entonces había habido un declive económico y se no reactivemos todo esto con una feria y vamos a hacer una cosa bien bonita por los 100 años de la fundación de manizales entonces se preparan celebraciones y fiestas pero a nosotros nos atraviesan unas cosas tan terribles de hombre y la gracias que salimos adelante por encima de ellas cuando ya estaba todo listo todo bonito y todo chevere a tan a Jorge Lisser gaitán en 1948 en Bogota y Estalla o se generaliza un fenómeno que nosotros vamos a conocer como la violencia con Vema yúscula Nagerra Civil que nos desangruó en la vida en la sangre y en la memoria y que todavía nos persigue en todos los fantasmas de lo que fue a haber vivido eso y que en toda la feria que tiene que ver con ese periodo de nos atraviesa de una u otra manera todas se vieron atravesadas por el momento de la violencia entonces esto no se puede hacer como lo pensábamos hacer ese día ni nada entonces se sigue pensando en cómo celebrar esto en cómo hacer una feria para la ciudad a pesar de todo lo que está pasando porque a nosotros no nos detiene nada o sea para gozar y para hacer ferias y para hacer fiestas lo hemos reiterado a lo largo de estos relatos no nos detiene nada aquí se rumbé a pase lo que pase y esa es espíritu de celebración y de gozadera también un espíritu de resiliencia y nos hace poderosos en la celebración es uno de nuestros grandes poderes como sociedad y como pueblo el poder 
de la celebración es cómo nos vamos a inventar esto lo hay una persona que vive enamorado de Sevilla y se iba para las ferias de Sevilla en abril que son las ferias que se hacen para conmemorar la llegada de la primera dicen de Sevilla que quien no ha visto Sevilla no conoce maravilla o sea para las fiestas de Sevilla y o así es como las fiestas de Sevilla las vamos a hacer en manizales entonces trae parte de las principales características de la celebración de Sevilla las carretas del rocillo la manzanilla las manulas las casetas y esto se va a dar alrededor de un fenómeno que va a tener una importancia cultural muy grande en Colombia durante un periodo de su historia las corridas de toros de 10 años antes de la inauguración de la feria y a manizales tenía corrida de toros y esto también va a atravesar nuestras celebraciones bastante pues la vimos en la feria de cali vimos que las corridas de toros van a tener hoy para tener una importancia muy grande hoy por hoy esto se ve con unos ojos muy distintos pero en su momento eso era una de las formas de entrar en una celebración de la de lo que en ese momento se consierá la modernidad las cosas van cambiando de categorías y de maneras de verse hoy por hoy eso se ve como bastante más cercano a la barbaría que a la modernidad pero en esa época eso se veía como una modernidad se empiezan a hacer plazas de toros en toda Colombia y la celebración del torneo va a llegar a crear toda una cultura entre nosotros es parte de lo que nos ha transitado en estas en estas búsquedas de identidades y de y de construcciones aquí el tema de los toros va a ser muy importante en 1955 oficialmente se hizo la primera feria de manizales y desde ahí la han celebrado 65 veces o sea esta feria es muy tradicional en la media que ha tenido una continuidad de celebración muy importante y que genera un espíritu de pertenencia y de identidad en la ciudad muy poderos en cada uno de estos años y es el momento en que todas las ferias y fiestas están en ese momento en Colombia o sea nosotros tenemos dos puntos importantísimos y es entre diciembre y enero que todo el país está de fiestas y carnavales y el otro es ahorita en un julio que vienen también otra otra etapa de carnavales y ahí todo el país se enrumba entonces la feria de manizales forma parte de esta rumba junto con la feria de cal y junto con todo y después de que tú sales de la feria de manizales y de la feria de cal y esto va a empezar el carnaval de negros y blancos así que el que se quieren rumbarse pueden rumbar desde las cuadrillas de san martín en noviembre pasarse todo y ciembre en ferias desde las velitas que empieza la navidad y rematar con negros y blancos y si además son los días de río sucio que es cada dos años pues mira la enrubada que te puedes pegar es absolutamente magnífica porque se nos juntan todas al tiempo y se nos ponen de la más emocionantes porque se hace esto en manizales en esa fecha porque manizales es una ciudad donde llueve mucho en la región del eje para que afeteró llueve mucho su produce parte de la fertilidad y del florecimiento entonces es la semana más seca del año en la ciudad a cuéles que nosotros tenemos un zona donde llueve mucho en el chocó cuando no llueve dos días lo llaman veranillo y aquí es la semana más seca del año por eso es la que semana que se hace la feria entonces ya con el éxito de la feria vamos a musicalizar la feria la banda sonora de la feria de manizales es el paso doble y eso ha sido el éxito desde el comienzo de la feria porque un reconocido poetacal dense 
Guillermo González ospina quiso escribir unos versos en honor a la ciudad de manizales y le llevó la letra a Oscar oyos botero que fue el fundador de la feria y a él le padeció mucho ver la idea entonces el maestro González tenía como la idea de que fuera un bombuco pero oyos botero que viena murado España y de las corridas le hicieron que ser un paso doble entonces porque es el ritmo tradicional del sur de España entonces por eso bailé pide a un director de orquestado al anciano que es José Mariancis que convierta la letra del maestro González en un paso doble cosa que las hace y al hacerlo va a crear algo que es realmente el digno extraoficial pero el digno de manizales el paso doble de manizales del alma eso se bailen las fiestas es donde los sabemos todos se ha convertido en una canción que nos identifica culturalmente en muchísimos lugares la feria de manizales es una cosa que todo el mundo se muchísima gente se sabe y nos ha tocado en todas las regiones pasodobles y la gente sabe mucha gente sabe bailar pasodobles es parte de lo que de lo que nos ha habitado es exacto esa parte hispanica del paso doble la herencia de las túanas que recorre absolutamente todo el continente las túanas como estos cantos de viguela de bandola donde realmente se viste como el siglo 16 de eso hay en todo el continente y es esa parte digamos musical que se nutre de las tradiciones españolas y que la lleva en la en la sonoridad y en la sonoridad de las músicas de cuerda y todo eso aquí de esa digamos de esa misma beta de donde vienen las túanas de donde viene toda la musicalidad española viene el paso doble y va a tener su mayor representación en la feria de manizales y se va a convertir el elimno de la ciudad o sea no es el límno oficial pero es el libro de manizales todo el mundo va a conocer estas letras y va a conocer esta música en colombian sus partes de los de nosotros vemos unas narrativas colectivas que van desde el ballenato hasta el paso doble pasando por la cumbia pasando por el porro porque todas las tradiciones de fiestas que les cuento van a entrar en un momento a mezclarse para formar este conjunto de diversidad cultural que somos los colombianos parte de esa narrativa musical es el paso doble y de la feria de manizales toda la feria como tal se va a realizar en 1957 la primera versión del rey nado internacional del café que también le va a dar a la feria mucha importancia y se coro una la primera reina del café que va a ser una panameña anhelida al faru y ahí en adelante las reinas que llegaron a tener tanta importancia que tenían vuelos directos a manizales en los reynados van a ser muy importantes en las ferias eso lo hemos visto son parte digamos de momentos cumbres de la celebración de la feria son los reynados lo vimos en la reina del carnaval de barranquilla aquí la reina del café tiene una importancia muy grande sobre todo desde cuando luz varina su lo haga fue nombrada mis universo en 1958 y es la primera mis universo que hemos tenido en la historia y ella va a ser la que va a promocionar la feria siempre va a decir lo que los esperen la feria en el rey nado mundial del café en el festival folklorico internacional entonces eso le va a dar a la feria de manizales también toda una forma de revestirse de una gala especial que dan las ferias pero la feria de manizales se ha venido entroncando con una gran cantidad de tradiciones de la ciudad de la región que la han venido diversificando ella es una feria digamos de una vertiente española de Sevilla pero queda en este territorio y todas 
las cosas que habitan este territorio van a empezar a formar parte de la feria entonces la figura de la riero como este relato fundacional de quienes entró abriendo monte y de quien recorre esas montañas al homo de mula atravesando lugares inóspitos y va llevando las colonizaciones eso va a ser una tradición popular que está incorporada a la feria y unes manizales ve una estatua muy importante a la figura del arriero entonces eso va a ser que haya una feria de la riería es donde se honra al arriero con desfiles tradicionales contra gestípicos o sea con el poncho con el carril que le cabe un clóseta dentro es pues eso no tiene fondo y con el machete y la mula eso es un digamos como un relato fundacional de toda la historia de la colonización antioqueña y de lo que va a ser el eje cafetero y también de lo que va a ser manizales también hay otro elemento muy importante de la cultura de esta cultura digamos que nosotros llamamos paisa que son todos estos departamentos que tienen que ver con el café es todo ade de antioquía quien dio caldas risar alta son los trovadores riding en el mejor vividor. En armella está panaca, el ruiza y amanizales, enfera y hasta la rumba y en santa Rosa, termales, nuestra región cafetera, el lado de más crecimiento, el progreso se refleja en sus tres departamentos. La trova a nosotros nos atraviesa por todos los extremos, porque nosotros somos un pueblo de mucha tradicional. Y los trovadores son poetas que cantan a la montaña y al trabajo y provisan coplas de una manera increíble. Eso pasa en todas nuestras tradiciones, pero aquí son muy importantes. Y cada vez tiene las mujeres más participación en este mundo coplero que antes era solamente de hombres y que cada vez cuenta más con la participación de las mujeres. Hay otra cosa que me parece ataque. Uno tiene que imaginarse cómo son las calles en manizales. Las calles en manizales se descoelgan, se ruedan. Cuando usted le dice que suba, es que sube, o sea, es que es una trepada, increíble. Y cuando hay que bajar por ahí, es que es que usted se me ha dicho que se rueda ahí. Esto, digamos, apata ese escarpado, en carro es difícil, en bicicleta es suicida, pero esto lo vamos a hacer en carritos de valinera. Yo quiero que ustedes se imaginen lo que es bajar por una falda de una manera enloquecida en un carrito de valineras. Curiosamente no se ha matado la gente, o sea, esto no se me ocurre nada más vertiginoso ni un deporte extremo más arriesgado. Sin embargo, hay concursos de carrito de valinera y se echan muntia bajo por las faldas de manizales. Y aquello es de vertigo, es de vertiginoso, es vertiginoso de ver, eso sí es un deporte extremo, o no ponerse a hacer concursos de carritos de valineras, pues las calles de manizales es una cosa loca. Y hay concursos y estos concursos se han hecho la gente hace los carritos en casa, los impulsar al principio, apátano, pealiendo. Y luego, por entre esas calles así en pina, es que esto es verdad, es chévere que ustedes no solamente vean el mapa, que no solamente que vayan a manizales sino que traten de ver esto para que uno se imagine como es quien se rueda en un carrito de valineras, con una velocidad absolutamente increíble y hacen concursos de esto en la feria. 
Entonces, vienes que cada una de las tradiciones se va uniendo a la feria, los carritos de valineras, la feria de la riería, este es una ciudad muy importante en el teatro, lo contamos en el libro americano, esta es una ciudad de teatro y es una ciudad universitaria, es una ciudad estudiantil, lo que le da una permanente transformación en la mía que van llegando estudiantes de todo el país a manizales y van participando de estas tradiciones, es una ciudad donde se encuentran muchas vertientes de la cultura para esta feria, he hecho la mente en tres ocasiones, ha sido suspendida a la feria, una fue en 1980 porque un terremoto les cuento de esta zona geológicamente inestable y en 1979 un terremoto grande, esos los terremotos nos van a travesar a la feria después, les cuento como Vuel de Popallán, hemos tenido terremoto realmente muy graves porque la cordillera de los andes es la más joven de las cordilleras planetarias, esto está apenas esta información aún con respecto a otras cordilleras del planeta, entonces hay terremotos y en ese es de callar un edificio y murieron cuatro personas y uó temporada taurina, pero apenas digamos el redujo a su mínima expresión y por eso se llamó la miniféria pero algo se pudo hacer, la tragedia más grande que hemos tenido nosotros como pueblo y como país en la memoria ocurre en 1985 cuando el volcán nevado del Ruiz, vecino vecino de Manizales entra en errupción, eso lo vine a entender yo en Islandia cuando le explican a uno que lo que significa un volcán que arriba está taponado o por un claciar como es el caso de los islandeses o por un nevado como es el caso de nosotros, eso significa que cuando toda la lava del volcán va a salir con la fuerza de la tierra en lugar de esparcirse hacia el cielo se tapona porque arriba hay un nevado entonces lo que hace es que descongela el agua del nevado que en este caso como es una formación montañosa también se llena de barro eso generó un lodo infinito que se pultó un pueblo de la manera más dramática que produjo 23 mil muertos y que golpeó varias poblaciones en caldas y en el tolima mumiles de desaparecidos, de damnificados y con inmencísimas pérdidas materiales eso fue la tragedia más grande que nosotros tenemos memoria como país o sea no no hemos vivido nada tan aterraor como la tragedia de armero y el pues se está manizales está muy cerca de eso porque está muy cerca de una de las de las maravillas más grandes que tenemos en nuestro país pero que a la hora de una tragedia esa es muy terrible es el parque de los nevados ahí al ladito está el parque de los nevados lo que hace que manizales estén entre varios climas incluye a la nieve que estáis cerca cuando vino la tragedia de armero nos trataban de ayudar nos mandaban mucha ropa y montan tuvo que aclarar de a los franceses que no mandaran ropa de invierno que las poblaciones damnificadas eran poblaciones de traintagrados como lérida tratando de explicar la imagen el problema de los pisos térmicos que no lo pueden entender los pueblos de estación de todo nos mandaban chiquetas de invierno porque hablaban del nevado del riz se no ater o hay nieve o todo es caliente pues las dos cosas hay nieve y todo es caliente porque aquí hay pisos térmicos entonces la nieve cayó o sea el barro cayó sobre poblaciones inmensamente cálidas pero viene de una montaña nevada que tiene arriba nieve y abajo al can como hago para explicarte pero aquí todo es tierra caliente si me entiendo entonces estamos en ropa ropa de calor ropa de verano ni siquiera podíamos explicar geográficamente lo 
que nos pasó siquiera lo podíamos entender nosotros mismos yo lo vine a entender en islandia cuando me explicaron la coalición entre la lava y el hielo si y eso fue lo que pasó ella y no se pudo hacer la feria de manizales porque la tragedia fue total y nos la tragedia fue tan terrible tan terrible que en 1986 no pudo haber feria porque estábamos totalmente deluto perplejos ante el tamaño de la tragedia que acabábamos de vivir y en la memoria esto todavía genera un profundo dolor recordar lo que pasó en armero y tratar de explicarnos lo que pasó en ese momento la tercera vez que no se pueda hacer feria pero no puede hacer feria en ninguna parte pues es por la por la pandemia eso toda nuestras historias de ferias ahora que las estamos narrando pues van a traestar atravesadas por la pandemia que en la medida que se prohíbe el contacto entre las personas pues se prohíbe la feria porque mire usted como hace para helar el paso de la con covid pero no se puede si porque palpas oble toca ponerse de acuerdo y abrazarse o sea el coïdes una barbaridad porque todas las ferias se trata de encontrar se de abrazarse de pintarse de bailar paso doble de todo de toda la alegría y el contacto que se trata de una feria y una celebración eso no puede hacer durante la pandemia entonces ahí tampoco se pudo hacer entonces van a desarrollar una identidad con esta feria que es particularmente poderosa la consideran más suya que la misma catedral de la ciudad y empiezan a prepararse como toda la gente que vive en el afortunado de escenario de los lugares donde hay ferias y fiestas empiezan a prepararse desde diciembre y una manera de establecer los regalos de navidad es que a la gente se le regala plata para que gaste en la feria eso es una manera de de prepararte para la feria y entonces la gente se va poniendo de acuerdo y se va poniendo en el plan de toda esta devosión allá la cultura y al hecho de ser ser esta feria y hoy por hoy la feria de manizales incorpora una cantidad de tradiciones entonces tiene los desfiles de las carrosas de rocillo tiene el reinado el café tiene las exposiciones tiene mercados persas que son mercados con una gran cantidad de objetos diversos tiene ciertámenes deportivos tiene conciertos grandes conciertos o sea conciertos tremendos que son lo hemos visto en muchas pues desde lo desde calli barranquilla aquí también hay conciertos muy importantes hay otro elemento bueno la trova que ya la contamos hay otro elemento que es muy importante en la tradición cafetera el tango el hecho de que gardel haya muerto en Medellín y el hecho de que esta gente tenga una tradición tanguera o sea el tango es una parte estructural de la cultura de la eje cafetera esta gente es tanguera pero tanguera profundamente entonces el tango también está incorporado a la tradición de la feria también es una es una cosa de la que la gente en la región sabe y su alma habita el tango lo baila lo conoce mucho también en la feria y tango hay tradición cafetera pues de eso se trata la esencia de lo que estamos hablando y también hay eventos y también son frecuentes los boulevares y los paseos a los sitios representativos de la ciudad para rescatar patrimonio y valores culturales porque es que una de las cosas más importantes de todas las ferias y fiestas que nosotros narramos es el rescate de las tradiciones y de los valores culturales nosotros estamos hechos de todas estas tradiciones estamos hechos de todas estas figuras porque eso es lo que nos nutre y nos hace mirarnos a nosotros mismos con un sentido de valía de 
orgullo de pertenencia y identidad la feria de manizales genera en la ciudad esto un sentido de orgullo de pertenencia de identidad de de encontrarse en una serie de valores que comparten que bailan que danzan y que se enseñorean como yo mismo lo siente cada vez que hay una feria de manizales entonces desde los tiempos del paso doble desde los arrieros levantándose a través de la montaña abriéndose camino desde la llegada del progreso desde el café desde el cable desde las manolas desde las carrosas de rocillo desde la ciudad universitaria desde la ciudad teatro desde la ciudad en Galanada desde los carros de valineras a toda por las montañas desde este paisaje único y particular de estas ciudades embaradas en el centro de la construcción del imaginario del café en Colombia y que se enseñorea y se orgullese cada vez que se presenta en una feria que ellos adoran con el fondo del alma en la narración de Anauríbe y para ustedes feliz domingo este podcast fue posible gracias al equipo de la Casa de la historia de Ana Suárez, Milena Beltrán, Arturo Jiménez Finha, Daniel Moreno Franco, grabado en los gatos estudio la edición y la musicalización de Eduardo Corredor Fonseca de Rueda sonido y contamos con Daniel Shratz que está con nosotros acompañándonos de aquella adelante en ferias y fiestas y que lo introducimos con mucha alegría. En este relato tuvimos toda la colaboración, la ayuda, el cariño en la narración de Camilo Naranjo, Diana Ramírez, Paula Giraldu y todos los manizaleños y manizaleñas, ori que organizadores que participan con tanto orgullo y amor en esta feria de manizales del alma y siempre con la ayuda fuerte y poderosa de Santiago Espinoza Uribe y Laura Rojas Aponte del podcast Cozat Internet.
[{"start": 0.0, "end": 6.5, "text": " buenas siguiendo las tradiciones de las ferias y fiestas en las que estamos"}, {"start": 6.5, "end": 12.68, "text": " montados encontrando estas much\u00edsimas formas en que la diversidad de todas las"}, {"start": 12.68, "end": 17.82, "text": " culturas de Colombia nos habitan en la gran celebraci\u00f3n vamos a meter con"}, {"start": 17.82, "end": 21.580000000000002, "text": " una de las ferias m\u00e1s importantes de Colombia y probablemente el continente"}, {"start": 21.580000000000002, "end": 26.76, "text": " tambi\u00e9n porque tiene un alcance muy grande es la tremenda y la poderosa"}, {"start": 26.76, "end": 29.68, "text": " feria de Manizales"}, {"start": 57.739999999999995, "end": 78.58, "text": " para esto primero vamos a hablar un poco de la regi\u00f3n las ferias en general en"}, {"start": 78.58, "end": 83.8, "text": " todas las ferias que hemos visto hay una reivindicaci\u00f3n de las tres ra\u00edces"}, {"start": 83.8, "end": 90.03999999999999, "text": " que nos habita la ra\u00edz ind\u00edgena la ra\u00edz afro y la ra\u00edz ispanica esta en"}, {"start": 90.03999999999999, "end": 95.84, "text": " particular va a reimindicar la ra\u00edz ispanica cada una de ellas nos define en"}, {"start": 95.84, "end": 102.56, "text": " uno de los componentes culturales que nos habitan esta va a ser una una"}, {"start": 102.56, "end": 108.2, "text": " manera de arraigarse en la identidad de lo ispanico lo mismo que del"}, {"start": 108.2, "end": 112.24, "text": " carnaval de negros y blancos se ha arraigado en la ind\u00edgena y el carnaval de"}, {"start": 112.24, "end": 119.19999999999999, "text": " barrenquilla en lo afro esta se arraiga en lo ispanico las tres son vertientes"}, {"start": 119.19999999999999, "end": 123.64, "text": " hist\u00f3ricas y culturales que nos habitan a nosotros entonces vamos a empezar"}, {"start": 123.64, "end": 130.79999999999998, "text": " por la regi\u00f3n porque esta es una regi\u00f3n particular nodal neur\u00e1 lgica en"}, {"start": 130.79999999999998, "end": 137.32, "text": " toda la construcci\u00f3n de la modernidad de Colombia como pa\u00eds entonces es"}, {"start": 137.32, "end": 144.07999999999998, "text": " importante que vengamos de donde ha surgido todo esto esta es una ciudad reciente"}, {"start": 144.07999999999998, "end": 151.28, "text": " a diferencia de otras ciudades que tienen fundaciones coloniales y que vienen"}, {"start": 151.28, "end": 157.64, "text": " de tiempos muy anteriores esta ciudad aparece en el siglo XIX en el contexto"}, {"start": 157.64, "end": 162.35999999999999, "text": " de las colonizaciones que en ese momento se estaban dando en el pa\u00eds y es que"}, {"start": 162.35999999999999, "end": 166.07999999999998, "text": " tiene una historia muy distinta porque las poblaciones ind\u00edgenas de las"}, {"start": 166.08, "end": 171.48000000000002, "text": " monta\u00f1as y las vertientes de la cordillera central y occidental sufrieron el"}, {"start": 171.48000000000002, "end": 177.92000000000002, "text": " impacto de la invasi\u00f3n espa\u00f1ola de una manera tan dram\u00e1tica que ah\u00ed si"}, {"start": 177.92000000000002, "end": 183.84, "text": " hubo digamos como una tragedia demogr\u00e1fica enorme entre los siglos 16 y 16 17"}, {"start": 183.84, "end": 189.60000000000002, "text": " pr\u00e1cticamente desaparecieron o sea la tragedia demogr\u00e1fica de la peri\u00f3 de la"}, {"start": 189.60000000000002, "end": 194.28, "text": " conquista se siente m\u00e1s en unas regiones 
que en otras aqu\u00ed fue muy grave entonces"}, {"start": 194.28, "end": 200.48, "text": " esta zona qued\u00f3 fuera de los poblamientos durante mucho tiempo eso permiti\u00f3"}, {"start": 200.48, "end": 207.96, "text": " que toda la naturaleza los bosques y las selvas se fueran recuperando hasta"}, {"start": 207.96, "end": 214.84, "text": " convertirse en una zona boscosa denza profunda desde el punto de vista de la"}, {"start": 214.84, "end": 220.88, "text": " naturaleza entonces en el siglo XIX que es un siglo que est\u00e1 cargado de una"}, {"start": 220.88, "end": 227.16, "text": " manera de mirar la historia van a llegar una gran cantidad de poblaciones"}, {"start": 227.16, "end": 233.35999999999999, "text": " humanas para buscar un hogar en estas tierras que estaban totalmente pobladas"}, {"start": 233.35999999999999, "end": 242.68, "text": " por bosques entonces manizales es fundada en 1849 por colonos y por arrieros que"}, {"start": 242.68, "end": 248.24, "text": " se abrieron en el monte a su paso aqu\u00ed hay una construcci\u00f3n de un imaginario y es"}, {"start": 248.24, "end": 256.16, "text": " crearse en un paso de ciudad y de poblaci\u00f3n a trav\u00e9s de desmontar una"}, {"start": 256.16, "end": 261.48, "text": " regi\u00f3n eso es un imaginario muy importante para ellos porque es abrir"}, {"start": 261.48, "end": 267.0, "text": " se paso por entre lo m\u00e1s escarpado y espeso de la naturaleza dentro de toda la"}, {"start": 267.0, "end": 273.92, "text": " idea del siglo XIX de que el progreso era abrirse paso por el monte entonces la"}, {"start": 273.92, "end": 280.0, "text": " palabra manizales viene de unas piedras volc\u00e1nicas que se llaman mani que de las"}, {"start": 280.0, "end": 283.84000000000003, "text": " cuales encuentran much\u00edsimas en el sitio de manizales pues porque es una zona"}, {"start": 283.84000000000003, "end": 288.44, "text": " profundamente volc\u00e1nica entonces de ah\u00ed viene la palabra de mani de estas"}, {"start": 288.44, "end": 294.28000000000003, "text": " piedras manizales y esta es una regi\u00f3n que es muy importante porque es una"}, {"start": 294.28000000000003, "end": 300.08000000000004, "text": " regi\u00f3n que nos da un espejo de toda la diversidad geogr\u00e1fica que nosotros"}, {"start": 300.08, "end": 306.8, "text": " somos capaces de tener porque tiene volc\u00e1nes nevados despu\u00e9s vamos a ver lo que"}, {"start": 306.8, "end": 312.15999999999997, "text": " eso significa en tanto identidad y en tanto sobresalto pero tambi\u00e9n eso"}, {"start": 312.15999999999997, "end": 317.68, "text": " nos ha tra\u00eddo historias muy bravas tambi\u00e9n entonces esta condici\u00f3n"}, {"start": 317.68, "end": 323.96, "text": " monta\u00f1osa esta condici\u00f3n selb\u00e1tica y esta condici\u00f3n de nevados que al mismo"}, {"start": 323.96, "end": 328.12, "text": " tiempo son volc\u00e1nes y volc\u00e1nes que al mismo tiempo son nevados cosa que es"}, {"start": 328.12, "end": 334.72, "text": " dif\u00edcil de entender desde otras geograf\u00edas y desde otras desde otras"}, {"start": 334.72, "end": 340.24, "text": " regiones donde tales fen\u00f3menos no existen pero la geograf\u00eda en Colombia tiene"}, {"start": 340.24, "end": 346.28000000000003, "text": " cosas realmente tremendas, esplendorosas y un es\u00e9s en otro sitio donde yo"}, {"start": 346.28000000000003, "end": 351.76, "text": " he visto es en islandia eso que hay glaciares y volc\u00e1nes de bajo pero no es tan"}, {"start": 351.76, "end": 357.04, "text": 
" com\u00fan en la tierra y en cambio manizales est\u00e1 hay en el sitio donde estaban las"}, {"start": 357.04, "end": 363.08000000000004, "text": " elba donde escan los estando nevados entonces tiene como el tema de las"}, {"start": 363.08000000000004, "end": 368.44, "text": " capas t\u00e9rmicas como definici\u00f3n de las temperaturas en Colombia es"}, {"start": 368.44, "end": 373.72, "text": " particularmente importante en manizales porque ah\u00ed hay una cantidad de"}, {"start": 373.72, "end": 379.6, "text": " recursos geogr\u00e1ficos de variantes geogr\u00e1ficas que confluyen en esa"}, {"start": 379.6, "end": 384.76, "text": " ciudad \u00fanica que est\u00e1 que va a ser fundada y va a estar y va a funcionar en la"}, {"start": 384.76, "end": 390.92, "text": " cuchilla de una monta\u00f1a eso es una ciudad bastante dif\u00edcil de imaginar para"}, {"start": 390.92, "end": 397.48, "text": " que no haya ido a manizales entonces dos d\u00e9cadas despu\u00e9s de la fundaci\u00f3n la"}, {"start": 397.48, "end": 404.03999999999996, "text": " ciudad de manizales va a tener el impulso m\u00e1s importante que determin\u00f3 nuestro"}, {"start": 404.03999999999996, "end": 410.4, "text": " historia en el siglo XIX y en muy buena parte del siglo XX y que determina"}, {"start": 410.4, "end": 417.32, "text": " nuestra identidad ante el mundo en una gran medida el caf\u00e9 y es que en ese"}, {"start": 417.32, "end": 424.23999999999995, "text": " momento esta parte de Colombia va a tener como toda la luz de la historia"}, {"start": 424.23999999999995, "end": 429.06, "text": " porque va a ser el eje cafetero y hoy sigue siendo una parte nodal de nuestra"}, {"start": 429.06, "end": 438.15999999999997, "text": " historia todo el pa\u00eds va a vivir en la d\u00e9cada en 1870 el caf\u00e9 como nuestro"}, {"start": 438.16, "end": 444.96000000000004, "text": " producto insigne de exportaci\u00f3n ese caf\u00e9 permiti\u00f3 que nosotros"}, {"start": 444.96000000000004, "end": 450.68, "text": " centraramos a la moderna porque eso ha sido de tales proporciones que nos dio"}, {"start": 450.68, "end": 455.16, "text": " un lugar en el mundo nos dio un lugar en las exportaciones nos dio un lugar en"}, {"start": 455.16, "end": 460.24, "text": " la identidad planetaria y nos da la sabrosura del caf\u00e9 adem\u00e1s de todas las"}, {"start": 460.24, "end": 465.40000000000003, "text": " ventadas hist\u00f3ricas es que rico el caf\u00e9 yo que soy particularmente tintera pues"}, {"start": 465.4, "end": 471.59999999999997, "text": " me me me regocijo con esta historia porque pues la vivo diariamente entonces"}, {"start": 471.59999999999997, "end": 477.0, "text": " manizales se vuelve la capital del eje cafetero y eso le da un lugar en la"}, {"start": 477.0, "end": 504.72, "text": " historia muy preponderante"}, {"start": 567.8, "end": 573.64, "text": " entonces eso le genera un crecimiento la cosa m\u00e1s impresionante pero un crecimiento"}, {"start": 573.64, "end": 580.48, "text": " verti y no so verdaderamente grande y eso hace que r\u00e1pidamente vaya a tener una"}, {"start": 580.48, "end": 584.68, "text": " poblaci\u00f3n mucho m\u00e1s grande que otras ciudades que tienen muchos a\u00f1os"}, {"start": 584.68, "end": 590.68, "text": " anteriores de fundaci\u00f3n eso le va a dar un car\u00e1cter en que irrumpe manizales"}, {"start": 590.68, "end": 596.12, "text": " en la historia a partir de todo el empuje y el impulso del caf\u00e9 que le va a dar"}, {"start": 596.12, "end": 603.72, "text": " toda esta 
identidad Colombia en su construcci\u00f3n econ\u00f3mica de independencia va a"}, {"start": 603.72, "end": 610.0, "text": " buscar en el caf\u00e9 la base econ\u00f3mica y va a ser uno de los pa\u00edses exportadores"}, {"start": 610.0, "end": 615.4, "text": " m\u00e1s grandes y deba crear uno de los mercados m\u00e1s grandes nosotros tenemos"}, {"start": 615.4, "end": 621.16, "text": " representaci\u00f3n en Londres bueno eso ha sido toda una construcci\u00f3n de pa\u00eds a partir del caf\u00e9"}, {"start": 621.16, "end": 627.48, "text": " y digamos la agricultura nos ha generado muchos otros otros productos pero el"}, {"start": 627.48, "end": 632.7199999999999, "text": " caf\u00e9 siempre ha sido la bancuardia de la modernidad y de la entrada de Colombia"}, {"start": 632.7199999999999, "end": 638.8399999999999, "text": " al siglo XX y de la entrada de Colombia como a los mercados del mundo al escenario"}, {"start": 638.8399999999999, "end": 644.9599999999999, "text": " internacional y a la identidad que todo eso va a generar nosotros los colombianos"}, {"start": 644.9599999999999, "end": 649.16, "text": " socializamos a trav\u00e9s del caf\u00e9 o sea eso es a trav\u00e9s de lo cual nosotros nos"}, {"start": 649.16, "end": 654.76, "text": " sentamos y no seamos amigos tom\u00e1 menos un tinto si es la manera tinto llamamos"}, {"start": 654.76, "end": 661.8399999999999, "text": " nosotros para los que nos escuchan de otras latitudes a un caf\u00e9 que uno se toma"}, {"start": 661.8399999999999, "end": 667.7199999999999, "text": " que se llama tinto y es la base de toda la socializaci\u00f3n en Colombia porque uno"}, {"start": 667.7199999999999, "end": 673.12, "text": " empieza a portomarse un tinto para cualquier forma de relacionarse labora la"}, {"start": 673.12, "end": 678.64, "text": " efectiva amistosa parrande todo empieza por un tinto o cuando uno llega a una"}, {"start": 678.64, "end": 684.3199999999999, "text": " casa y no le ofrecieron ni un tinto aquello fue la miseria absoluta ni un tinto me"}, {"start": 684.3199999999999, "end": 691.72, "text": " ofrecieron o sea hay que entender la importancia del caf\u00e9 en nuestra cultura para"}, {"start": 691.72, "end": 697.04, "text": " entender la importancia de manizales en nuestro relato entonces somos un pa\u00eds"}, {"start": 697.04, "end": 702.4, "text": " totalmente permeado por el caf\u00e9 desde las viejas historias de todo ese caf\u00e9"}, {"start": 702.4, "end": 707.48, "text": " que en Europa va a llegar cuando los otomanos fueron derrotados y dejar una"}, {"start": 707.48, "end": 712.2, "text": " cantidad de sacos de caf\u00e9 el que hab\u00eda tenido su origen en etiop\u00eda que ha"}, {"start": 712.2, "end": 717.9200000000001, "text": " atravesado el imperio tomano que cuando los otomanos sufren esa derrota a"}, {"start": 717.9200000000001, "end": 721.76, "text": " manos del imperio osra ungo de los bieneses hasta el punto donde ya no pueden"}, {"start": 721.76, "end": 727.32, "text": " avanzar m\u00e1s en esas colinas a la salida de biena dejan una cantidad de saco de"}, {"start": 727.32, "end": 733.04, "text": " caf\u00e9 esa cantidad de saco de caf\u00e9 van a ser tomados por los bieneses van a ser"}, {"start": 733.04, "end": 738.76, "text": " colados se van a volver m\u00e1s suaves y van a empezar a crear los caf\u00e9s bieneses y"}, {"start": 738.76, "end": 743.3199999999999, "text": " ese caf\u00e9 va a llegar tambi\u00e9n por la v\u00eda de los caf\u00e9s pariscinos que es donde"}, {"start": 743.3199999999999, 
"end": 747.24, "text": " se va a forgar una buena parte de la ilustraci\u00f3n y la revoluci\u00f3n francesa y"}, {"start": 747.24, "end": 752.4, "text": " van a llegar despu\u00e9s a nuestra tierra por la v\u00eda de Santander por los"}, {"start": 752.4, "end": 757.4399999999999, "text": " Santanderes y hoy ac\u00e1 que va a llegar por primera vez y va a constituir la"}, {"start": 757.4399999999999, "end": 762.8399999999999, "text": " identidad hist\u00f3rica de manizales en la men que ellos van a ser el coraz\u00f3n del"}, {"start": 762.84, "end": 767.44, "text": " relato de la historia m\u00e1s importante que nosotros tenemos que la historia de"}, {"start": 767.44, "end": 772.2, "text": " nuestra relaci\u00f3n con el caf\u00e9 entonces ah\u00ed ya nos vamos situando en donde estamos"}, {"start": 772.2, "end": 777.6600000000001, "text": " entonces despu\u00e9s de que el caf\u00e9 lleg\u00f3 por los santanderes y por"}, {"start": 777.6600000000001, "end": 783.1600000000001, "text": " Kundina marca va a empezar a distribuirse por todo el pa\u00eds pero su epicentro"}, {"start": 783.1600000000001, "end": 788.88, "text": " va a ser manizales y toda esta regi\u00f3n lo que despu\u00e9s se n\u00edan los departamentos"}, {"start": 788.88, "end": 796.04, "text": " del quintillo de Rizaraldas de Caldas tanto el viejo Caldas que los abarcaba a"}, {"start": 796.04, "end": 801.96, "text": " todos como despu\u00e9s la reciente divisi\u00f3n de cada uno de estos departamentos en"}, {"start": 801.96, "end": 806.96, "text": " el quintillo en Rizaraldas y en Caldas va a generar una serie de capitales pues"}, {"start": 806.96, "end": 813.44, "text": " que van a ser Pereira que van a ser Armenia y manizales entonces en lo que"}, {"start": 813.44, "end": 819.6800000000001, "text": " tiene que ver con todo el epicentro del caf\u00e9 va a ser manizales la capital de"}, {"start": 819.6800000000001, "end": 824.4000000000001, "text": " ejecafetero entonces eso le va a dar a la regi\u00f3n un empuje econ\u00f3mico"}, {"start": 824.4000000000001, "end": 829.96, "text": " impresionante le va a dar prosperidad desarrollo de peque\u00f1as industrias una"}, {"start": 829.96, "end": 836.0, "text": " cantidad de peque\u00f1os negocios y despu\u00e9s vamos a tener una circunstancia"}, {"start": 836.0, "end": 843.18, "text": " hist\u00f3rica que es que en 1920 con la separaci\u00f3n de Panam\u00e1 los Estados Unidos"}, {"start": 843.18, "end": 849.32, "text": " va a dar una indemnizaci\u00f3n a Colombia por la p\u00e9rdida de Panam\u00e1 para nosotros"}, {"start": 849.32, "end": 853.88, "text": " esto es la p\u00e9rdida de Panam\u00e1 para Panam\u00e1 y su surgimiento como Estado nacional"}, {"start": 853.88, "end": 860.08, "text": " y es su independencia y para los Estados Unidos es una enorme movida de"}, {"start": 860.08, "end": 866.1600000000001, "text": " destino manifiesto para unir los dos oceanos en un proyecto de expansi\u00f3n imperial"}, {"start": 866.1600000000001, "end": 871.36, "text": " enorme el asunto es que por eso nos van a dar un vietel argo y ese vietel"}, {"start": 871.36, "end": 877.48, "text": " argo es la indemnizaci\u00f3n de la p\u00e9rdida de Panam\u00e1 ese vietel se va a invertir en"}, {"start": 877.48, "end": 885.44, "text": " caf\u00e9 y ese caf\u00e9 se hacen manizales y eso le va a dar a manizales un apoyo y un"}, {"start": 885.44, "end": 892.48, "text": " empuje econ\u00f3mico urban\u00edstico cultural va a ser como la gui\u00e1n beneficiaria de"}, {"start": 892.48, "end": 
898.7600000000001, "text": " la indemnizaci\u00f3n de la separaci\u00f3n de Panam\u00e1 entonces esta ciudad brota as\u00ed"}, {"start": 898.7600000000001, "end": 904.36, "text": " brota inclusive ellos lo dicen con mucho orgullo que ellos brotan como brota de la"}, {"start": 904.36, "end": 908.2, "text": " tierra nosotros somos muy f\u00e9rtiles en este pa\u00eds tenemos regiones de una"}, {"start": 908.2, "end": 914.6400000000001, "text": " fertilidad incre\u00edble entonces as\u00ed como brota lo que se cae al piso porque"}, {"start": 914.64, "end": 918.0, "text": " aqu\u00ed hay cosas que se caen y brotan y de ah\u00ed salen \u00e1rboles y de ah\u00ed salen"}, {"start": 918.0, "end": 922.48, "text": " matas se pasa con el algod\u00f3n eso pasa con much\u00edsimos de nuestros productos"}, {"start": 922.48, "end": 928.96, "text": " que s\u00f3lo conestar en el piso brotan manizales brota brota como que irrumpe como"}, {"start": 928.96, "end": 935.68, "text": " que florece en la mitad de estas afortunad\u00edsimas coyunturas hist\u00f3ricas para la"}, {"start": 935.68, "end": 940.4, "text": " formaci\u00f3n de la ciudad entonces bueno pero aqu\u00ed tenemos un problema hay que"}, {"start": 940.4, "end": 944.36, "text": " transportar el caf\u00e9 y nosotros tenemos unas monta\u00f1as las m\u00e1s tremendas"}, {"start": 944.36, "end": 951.12, "text": " porque este sistema de tres cordilleras que se levantan a unas alturas tan grandes"}, {"start": 951.12, "end": 958.4, "text": " hace que el transporte entre ellas siempre haya sido complicado y la vuelta de"}, {"start": 958.4, "end": 965.6, "text": " encarreteras es es es es carpada es torposa y hacer un ferrocal\u00edde en ese momento"}, {"start": 965.6, "end": 970.48, "text": " ah\u00ed estaba muy dif\u00edcil porque era una tarea tit\u00e1nica y necesitamos llegar al"}, {"start": 970.48, "end": 975.52, "text": " Magdalena porque el Magdalena es el r\u00edo que realmente nos va a articular como"}, {"start": 975.52, "end": 981.64, "text": " pa\u00eds es el alma de este pueblo nuestro aqu\u00ed llegar al Magdalena como vamos a"}, {"start": 981.64, "end": 987.4, "text": " llegar al Magdalena por entre estas monta\u00f1as entonces va a surgir una idea"}, {"start": 987.4, "end": 995.52, "text": " buen\u00edsima y es la de hacer un cable un cable a\u00e9reo que comunique a manizales"}, {"start": 995.52, "end": 1000.76, "text": " con mariquita en el tolima porque de ah\u00ed salga para el Magdalena entonces"}, {"start": 1000.76, "end": 1005.96, "text": " est\u00e1 la manera de transportar el caf\u00e9 que en la medida en que crece est\u00e1n"}, {"start": 1005.96, "end": 1011.4, "text": " necesitando much\u00edsimo m\u00e1s soluciones de transporte que no se pod\u00eda andar de"}, {"start": 1011.4, "end": 1016.1999999999999, "text": " otra manera sin que fueran incre\u00edblemente tuertuosas y prolongadas entonces"}, {"start": 1016.1999999999999, "end": 1021.0, "text": " hay que atravesar r\u00edos cordilleras que imaginarse porque este es un punto en"}, {"start": 1021.0, "end": 1027.88, "text": " que nuestro geograf\u00eda es incre\u00edblemente diversa descomunal como se va a"}, {"start": 1027.88, "end": 1033.48, "text": " pasar por ah\u00ed entonces se hace el cable un cable que atraviese todo el cable"}, {"start": 1033.48, "end": 1038.24, "text": " m\u00e1s largo del mundo el cable a\u00e9reo m\u00e1s impresionante esto es una obra enorme"}, {"start": 1038.24, "end": 1045.0, "text": " empieza en 1912 con el apoyo t\u00e9cnico y econ\u00f3mico 
de una firma inglesa pero"}, {"start": 1045.0, "end": 1050.84, "text": " adivinen que el viejo truco la primera guerra mundial entonces todo lo que viene"}, {"start": 1050.84, "end": 1057.4, "text": " de Europa con transporte material o tecnolog\u00eda de Europa queda en el colapso"}, {"start": 1057.4, "end": 1062.32, "text": " del suicidio de la raz\u00f3n que es la primera guerra mundial en donde los"}, {"start": 1062.32, "end": 1067.52, "text": " europeos experimentan un entro piano un cante es vista pues bueno esto es lo que"}, {"start": 1067.52, "end": 1073.72, "text": " venga ya va a quedar va a quedar en veremos porque pues ellos est\u00e1n en el en el"}, {"start": 1073.72, "end": 1078.4, "text": " tema de destruirse y echarse 20 millones de muertos por un conflicto"}, {"start": 1078.4, "end": 1082.68, "text": " de mensual y sin sentido que quedar\u00eda mal terminado y llevar\u00eda a otro que"}, {"start": 1082.68, "end": 1088.64, "text": " ser\u00eda la segunda entonces mientras ellos est\u00e1n en la destrucci\u00f3n de la guerra"}, {"start": 1088.64, "end": 1093.88, "text": " nosotros nos quedamos as\u00ed como muy llora que vamos a hacer con esto la obra se"}, {"start": 1093.88, "end": 1099.76, "text": " finalizan 1923 y entonces con las regal\u00edas que est\u00e1n entrando en ese"}, {"start": 1099.76, "end": 1107.8799999999999, "text": " momento de panam\u00e1 ya se logr\u00f3 hacer con 20 estaciones 70 kil\u00f3metros de largo se"}, {"start": 1107.8799999999999, "end": 1112.04, "text": " convierte en el cable m\u00e1s largo del mundo o sea aqu\u00ed imaginarse esto porque"}, {"start": 1112.04, "end": 1118.0, "text": " estamos por entre unas monta\u00f1as que son particularmente abruptas escarpadas"}, {"start": 1118.0, "end": 1125.12, "text": " enormes en esa zona y esto va a conectar a esta regi\u00f3n del pa\u00eds y al caf\u00e9"}, {"start": 1125.12, "end": 1130.4399999999998, "text": " con el resto del mundo se cabe va a hacer la salida al magdalena y la salida al"}, {"start": 1130.4399999999998, "end": 1135.0, "text": " magdalena en la salida al mundo porque por el magdalena nosotros nos vamos a"}, {"start": 1135.0, "end": 1140.04, "text": " poder comunicar con el mundo hay que entender la maravilla de nuestra geograf\u00eda"}, {"start": 1140.04, "end": 1146.36, "text": " para poder dar lepiso a estos relatos y entender lo que significan este cable"}, {"start": 1146.36, "end": 1154.52, "text": " tiene 71.823 metros de longitud y tiene 375 torres de acero"}, {"start": 1154.52, "end": 1160.56, "text": " o sea agameso construyalo a excepci\u00f3n del ator de del herveo que est\u00e1"}, {"start": 1160.56, "end": 1167.6, "text": " hecho en madera y algunas de las alturas est\u00e1n entre 4 y 55 metros"}, {"start": 1167.6, "end": 1175.96, "text": " distribuidas en 15 secciones o sea eso es una obra monumental entonces esto"}, {"start": 1175.96, "end": 1180.6399999999999, "text": " ten\u00eda vagonetas tiene vagonetas por dentro se impulsaba carga y esas"}, {"start": 1180.64, "end": 1186.24, "text": " vagonetas van con 8 motores de 140 caballos de fuerza todo recorrido tomado"}, {"start": 1186.24, "end": 1193.8000000000002, "text": " 10 horas y el hecho de que el recorrido tomara 10 horas nos va a dar un gran"}, {"start": 1193.8000000000002, "end": 1200.88, "text": " progreso sobre lo que significaba cargarlas en mula que eran 10 d\u00edas y aqu\u00ed con"}, {"start": 1200.88, "end": 1205.0400000000002, "text": " el cable a\u00e9reo pues va a ser en 10 horas eso hace una 
diferencia en la"}, {"start": 1205.04, "end": 1211.2, "text": " productividad en el crecimiento en el desarrollo y en la pujanza pues del cielo a la"}, {"start": 1211.2, "end": 1214.6, "text": " tierra muy grande entonces con este cable nos vamos a comunicar con el mundo"}, {"start": 1214.6, "end": 1217.68, "text": " desde Manizales"}, {"start": 1220.52, "end": 1224.76, "text": " aqu\u00ed empieza el espacio comercial"}, {"start": 1229.76, "end": 1234.8, "text": " la feria de Manizales una de las celebraciones m\u00e1s tradicionales dentro de nuestro"}, {"start": 1234.8, "end": 1240.52, "text": " pa\u00eds y un referente para todos los que quieren venir a vivir y a sentir la"}, {"start": 1240.52, "end": 1246.2, "text": " la agr\u00eda de Colombia los invito a que escuchen el origen la trayectoria los"}, {"start": 1246.2, "end": 1250.52, "text": " protagonistas de la feria de Manizales en mi podcast las historias de Diana"}, {"start": 1250.52, "end": 1256.56, "text": " Uribe a trav\u00e9s de radio nacional de Colombia y luego escuchalo cuando quieras en"}, {"start": 1256.56, "end": 1265.32, "text": " rtbcplay.com"}, {"start": 1266.32, "end": 1273.32, "text": " bueno ya puestas todas las condiciones para que haya una ciudad que haya"}, {"start": 1273.32, "end": 1278.72, "text": " emergido de esta manera que se convierta en un epicentro tan importante como la"}, {"start": 1278.72, "end": 1284.56, "text": " historia cafetera de los colombianos ahora es tiempo de una feria ya lo vimos"}, {"start": 1284.56, "end": 1292.8, "text": " lo vimos en pasto lo vimos en barranquilla vimos como cuando se crea la feria la"}, {"start": 1292.8, "end": 1299.6399999999999, "text": " ciudad entra en una como en una armon\u00eda diferente porque irrumpe en su"}, {"start": 1299.6399999999999, "end": 1305.1599999999999, "text": " cultura aquello que va a determinar la vida y el amor de sus habitantes que son"}, {"start": 1305.1599999999999, "end": 1310.6, "text": " las ferias las ciudades que tienen ferias pues son ciudades felices porque la"}, {"start": 1310.6, "end": 1315.56, "text": " gente se va a pasar el resto de la vida todo el a\u00f1o preparando esta feria y eso"}, {"start": 1315.56, "end": 1320.1599999999999, "text": " como hemos visto es una de las ocupaciones m\u00e1s sabrosas y maravillosas de la"}, {"start": 1320.1599999999999, "end": 1325.9599999999998, "text": " vida que es tener una feria en casa bueno entonces vamos a hacer esta feria como"}, {"start": 1325.9599999999998, "end": 1331.84, "text": " como les digo al principio esta feria va a reivindicar un origen"}, {"start": 1331.84, "end": 1340.48, "text": " esp\u00e1nico al igual que barranquilla en un origen afro o pasto un origen ind\u00edgena"}, {"start": 1340.48, "end": 1346.56, "text": " un origen profundamente afro cada una de nuestras ra\u00edces se va a ver"}, {"start": 1346.56, "end": 1352.32, "text": " representada en mayor o menor grado en cada una de estas ferias la mezcla de"}, {"start": 1352.32, "end": 1359.56, "text": " todas ellas nos define y la vertiente de cada una de ellas nos habita no se"}, {"start": 1359.56, "end": 1363.8799999999999, "text": " haz\u00f3mos todo eso tambi\u00e9n y es parte de la diversidad impresionante que somos"}, {"start": 1363.8799999999999, "end": 1370.48, "text": " los colombianos entonces un momento en que quer\u00edan reactivar la econom\u00eda"}, {"start": 1370.48, "end": 1374.08, "text": " porque hab\u00eda habido unos declives por cuenta de grandes incendios que se"}, {"start": 1374.08, "end": 
1379.44, "text": " tambi\u00e9n nos pas\u00f3 en cali cuando uosemejante est\u00e1 llido tan impresionante que"}, {"start": 1379.44, "end": 1383.08, "text": " se acab\u00f3 el centro de cali aqu\u00ed un incendios incendios poderosos"}, {"start": 1383.08, "end": 1388.52, "text": " vueltere motos porque esta es una zona geol\u00f3icamente inestable aqu\u00ed pasan"}, {"start": 1388.52, "end": 1394.32, "text": " terremotos muy terribles pasan cosas graves tambi\u00e9n"}, {"start": 1394.32, "end": 1400.8, "text": " entonces hab\u00eda habido un declive econ\u00f3mico y se no reactivemos todo esto con"}, {"start": 1400.8, "end": 1406.28, "text": " una feria y vamos a hacer una cosa bien bonita por los 100 a\u00f1os de la"}, {"start": 1406.28, "end": 1411.96, "text": " fundaci\u00f3n de manizales entonces se preparan celebraciones y fiestas pero"}, {"start": 1411.96, "end": 1417.12, "text": " a nosotros nos atraviesan unas cosas tan terribles de hombre y la gracias que"}, {"start": 1417.12, "end": 1421.36, "text": " salimos adelante por encima de ellas cuando ya estaba todo listo todo bonito y"}, {"start": 1421.36, "end": 1430.08, "text": " todo chevere a tan a Jorge Lisser gait\u00e1n en 1948 en Bogota y Estalla o"}, {"start": 1430.08, "end": 1435.9399999999998, "text": " se generaliza un fen\u00f3meno que nosotros vamos a conocer como la violencia con"}, {"start": 1435.9399999999998, "end": 1442.32, "text": " Vema y\u00fascula Nagerra Civil que nos desangru\u00f3 en la vida en la sangre y en la"}, {"start": 1442.32, "end": 1447.76, "text": " memoria y que todav\u00eda nos persigue en todos los fantasmas de lo que fue a haber"}, {"start": 1447.76, "end": 1452.1599999999999, "text": " vivido eso y que en toda la feria que tiene que ver con ese periodo de"}, {"start": 1452.1599999999999, "end": 1457.76, "text": " nos atraviesa de una u otra manera todas se vieron atravesadas por el momento de"}, {"start": 1457.76, "end": 1461.36, "text": " la violencia entonces esto no se puede hacer como lo pens\u00e1bamos hacer ese d\u00eda"}, {"start": 1461.36, "end": 1468.12, "text": " ni nada entonces se sigue pensando en c\u00f3mo celebrar esto en c\u00f3mo hacer una"}, {"start": 1468.12, "end": 1471.96, "text": " feria para la ciudad a pesar de todo lo que est\u00e1 pasando porque a nosotros"}, {"start": 1471.96, "end": 1476.8400000000001, "text": " no nos detiene nada o sea para gozar y para hacer ferias y para hacer fiestas"}, {"start": 1476.8400000000001, "end": 1483.6000000000001, "text": " lo hemos reiterado a lo largo de estos relatos no nos detiene nada aqu\u00ed se"}, {"start": 1483.6000000000001, "end": 1489.96, "text": " rumb\u00e9 a pase lo que pase y esa es esp\u00edritu de celebraci\u00f3n y de gozadera"}, {"start": 1489.96, "end": 1495.16, "text": " tambi\u00e9n un esp\u00edritu de resiliencia y nos hace poderosos en la celebraci\u00f3n"}, {"start": 1495.16, "end": 1499.76, "text": " es uno de nuestros grandes poderes como sociedad y como pueblo el poder de la"}, {"start": 1499.76, "end": 1505.36, "text": " celebraci\u00f3n es c\u00f3mo nos vamos a inventar esto lo hay una persona que vive enamorado"}, {"start": 1505.36, "end": 1511.08, "text": " de Sevilla y se iba para las ferias de Sevilla en abril que son las ferias que"}, {"start": 1511.08, "end": 1515.16, "text": " se hacen para conmemorar la llegada de la primera dicen de Sevilla que quien"}, {"start": 1515.16, "end": 1518.72, "text": " no ha visto Sevilla no conoce maravilla o sea para las fiestas de Sevilla y"}, {"start": 1518.72, 
"end": 1523.76, "text": " o as\u00ed es como las fiestas de Sevilla las vamos a hacer en manizales entonces"}, {"start": 1523.76, "end": 1530.28, "text": " trae parte de las principales caracter\u00edsticas de la celebraci\u00f3n de Sevilla las"}, {"start": 1530.28, "end": 1538.48, "text": " carretas del rocillo la manzanilla las manulas las casetas y esto se va a dar"}, {"start": 1538.48, "end": 1544.48, "text": " alrededor de un fen\u00f3meno que va a tener una importancia cultural muy grande en"}, {"start": 1544.48, "end": 1550.92, "text": " Colombia durante un periodo de su historia las corridas de toros de 10 a\u00f1os"}, {"start": 1550.92, "end": 1555.64, "text": " antes de la inauguraci\u00f3n de la feria y a manizales ten\u00eda corrida de"}, {"start": 1555.64, "end": 1560.48, "text": " toros y esto tambi\u00e9n va a atravesar nuestras celebraciones bastante pues la"}, {"start": 1560.48, "end": 1567.0800000000002, "text": " vimos en la feria de cali vimos que las corridas de toros van a tener hoy para"}, {"start": 1567.0800000000002, "end": 1570.64, "text": " tener una importancia muy grande hoy por hoy esto se ve con unos ojos muy"}, {"start": 1570.64, "end": 1578.0, "text": " distintos pero en su momento eso era una de las formas de entrar en una"}, {"start": 1578.0, "end": 1582.68, "text": " celebraci\u00f3n de la de lo que en ese momento se consier\u00e1 la modernidad las"}, {"start": 1582.68, "end": 1587.52, "text": " cosas van cambiando de categor\u00edas y de maneras de verse hoy por hoy eso se ve"}, {"start": 1587.52, "end": 1591.56, "text": " como bastante m\u00e1s cercano a la barbar\u00eda que a la modernidad pero en esa"}, {"start": 1591.56, "end": 1596.24, "text": " \u00e9poca eso se ve\u00eda como una modernidad se empiezan a hacer plazas de toros en"}, {"start": 1596.24, "end": 1602.32, "text": " toda Colombia y la celebraci\u00f3n del torneo va a llegar a crear toda una cultura"}, {"start": 1602.32, "end": 1609.8799999999999, "text": " entre nosotros es parte de lo que nos ha transitado en estas en estas b\u00fasquedas"}, {"start": 1609.8799999999999, "end": 1615.32, "text": " de identidades y de y de construcciones aqu\u00ed el tema de los toros va a ser"}, {"start": 1615.32, "end": 1624.56, "text": " muy importante en 1955 oficialmente se hizo la primera feria de manizales y desde"}, {"start": 1624.56, "end": 1632.0, "text": " ah\u00ed la han celebrado 65 veces o sea esta feria es muy tradicional en la"}, {"start": 1632.0, "end": 1636.8, "text": " media que ha tenido una continuidad de celebraci\u00f3n muy importante y que genera"}, {"start": 1636.8, "end": 1643.56, "text": " un esp\u00edritu de pertenencia y de identidad en la ciudad muy poderos en cada uno"}, {"start": 1643.56, "end": 1648.08, "text": " de estos a\u00f1os y es el momento en que todas las ferias y fiestas est\u00e1n en ese"}, {"start": 1648.08, "end": 1654.48, "text": " momento en Colombia o sea nosotros tenemos dos puntos important\u00edsimos y es entre"}, {"start": 1654.48, "end": 1659.92, "text": " diciembre y enero que todo el pa\u00eds est\u00e1 de fiestas y carnavales y el otro es"}, {"start": 1659.92, "end": 1665.5, "text": " ahorita en un julio que vienen tambi\u00e9n otra otra etapa de carnavales y ah\u00ed"}, {"start": 1665.5, "end": 1670.28, "text": " todo el pa\u00eds se enrumba entonces la feria de manizales forma parte de esta"}, {"start": 1670.28, "end": 1674.88, "text": " rumba junto con la feria de cal y junto con todo y despu\u00e9s de que t\u00fa"}, {"start": 1674.88, "end": 
1678.72, "text": " sales de la feria de manizales y de la feria de cal y esto va a empezar el"}, {"start": 1678.72, "end": 1684.3200000000002, "text": " carnaval de negros y blancos as\u00ed que el que se quieren rumbarse pueden rumbar"}, {"start": 1684.32, "end": 1690.4399999999998, "text": " desde las cuadrillas de san mart\u00edn en noviembre pasarse todo y ciembre en ferias"}, {"start": 1690.4399999999998, "end": 1695.72, "text": " desde las velitas que empieza la navidad y rematar con negros y blancos y si"}, {"start": 1695.72, "end": 1701.04, "text": " adem\u00e1s son los d\u00edas de r\u00edo sucio que es cada dos a\u00f1os pues mira la enrubada"}, {"start": 1701.04, "end": 1706.3999999999999, "text": " que te puedes pegar es absolutamente magn\u00edfica porque se nos juntan todas al"}, {"start": 1706.3999999999999, "end": 1711.8799999999999, "text": " tiempo y se nos ponen de la m\u00e1s emocionantes porque se hace esto en manizales"}, {"start": 1711.88, "end": 1717.44, "text": " en esa fecha porque manizales es una ciudad donde llueve mucho en la regi\u00f3n"}, {"start": 1717.44, "end": 1721.44, "text": " del eje para que afeter\u00f3 llueve mucho su produce parte de la fertilidad y del"}, {"start": 1721.44, "end": 1726.44, "text": " florecimiento entonces es la semana m\u00e1s seca del a\u00f1o en la ciudad a cu\u00e9les"}, {"start": 1726.44, "end": 1730.16, "text": " que nosotros tenemos un zona donde llueve mucho en el choc\u00f3 cuando no llueve"}, {"start": 1730.16, "end": 1735.5200000000002, "text": " dos d\u00edas lo llaman veranillo y aqu\u00ed es la semana m\u00e1s seca del a\u00f1o por eso es"}, {"start": 1735.5200000000002, "end": 1740.5600000000002, "text": " la que semana que se hace la feria entonces ya con el \u00e9xito de la feria"}, {"start": 1740.56, "end": 1744.84, "text": " vamos a musicalizar la feria la banda sonora de la feria de manizales es el"}, {"start": 1744.84, "end": 1751.8799999999999, "text": " paso doble y eso ha sido el \u00e9xito desde el comienzo de la feria porque un"}, {"start": 1751.8799999999999, "end": 1757.36, "text": " reconocido poetacal dense Guillermo Gonz\u00e1lez ospina quiso escribir unos"}, {"start": 1757.36, "end": 1763.3999999999999, "text": " versos en honor a la ciudad de manizales y le llev\u00f3 la letra a Oscar oyos"}, {"start": 1763.3999999999999, "end": 1767.9199999999998, "text": " botero que fue el fundador de la feria y a \u00e9l le padeci\u00f3 mucho ver la idea"}, {"start": 1767.92, "end": 1773.28, "text": " entonces el maestro Gonz\u00e1lez ten\u00eda como la idea de que fuera un bombuco pero"}, {"start": 1773.28, "end": 1777.8000000000002, "text": " oyos botero que viena murado Espa\u00f1a y de las corridas le hicieron que"}, {"start": 1777.8000000000002, "end": 1782.44, "text": " ser un paso doble entonces porque es el ritmo tradicional del sur de Espa\u00f1a"}, {"start": 1782.44, "end": 1787.0800000000002, "text": " entonces por eso bail\u00e9 pide a un director de orquestado al anciano que es Jos\u00e9"}, {"start": 1787.0800000000002, "end": 1793.2, "text": " Mariancis que convierta la letra del maestro Gonz\u00e1lez en un paso doble cosa que"}, {"start": 1793.2, "end": 1799.0, "text": " las hace y al hacerlo va a crear algo que es realmente el digno extraoficial"}, {"start": 1799.0, "end": 1803.72, "text": " pero el digno de manizales el paso doble de manizales del alma eso se"}, {"start": 1803.72, "end": 1809.68, "text": " bailen las fiestas es donde los sabemos todos se ha convertido en una canci\u00f3n"}, 
{"start": 1809.68, "end": 1814.72, "text": " que nos identifica culturalmente en much\u00edsimos lugares la feria de manizales"}, {"start": 1814.72, "end": 1819.24, "text": " es una cosa que todo el mundo se much\u00edsima gente se sabe y nos ha tocado en"}, {"start": 1819.24, "end": 1825.48, "text": " todas las regiones pasodobles y la gente sabe mucha gente sabe bailar pasodobles es"}, {"start": 1825.48, "end": 1828.88, "text": " parte de lo que de lo que nos ha habitado es exacto esa parte"}, {"start": 1828.88, "end": 1834.72, "text": " hispanica del paso doble la herencia de las t\u00faanas que recorre absolutamente"}, {"start": 1834.72, "end": 1841.32, "text": " todo el continente las t\u00faanas como estos cantos de viguela de bandola donde"}, {"start": 1841.32, "end": 1847.68, "text": " realmente se viste como el siglo 16 de eso hay en todo el continente y es esa"}, {"start": 1847.68, "end": 1853.88, "text": " parte digamos musical que se nutre de las tradiciones espa\u00f1olas y que la"}, {"start": 1853.88, "end": 1859.2, "text": " lleva en la en la sonoridad y en la sonoridad de las m\u00fasicas de cuerda y todo"}, {"start": 1859.2, "end": 1864.92, "text": " eso aqu\u00ed de esa digamos de esa misma beta de donde vienen las t\u00faanas de"}, {"start": 1864.92, "end": 1872.04, "text": " donde viene toda la musicalidad espa\u00f1ola viene el paso doble y va a tener su"}, {"start": 1872.04, "end": 1876.8400000000001, "text": " mayor representaci\u00f3n en la feria de manizales y se va a convertir el elimno de"}, {"start": 1876.84, "end": 1881.12, "text": " la ciudad o sea no es el l\u00edmno oficial pero es el libro de manizales todo el"}, {"start": 1881.12, "end": 1885.3999999999999, "text": " mundo va a conocer estas letras y va a conocer esta m\u00fasica en colombian sus"}, {"start": 1885.3999999999999, "end": 1889.12, "text": " partes de los de nosotros vemos unas narrativas colectivas que van desde el"}, {"start": 1889.12, "end": 1894.76, "text": " ballenato hasta el paso doble pasando por la cumbia pasando por el porro"}, {"start": 1894.76, "end": 1899.4399999999998, "text": " porque todas las tradiciones de fiestas que les cuento van a entrar en un"}, {"start": 1899.4399999999998, "end": 1905.0, "text": " momento a mezclarse para formar este conjunto de diversidad cultural que"}, {"start": 1905.0, "end": 1910.56, "text": " somos los colombianos parte de esa narrativa musical es el paso doble y de"}, {"start": 1910.56, "end": 1917.68, "text": " la feria de manizales toda la feria como tal se va a realizar en 1957 la"}, {"start": 1917.68, "end": 1922.84, "text": " primera versi\u00f3n del rey nado internacional del caf\u00e9 que tambi\u00e9n le va a dar a la"}, {"start": 1922.84, "end": 1927.52, "text": " feria mucha importancia y se coro una la primera reina del caf\u00e9 que va a ser"}, {"start": 1927.52, "end": 1932.76, "text": " una paname\u00f1a anhelida al faru y ah\u00ed en adelante las reinas que llegaron a"}, {"start": 1932.76, "end": 1936.76, "text": " tener tanta importancia que ten\u00edan vuelos directos a manizales en los reynados"}, {"start": 1936.76, "end": 1941.28, "text": " van a ser muy importantes en las ferias eso lo hemos visto son parte digamos de"}, {"start": 1941.28, "end": 1945.32, "text": " momentos cumbres de la celebraci\u00f3n de la feria son los reynados lo vimos en la"}, {"start": 1945.32, "end": 1951.24, "text": " reina del carnaval de barranquilla aqu\u00ed la reina del caf\u00e9 tiene una importancia"}, {"start": 1951.24, "end": 1956.82, 
"text": " muy grande sobre todo desde cuando luz varina su lo haga fue nombrada mis"}, {"start": 1956.82, "end": 1963.0, "text": " universo en 1958 y es la primera mis universo que hemos tenido en la historia y"}, {"start": 1963.0, "end": 1967.04, "text": " ella va a ser la que va a promocionar la feria siempre va a decir lo que los"}, {"start": 1967.04, "end": 1972.1599999999999, "text": " esperen la feria en el rey nado mundial del caf\u00e9 en el festival folklorico"}, {"start": 1972.1599999999999, "end": 1976.6799999999998, "text": " internacional entonces eso le va a dar a la feria de manizales tambi\u00e9n toda"}, {"start": 1976.6799999999998, "end": 1983.6399999999999, "text": " una forma de revestirse de una gala especial que dan las ferias pero la feria"}, {"start": 1983.64, "end": 1989.0, "text": " de manizales se ha venido entroncando con una gran cantidad de"}, {"start": 1989.0, "end": 1996.4, "text": " tradiciones de la ciudad de la regi\u00f3n que la han venido diversificando ella"}, {"start": 1996.4, "end": 2003.0400000000002, "text": " es una feria digamos de una vertiente espa\u00f1ola de Sevilla pero queda en este"}, {"start": 2003.0400000000002, "end": 2008.5600000000002, "text": " territorio y todas las cosas que habitan este territorio van a empezar a formar"}, {"start": 2008.56, "end": 2015.3999999999999, "text": " parte de la feria entonces la figura de la riero como este relato fundacional de"}, {"start": 2015.3999999999999, "end": 2021.72, "text": " quienes entr\u00f3 abriendo monte y de quien recorre esas monta\u00f1as al homo de mula"}, {"start": 2021.72, "end": 2032.12, "text": " atravesando lugares in\u00f3spitos y va llevando las colonizaciones eso va a ser una"}, {"start": 2032.12, "end": 2036.04, "text": " tradici\u00f3n popular que est\u00e1 incorporada a la feria y unes manizales ve una"}, {"start": 2036.04, "end": 2042.04, "text": " estatua muy importante a la figura del arriero entonces eso va a ser que haya"}, {"start": 2042.04, "end": 2046.8, "text": " una feria de la rier\u00eda es donde se honra al arriero con desfiles"}, {"start": 2046.8, "end": 2051.56, "text": " tradicionales contra gest\u00edpicos o sea con el poncho con el carril que le"}, {"start": 2051.56, "end": 2055.68, "text": " cabe un cl\u00f3seta dentro es pues eso no tiene fondo y con el machete y la mula"}, {"start": 2055.68, "end": 2060.96, "text": " eso es un digamos como un relato fundacional de toda la historia de la"}, {"start": 2060.96, "end": 2066.12, "text": " colonizaci\u00f3n antioque\u00f1a y de lo que va a ser el eje cafetero y tambi\u00e9n de lo"}, {"start": 2066.12, "end": 2070.96, "text": " que va a ser manizales tambi\u00e9n hay otro elemento muy importante de la cultura"}, {"start": 2070.96, "end": 2076.04, "text": " de esta cultura digamos que nosotros llamamos paisa que son todos estos"}, {"start": 2076.04, "end": 2079.68, "text": " departamentos que tienen que ver con el caf\u00e9 es todo ade de antioqu\u00eda quien"}, {"start": 2079.68, "end": 2093.8799999999997, "text": " dio caldas risar alta son los trovadores"}, {"start": 2110.6, "end": 2133.2, "text": " riding"}, {"start": 2133.2, "end": 2140.2, "text": " en el mejor vividor. 
En armella est\u00e1 panaca, el ruiza y"}, {"start": 2140.2, "end": 2144.2, "text": " amanizales, enfera y hasta la rumba y en santa Rosa,"}, {"start": 2144.2, "end": 2149.2, "text": " termales, nuestra regi\u00f3n cafetera, el lado de m\u00e1s crecimiento,"}, {"start": 2149.2, "end": 2153.2, "text": " el progreso se refleja en sus tres departamentos."}, {"start": 2153.2, "end": 2156.2, "text": " La trova a nosotros nos atraviesa por todos los extremos,"}, {"start": 2156.2, "end": 2159.2, "text": " porque nosotros somos un pueblo de mucha tradicional."}, {"start": 2159.2, "end": 2164.2, "text": " Y los trovadores son poetas que cantan a la monta\u00f1a y al trabajo"}, {"start": 2164.2, "end": 2168.2, "text": " y provisan coplas de una manera incre\u00edble."}, {"start": 2168.2, "end": 2172.2, "text": " Eso pasa en todas nuestras tradiciones, pero aqu\u00ed son muy importantes."}, {"start": 2172.2, "end": 2177.2, "text": " Y cada vez tiene las mujeres m\u00e1s participaci\u00f3n en este mundo coplero"}, {"start": 2177.2, "end": 2182.2, "text": " que antes era solamente de hombres y que cada vez cuenta m\u00e1s con la participaci\u00f3n"}, {"start": 2182.2, "end": 2186.2, "text": " de las mujeres. Hay otra cosa que me parece ataque."}, {"start": 2186.2, "end": 2190.2, "text": " Uno tiene que imaginarse c\u00f3mo son las calles en manizales."}, {"start": 2190.2, "end": 2194.2, "text": " Las calles en manizales se descoelgan, se ruedan."}, {"start": 2194.2, "end": 2199.2, "text": " Cuando usted le dice que suba, es que sube, o sea, es que es una trepada,"}, {"start": 2199.2, "end": 2204.2, "text": " incre\u00edble. Y cuando hay que bajar por ah\u00ed, es que es que usted se me ha dicho"}, {"start": 2204.2, "end": 2209.2, "text": " que se rueda ah\u00ed. 
Esto, digamos, apata ese escarpado,"}, {"start": 2209.2, "end": 2214.2, "text": " en carro es dif\u00edcil, en bicicleta es suicida,"}, {"start": 2214.2, "end": 2217.2, "text": " pero esto lo vamos a hacer en carritos de valinera."}, {"start": 2217.2, "end": 2222.2, "text": " Yo quiero que ustedes se imaginen lo que es bajar por una falda"}, {"start": 2222.2, "end": 2228.2, "text": " de una manera enloquecida en un carrito de valineras."}, {"start": 2228.2, "end": 2233.2, "text": " Curiosamente no se ha matado la gente, o sea, esto no se me ocurre nada m\u00e1s vertiginoso"}, {"start": 2233.2, "end": 2237.2, "text": " ni un deporte extremo m\u00e1s arriesgado."}, {"start": 2237.2, "end": 2241.2, "text": " Sin embargo, hay concursos de carrito de valinera y se echan"}, {"start": 2241.2, "end": 2244.2, "text": " muntia bajo por las faldas de manizales."}, {"start": 2244.2, "end": 2249.2, "text": " Y aquello es de vertigo, es de vertiginoso, es vertiginoso de ver,"}, {"start": 2249.2, "end": 2254.2, "text": " eso s\u00ed es un deporte extremo, o no ponerse a hacer concursos de carritos de"}, {"start": 2254.2, "end": 2259.2, "text": " valineras, pues las calles de manizales es una cosa loca."}, {"start": 2259.2, "end": 2264.2, "text": " Y hay concursos y estos concursos se han hecho la gente hace los carritos"}, {"start": 2264.2, "end": 2269.2, "text": " en casa, los impulsar al principio, ap\u00e1tano, pealiendo."}, {"start": 2269.2, "end": 2274.2, "text": " Y luego, por entre esas calles as\u00ed en pina, es que esto es verdad,"}, {"start": 2274.2, "end": 2279.2, "text": " es ch\u00e9vere que ustedes no solamente vean el mapa, que no solamente que vayan"}, {"start": 2279.2, "end": 2285.2, "text": " a manizales sino que traten de ver esto para que uno se imagine como es quien se"}, {"start": 2285.2, "end": 2290.2, "text": " rueda en un carrito de valineras, con una velocidad absolutamente incre\u00edble"}, {"start": 2290.2, "end": 2293.2, "text": " y hacen concursos de esto en la feria."}, {"start": 2293.2, "end": 2297.2, "text": " Entonces, vienes que cada una de las tradiciones se va uniendo a la feria,"}, {"start": 2297.2, "end": 2301.2, "text": " los carritos de valineras, la feria de la rier\u00eda,"}, {"start": 2301.2, "end": 2305.2, "text": " este es una ciudad muy importante en el teatro, lo contamos en el libro"}, {"start": 2305.2, "end": 2310.2, "text": " americano, esta es una ciudad de teatro y es una ciudad universitaria,"}, {"start": 2310.2, "end": 2316.2, "text": " es una ciudad estudiantil, lo que le da una permanente transformaci\u00f3n"}, {"start": 2316.2, "end": 2320.2, "text": " en la m\u00eda que van llegando estudiantes de todo el pa\u00eds a manizales"}, {"start": 2320.2, "end": 2324.2, "text": " y van participando de estas tradiciones, es una ciudad donde se encuentran"}, {"start": 2324.2, "end": 2328.2, "text": " muchas vertientes de la cultura para esta feria,"}, {"start": 2328.2, "end": 2332.2, "text": " he hecho la mente en tres ocasiones, ha sido suspendida a la feria,"}, {"start": 2332.2, "end": 2339.2, "text": " una fue en 1980 porque un terremoto les cuento de esta zona geol\u00f3gicamente inestable"}, {"start": 2339.2, "end": 2343.2, "text": " y en 1979 un terremoto grande,"}, {"start": 2343.2, "end": 2346.2, "text": " esos los terremotos nos van a travesar a la feria despu\u00e9s,"}, {"start": 2346.2, "end": 2348.2, "text": " les cuento como Vuel de Popall\u00e1n,"}, {"start": 2348.2, "end": 2352.2, "text": " hemos tenido terremoto realmente muy graves"}, 
{"start": 2352.2, "end": 2356.2, "text": " porque la cordillera de los andes es la m\u00e1s joven de las cordilleras planetarias,"}, {"start": 2356.2, "end": 2362.2, "text": " esto est\u00e1 apenas esta informaci\u00f3n a\u00fan con respecto a otras cordilleras del planeta,"}, {"start": 2362.2, "end": 2369.2, "text": " entonces hay terremotos y en ese es de callar un edificio y murieron cuatro personas"}, {"start": 2369.2, "end": 2372.2, "text": " y u\u00f3 temporada taurina,"}, {"start": 2372.2, "end": 2377.2, "text": " pero apenas digamos el redujo a su m\u00ednima expresi\u00f3n y por eso se llam\u00f3 la minif\u00e9ria"}, {"start": 2377.2, "end": 2383.2, "text": " pero algo se pudo hacer, la tragedia m\u00e1s grande que hemos tenido nosotros como pueblo"}, {"start": 2383.2, "end": 2392.2, "text": " y como pa\u00eds en la memoria ocurre en 1985 cuando el volc\u00e1n nevado del Ruiz,"}, {"start": 2392.2, "end": 2395.2, "text": " vecino vecino de Manizales entra en errupci\u00f3n,"}, {"start": 2395.2, "end": 2400.2, "text": " eso lo vine a entender yo en Islandia cuando le explican a uno"}, {"start": 2400.2, "end": 2406.2, "text": " que lo que significa un volc\u00e1n que arriba est\u00e1 taponado"}, {"start": 2406.2, "end": 2412.2, "text": " o por un claciar como es el caso de los islandeses o por un nevado como es el caso de nosotros,"}, {"start": 2412.2, "end": 2418.2, "text": " eso significa que cuando toda la lava del volc\u00e1n va a salir con la fuerza de la tierra"}, {"start": 2418.2, "end": 2426.2, "text": " en lugar de esparcirse hacia el cielo se tapona porque arriba hay un nevado"}, {"start": 2426.2, "end": 2430.2, "text": " entonces lo que hace es que descongela el agua del nevado"}, {"start": 2430.2, "end": 2435.2, "text": " que en este caso como es una formaci\u00f3n monta\u00f1osa tambi\u00e9n se llena de barro"}, {"start": 2435.2, "end": 2443.2, "text": " eso gener\u00f3 un lodo infinito que se pult\u00f3 un pueblo de la manera m\u00e1s dram\u00e1tica"}, {"start": 2443.2, "end": 2452.2, "text": " que produjo 23 mil muertos y que golpe\u00f3 varias poblaciones en caldas y en el tolima"}, {"start": 2452.2, "end": 2459.2, "text": " mumiles de desaparecidos, de damnificados y con inmenc\u00edsimas p\u00e9rdidas materiales"}, {"start": 2459.2, "end": 2465.2, "text": " eso fue la tragedia m\u00e1s grande que nosotros tenemos memoria como pa\u00eds"}, {"start": 2465.2, "end": 2471.2, "text": " o sea no no hemos vivido nada tan aterraor como la tragedia de armero"}, {"start": 2471.2, "end": 2476.2, "text": " y el pues se est\u00e1 manizales est\u00e1 muy cerca de eso porque est\u00e1 muy cerca de una de las"}, {"start": 2476.2, "end": 2481.2, "text": " de las maravillas m\u00e1s grandes que tenemos en nuestro pa\u00eds pero que a la hora de una tragedia"}, {"start": 2481.2, "end": 2486.2, "text": " esa es muy terrible es el parque de los nevados ah\u00ed al ladito est\u00e1 el parque de los nevados"}, {"start": 2486.2, "end": 2491.2, "text": " lo que hace que manizales est\u00e9n entre varios climas incluye a la nieve"}, {"start": 2491.2, "end": 2496.2, "text": " que est\u00e1is cerca cuando vino la tragedia de armero nos trataban de ayudar"}, {"start": 2496.2, "end": 2503.2, "text": " nos mandaban mucha ropa y montan tuvo que aclarar de a los franceses que no mandaran"}, {"start": 2503.2, "end": 2509.2, "text": " ropa de invierno que las poblaciones damnificadas eran poblaciones de traintagrados"}, {"start": 2509.2, "end": 2514.2, "text": " como l\u00e9rida tratando de explicar 
la imagen el problema de los pisos t\u00e9rmicos"}, {"start": 2514.2, "end": 2518.2, "text": " que no lo pueden entender los pueblos de estaci\u00f3n de todo nos mandaban chiquetas de invierno"}, {"start": 2518.2, "end": 2524.2, "text": " porque hablaban del nevado del riz se no ater o hay nieve o todo es caliente pues"}, {"start": 2524.2, "end": 2529.2, "text": " las dos cosas hay nieve y todo es caliente porque aqu\u00ed hay pisos t\u00e9rmicos"}, {"start": 2529.2, "end": 2535.2, "text": " entonces la nieve cay\u00f3 o sea el barro cay\u00f3 sobre poblaciones inmensamente c\u00e1lidas"}, {"start": 2535.2, "end": 2542.2, "text": " pero viene de una monta\u00f1a nevada que tiene arriba nieve y abajo al can"}, {"start": 2542.2, "end": 2546.2, "text": " como hago para explicarte pero aqu\u00ed todo es tierra caliente si me entiendo"}, {"start": 2546.2, "end": 2551.2, "text": " entonces estamos en ropa ropa de calor ropa de verano ni siquiera pod\u00edamos"}, {"start": 2551.2, "end": 2556.2, "text": " explicar geogr\u00e1ficamente lo que nos pas\u00f3 siquiera lo pod\u00edamos entender nosotros"}, {"start": 2556.2, "end": 2561.2, "text": " mismos yo lo vine a entender en islandia cuando me explicaron la coalici\u00f3n entre la"}, {"start": 2561.2, "end": 2567.2, "text": " lava y el hielo si y eso fue lo que pas\u00f3 ella y no se pudo hacer la feria"}, {"start": 2567.2, "end": 2573.2, "text": " de manizales porque la tragedia fue total y nos la tragedia fue tan terrible"}, {"start": 2573.2, "end": 2580.2, "text": " tan terrible que en 1986 no pudo haber feria porque est\u00e1bamos totalmente deluto"}, {"start": 2580.2, "end": 2587.2, "text": " perplejos ante el tama\u00f1o de la tragedia que acab\u00e1bamos de vivir y en la memoria"}, {"start": 2587.2, "end": 2592.2, "text": " esto todav\u00eda genera un profundo dolor recordar lo que pas\u00f3 en armero"}, {"start": 2592.2, "end": 2597.2, "text": " y tratar de explicarnos lo que pas\u00f3 en ese momento la tercera vez que no se"}, {"start": 2597.2, "end": 2601.2, "text": " pueda hacer feria pero no puede hacer feria en ninguna parte pues es por la"}, {"start": 2601.2, "end": 2606.2, "text": " por la pandemia eso toda nuestras historias de ferias ahora que las estamos"}, {"start": 2606.2, "end": 2611.2, "text": " narrando pues van a traestar atravesadas por la pandemia que en la"}, {"start": 2611.2, "end": 2617.2, "text": " medida que se proh\u00edbe el contacto entre las personas pues se proh\u00edbe la"}, {"start": 2617.2, "end": 2620.2, "text": " feria porque mire usted como hace para helar el paso de la con covid pero no"}, {"start": 2620.2, "end": 2626.2, "text": " se puede si porque palpas oble toca ponerse de acuerdo y abrazarse o"}, {"start": 2626.2, "end": 2632.2, "text": " sea el co\u00efdes una barbaridad porque todas las ferias se trata de encontrar"}, {"start": 2632.2, "end": 2639.2, "text": " se de abrazarse de pintarse de bailar paso doble de todo de toda la alegr\u00eda"}, {"start": 2639.2, "end": 2642.2, "text": " y el contacto que se trata de una feria y una celebraci\u00f3n eso no puede"}, {"start": 2642.2, "end": 2647.2, "text": " hacer durante la pandemia entonces ah\u00ed tampoco se pudo hacer entonces"}, {"start": 2647.2, "end": 2651.2, "text": " van a desarrollar una identidad con esta feria que es particularmente"}, {"start": 2651.2, "end": 2658.2, "text": " poderosa la consideran m\u00e1s suya que la misma catedral de la ciudad y"}, {"start": 2658.2, "end": 2662.2, "text": " empiezan a prepararse como toda la gente 
que vive en el afortunado de"}, {"start": 2662.2, "end": 2667.2, "text": " escenario de los lugares donde hay ferias y fiestas empiezan a prepararse"}, {"start": 2667.2, "end": 2673.2, "text": " desde diciembre y una manera de establecer los regalos de navidad"}, {"start": 2673.2, "end": 2677.2, "text": " es que a la gente se le regala plata para que gaste en la feria eso es"}, {"start": 2677.2, "end": 2683.2, "text": " una manera de de prepararte para la feria y entonces la gente se va"}, {"start": 2683.2, "end": 2689.2, "text": " poniendo de acuerdo y se va poniendo en el plan de toda esta devosi\u00f3n"}, {"start": 2689.2, "end": 2695.2, "text": " all\u00e1 la cultura y al hecho de ser ser esta feria y hoy por hoy la"}, {"start": 2695.2, "end": 2700.2, "text": " feria de manizales incorpora una cantidad de tradiciones entonces tiene"}, {"start": 2700.2, "end": 2705.2, "text": " los desfiles de las carrosas de rocillo tiene el reinado el caf\u00e9 tiene"}, {"start": 2705.2, "end": 2710.2, "text": " las exposiciones tiene mercados persas que son mercados con una"}, {"start": 2710.2, "end": 2715.2, "text": " gran cantidad de objetos diversos tiene ciert\u00e1menes deportivos tiene"}, {"start": 2715.2, "end": 2719.2, "text": " conciertos grandes conciertos o sea conciertos tremendos que"}, {"start": 2719.2, "end": 2724.2, "text": " son lo hemos visto en muchas pues desde lo desde calli barranquilla"}, {"start": 2724.2, "end": 2728.2, "text": " aqu\u00ed tambi\u00e9n hay conciertos muy importantes hay otro elemento bueno"}, {"start": 2728.2, "end": 2732.2, "text": " la trova que ya la contamos hay otro elemento que es muy importante en la"}, {"start": 2732.2, "end": 2738.2, "text": " tradici\u00f3n cafetera el tango el hecho de que gardel haya muerto en"}, {"start": 2738.2, "end": 2743.2, "text": " Medell\u00edn y el hecho de que esta gente tenga una tradici\u00f3n tanguera"}, {"start": 2743.2, "end": 2748.2, "text": " o sea el tango es una parte estructural de la cultura de la"}, {"start": 2748.2, "end": 2754.2, "text": " eje cafetera esta gente es tanguera pero tanguera profundamente"}, {"start": 2754.2, "end": 2760.2, "text": " entonces el tango tambi\u00e9n est\u00e1 incorporado a la tradici\u00f3n de la"}, {"start": 2760.2, "end": 2765.2, "text": " feria tambi\u00e9n es una es una cosa de la que la gente en la regi\u00f3n sabe"}, {"start": 2765.2, "end": 2772.2, "text": " y su alma habita el tango lo baila lo conoce mucho tambi\u00e9n en la"}, {"start": 2772.2, "end": 2776.2, "text": " feria y tango hay tradici\u00f3n cafetera pues de eso se trata la"}, {"start": 2776.2, "end": 2782.2, "text": " esencia de lo que estamos hablando y tambi\u00e9n hay eventos y tambi\u00e9n"}, {"start": 2782.2, "end": 2787.2, "text": " son frecuentes los boulevares y los paseos a los sitios representativos"}, {"start": 2787.2, "end": 2792.2, "text": " de la ciudad para rescatar patrimonio y valores culturales porque es que"}, {"start": 2792.2, "end": 2796.2, "text": " una de las cosas m\u00e1s importantes de todas las ferias y fiestas que"}, {"start": 2796.2, "end": 2801.2, "text": " nosotros narramos es el rescate de las tradiciones y de los valores"}, {"start": 2801.2, "end": 2805.2, "text": " culturales nosotros estamos hechos de todas estas tradiciones"}, {"start": 2805.2, "end": 2810.2, "text": " estamos hechos de todas estas figuras porque eso es lo que nos"}, {"start": 2810.2, "end": 2816.2, "text": " nutre y nos hace mirarnos a nosotros mismos con un sentido de"}, {"start": 2816.2, "end": 2821.2, 
"text": " val\u00eda de orgullo de pertenencia y identidad la feria de"}, {"start": 2821.2, "end": 2826.2, "text": " manizales genera en la ciudad esto un sentido de orgullo de"}, {"start": 2826.2, "end": 2831.2, "text": " pertenencia de identidad de de encontrarse en una serie de"}, {"start": 2831.2, "end": 2836.2, "text": " valores que comparten que bailan que danzan y que se ense\u00f1orean"}, {"start": 2836.2, "end": 2843.2, "text": " como yo mismo lo siente cada vez que hay una feria de manizales"}, {"start": 2843.2, "end": 2850.2, "text": " entonces desde los tiempos del paso doble desde los arrieros"}, {"start": 2850.2, "end": 2856.2, "text": " levant\u00e1ndose a trav\u00e9s de la monta\u00f1a abri\u00e9ndose camino desde la"}, {"start": 2856.2, "end": 2862.2, "text": " llegada del progreso desde el caf\u00e9 desde el cable desde las manolas"}, {"start": 2862.2, "end": 2866.2, "text": " desde las carrosas de rocillo desde la ciudad universitaria desde"}, {"start": 2866.2, "end": 2871.2, "text": " la ciudad teatro desde la ciudad en Galanada desde los carros de"}, {"start": 2871.2, "end": 2876.2, "text": " valineras a toda por las monta\u00f1as desde este paisaje \u00fanico y"}, {"start": 2876.2, "end": 2881.2, "text": " particular de estas ciudades embaradas en el centro de la construcci\u00f3n"}, {"start": 2881.2, "end": 2887.2, "text": " del imaginario del caf\u00e9 en Colombia y que se ense\u00f1orea y se"}, {"start": 2887.2, "end": 2892.2, "text": " orgullese cada vez que se presenta en una feria que ellos adoran con"}, {"start": 2892.2, "end": 2895.2, "text": " el fondo del alma en la narraci\u00f3n de Anaur\u00edbe y para ustedes"}, {"start": 2895.2, "end": 2898.2, "text": " feliz domingo"}, {"start": 2908.2, "end": 2913.2, "text": " este podcast fue posible gracias al equipo de la Casa de la historia"}, {"start": 2913.2, "end": 2919.2, "text": " de Ana Su\u00e1rez, Milena Beltr\u00e1n, Arturo Jim\u00e9nez Finha, Daniel Moreno"}, {"start": 2919.2, "end": 2924.2, "text": " Franco, grabado en los gatos estudio la edici\u00f3n y la musicalizaci\u00f3n"}, {"start": 2924.2, "end": 2929.2, "text": " de Eduardo Corredor Fonseca de Rueda sonido y contamos con Daniel Shratz"}, {"start": 2929.2, "end": 2933.2, "text": " que est\u00e1 con nosotros acompa\u00f1\u00e1ndonos de aquella adelante en ferias"}, {"start": 2933.2, "end": 2936.2, "text": " y fiestas y que lo introducimos con mucha alegr\u00eda."}, {"start": 2936.2, "end": 2942.2, "text": " En este relato tuvimos toda la colaboraci\u00f3n, la ayuda, el cari\u00f1o"}, {"start": 2942.2, "end": 2947.2, "text": " en la narraci\u00f3n de Camilo Naranjo, Diana Ram\u00edrez, Paula Giraldu y"}, {"start": 2947.2, "end": 2953.2, "text": " todos los manizale\u00f1os y manizale\u00f1as, ori que organizadores que"}, {"start": 2953.2, "end": 2957.2, "text": " participan con tanto orgullo y amor en esta feria de manizales"}, {"start": 2957.2, "end": 2962.2, "text": " del alma y siempre con la ayuda fuerte y poderosa de Santiago Espinoza"}, {"start": 2962.2, "end": 2974.2, "text": " Uribe y Laura Rojas Aponte del podcast Cozat Internet."}]
Diana Uribe
https://www.youtube.com/watch?v=CUDzFxG6cJ4
Hable con ella
#miercolesdecine #almodovar Our sponsor @MUBI gave us 30 free days of cinema on its platform; just sign up at mubi.com/dianauribe through the following link and you can enjoy a new film every day. →https://mubi.com/dianauribe?utm_source=social%20channels&utm_medium=influencer&utm_campaign=comiercolesdecine Hable con ella is a film available on MUBI that tells the story of Benigno and Marco, two opposite characters united by fate and by the long, unpredictable convalescence of the women they love. Winner of the Oscar® for Best Original Screenplay, Hable con ella rests on an improbable premise that makes it a perfect synthesis of modern melodrama. This dark take on Sleeping Beauty, where wounds and desire are indistinguishable, is a celebration of male vulnerability. Follow us on our social networks! Facebook: https://www.facebook.com/DianaUribe.fm/ Instagram: https://www.instagram.com/dianauribefm/?hl=es-la Twitter: https://twitter.com/dianauribefm?lang=es Website: https://www.dianauribe.fm
Buenas, les conté que muy bien nos está patos de ineados que tenemos una lianza con muy, les conté la vez pasada y que con esa lianza ustedes pueden entrar en el link que les vamos a dejar en este episodio de miércoles de cine y tener 30 días de sesiones de muy gratis. Eso es lo primero que tiene que tener claro y que muy pues es una plataforma de cine independiente, de cine arte, de cine clásico digamos es una manera de entrar en una alta calidad de cine, pues eso es un cine super curado. Tenía que contar, de qué vamos a hablar, vamos a hablar de un maestro en este momento, en muy hay un ciclo de todas todas todas las películas del maestro Pedro al modo bar. Pedro al modo bar es un tipo que empieza a recorrer una España en plena transformación, una España de la democracia del destape, de la movida, de la movida madrileña, digamos en esos años en que España se transforma completamente este hombre, entra con un relato absolutamente poderoso y maravilloso y empieza una carrera cinematográfica de las más sorprendentes que uno puede haber en el mundo del cine porque tiene una cantidad de relatos superambivalentes de relaciones encontradas, tiene un uso a mi manera de ver magnífico del elemento de la telenovela, que es tan esencial en hispanohamérica, tanto en España como en América Latina. Sus películas tienen giros telenovelescos, retorcidos, absurdos, llevan a límites del relato increíble a través de sus personajes, o sea tiene una trevimiento y un desparpao para manejar los temas más álgidos de la condición humana, sin dejar de tener elementos de ironía, elementos de humor, elementos grotescos, pero de todas maneras luminosos, porque su cine no es hordido, aunque haya hordidez en sus historias, su cine es pletórico de emocionalidad, de luz y en términos generales no deja de tener un amor profundo en el fondo. Entonces el escapadre hacer uno de su primera película mujeres, digamos no su primera, pero una de las primeras que fue conocida a mujeres al borde de un ataque nervioso, que es una locura con los doblajes en España, que durante mucho tiempo impedían que España viera el mundo, porque dobla el mundo que le llegaba. Son muchísimas historias, átame una mujer que se enamora de un hombre que la secuestra y le dicen que como es posible que ella esté enamorada de un hombre que la ha atado a la pata de la cama. Sí, bueno, hay una cantidad de historias, hay unas obras maestras, como todo sobre mi madre, que es de una complejidad y de una rista y una grande cantidad de relatos múltiples, absolutamente impresionantes, es toda una un paseo por la condición humana, por las sensibilidades, suceptibilidad y la subjetividad femenina, su experto en las mujeres, sin embargo en las que vamos a referir, no soy dirigible hombres. Y además va a decir que él también puede dirigir hombres y que las lágrimas de los hombres son más dramáticas y más tenaces que las de las mujeres. Y la película de la que vamos a hablar hoy dentro del mosaico de cine de Almodóvar, porque allí está volviera y está Julieta, que son asultas de las carne trémula, o sea, realmente esto es un recorrido increíble. Pero vamos a hablar de una película que tiene 20 años, que fue una película de las galardonadas, digamos, como él va entrando en el gran cine mundial, hasta convertirse en un icono y un referente del cine contemporáneo. Este es una de las que lo va a llevar a la gloria, hable con ella. 
Y hable con ella es una historia de dos hombres que sufren uno de ellos la pérdida de un amor sobre dos mujeres en coma, una que es una torera y que en medio de la digamos como de inolación de la fiesta brava con todo lo que eso tiene de ancho de largo y más sabor. Ella, el toro, la atraviesa y la dejen en coma. Y hay otra chica que también quedan como por un accidente automobilista, digamos, por una pasada de una calle sin cuidado. Y ellas entran en la clínica y en la clínica se van a encontrar dos hombres a través de quienes uno entra en el relato de la pérdida de estas mujeres. Grandineti, que es el argentino, el del laoscuro del corazón, que hace el papel del periobistar argentino, que amaba a Lídia, la que está en coma la torera. Y él, digamos, sufre toda la perdida, la enfermedad, el dolor, la muerte, toda la cantidad de cosas que que pasan, pero nuestro personaje siempre al modo de dar pone un personaje que es a través de quién vamos siempre a donar pone un personaje que es a través de quién vamos a problematizarnos en la complejidad del alma humana. O sea, él siempre tiene una cantidad de personajes a veces los trabaja todos simultáneamente a veces hay uno donde nos pone realmente el cuestionamiento de la complejidad de los sentimientos humanos. Aquí es Benicdo. Benic no es un enfermero, un tipo solitario que toda la vida cuidó a su madre, que no ha tenido relaciones y que despliega toda su ternura, toda su capacidad de cuidado y toda su capacidad de afecto, sobrealicia una mujer en coma, la quiere, la pinta le corte el pelo, le corta la sueña, divino él y la mujer en coma y la viola y la mujer en coma y ella queda embarazada y la mujer en coma y resulta que él todo lo hace desde el mayor de los cariños con una belleza y con una ternura que no dice pero que es esto que nos están mostrando y resulta que él dice que yo son la pareja perfecta porque no tienen ni un no, nunca pelea y por supuesto como no, pero si la otra es un objeto inerte, entonces no le lleva la contraria, no le discute, no le pelea, es una motrafora perturbadora de lo que pretende alguien que sea una relación con una persona que solamente sea el objeto de sus fantasías de sus anelos pero no un ser humano de carne y hueso, sino alguien a quien pueda marre de su más extrema fantasía objetivizar al otro para que sea solamente una parte de mis propios deseos y sin embargo esto es con una ternura, entonces aún no lo pone, él crea estas situaciones súper ambivalentes en las cuales con el mayor de los cariños está ejerciendo una patología tenaz y no dice pero como nos van a echar este cuento y en la escena digamos en la escena como de la violación nos lleva a otra escena de otro cine de una figura de un novietito pequeñito que se mete dentro de una científica y que la hace tener orgasmos, entonces al modo a lo que hace es que nos expande los ciclos de la realidad y nos hace meternos en un universo sin pensables y lo hace de una manera tan magistral y lo lleva a uno a marcos y a lugares donde uno jamás se atrevería a llegar y al mudurlo lleva uno con toda naturalidad con personajes llenos de ternura, entonces esto lida entre la cosa, la cosificación, la violación es la extremo cuidado, la ternura, el cariñito y uno no sabe qué fue lo que se comió ahí en ese indiesción de sentimientos que le producían o y cuando lo otro dice pero es que no puedo el otro que no puede con el dolor de su mujer y coma le dice habla con ella, pero si está en coma y si pero quien dice que no nos oye habla con ella qué tipo de diálogo es ese, como 
va a conversar con una persona que está en coma que sabemos de lo que está pasando ahí y esto abordado es de la mayor de las ternuras desde la mayor de las solidades desde la mayor de las decepciones amorosas que cada una tenido de la vida todo lo hace caledoscópico múltiples complejos, nada de blanco y negro en una película de almodobar y el elemento de la telenovela siempre está ahí cuando puede llevarnos al absurdo de la tragedia y a la tragedia del absurdo y todo va cambiando de color en la medida en que se desarrolla la película meterse en una película de almodobar es meterse en una aventura peligrosa y extrema de los abismos de la alma humana con un humor pintoresco en la mayoría de ellas y con finales inesperados es atreverse a cualquier cosa para la gente que no se ha metido en la aventura de almodobar esta es una invitación peligrosa y extrema y para los que lo hemos disfrutado toda la vida es volver a pasear por entre el cine de un maestro de maestros hoy en miércoles de cine el maestro Pedro al modobar y habló con él.
[{"start": 0.0, "end": 18.88, "text": " Buenas, les cont\u00e9 que muy bien nos est\u00e1 patos de ineados que tenemos una"}, {"start": 18.88, "end": 25.88, "text": " lianza con muy, les cont\u00e9 la vez pasada y que con esa lianza ustedes pueden entrar"}, {"start": 25.88, "end": 33.72, "text": " en el link que les vamos a dejar en este episodio de mi\u00e9rcoles de cine y tener"}, {"start": 33.72, "end": 41.8, "text": " 30 d\u00edas de sesiones de muy gratis. Eso es lo primero que tiene que tener claro y que muy"}, {"start": 41.8, "end": 49.120000000000005, "text": " pues es una plataforma de cine independiente, de cine arte, de cine cl\u00e1sico digamos es una"}, {"start": 49.12, "end": 78.64, "text": " manera de entrar en una alta calidad de cine, pues eso es un cine super curado."}, {"start": 78.64, "end": 86.08, "text": " Ten\u00eda que contar, de qu\u00e9 vamos a hablar, vamos a hablar de un maestro en este momento,"}, {"start": 86.08, "end": 94.42, "text": " en muy hay un ciclo de todas todas todas las pel\u00edculas del maestro Pedro al modo bar."}, {"start": 94.42, "end": 101.52, "text": " Pedro al modo bar es un tipo que empieza a recorrer una Espa\u00f1a en plena transformaci\u00f3n,"}, {"start": 101.52, "end": 109.64, "text": " una Espa\u00f1a de la democracia del destape, de la movida, de la movida madrile\u00f1a, digamos"}, {"start": 109.64, "end": 115.97999999999999, "text": " en esos a\u00f1os en que Espa\u00f1a se transforma completamente este hombre, entra con un relato"}, {"start": 115.97999999999999, "end": 122.88, "text": " absolutamente poderoso y maravilloso y empieza una carrera cinematogr\u00e1fica de las m\u00e1s"}, {"start": 122.88, "end": 130.06, "text": " sorprendentes que uno puede haber en el mundo del cine porque tiene una cantidad de"}, {"start": 130.06, "end": 140.88, "text": " relatos superambivalentes de relaciones encontradas, tiene un uso a mi manera de ver magn\u00edfico"}, {"start": 140.88, "end": 147.28, "text": " del elemento de la telenovela, que es tan esencial en hispanoham\u00e9rica, tanto en Espa\u00f1a como"}, {"start": 147.28, "end": 157.48000000000002, "text": " en Am\u00e9rica Latina. Sus pel\u00edculas tienen giros telenovelescos, retorcidos, absurdos,"}, {"start": 157.48, "end": 164.56, "text": " llevan a l\u00edmites del relato incre\u00edble a trav\u00e9s de sus personajes, o sea tiene una"}, {"start": 164.56, "end": 173.16, "text": " trevimiento y un desparpao para manejar los temas m\u00e1s \u00e1lgidos de la condici\u00f3n humana,"}, {"start": 173.16, "end": 181.64, "text": " sin dejar de tener elementos de iron\u00eda, elementos de humor, elementos grotescos, pero de todas"}, {"start": 181.64, "end": 188.79999999999998, "text": " maneras luminosos, porque su cine no es hordido, aunque haya hordidez en sus historias,"}, {"start": 188.79999999999998, "end": 195.64, "text": " su cine es plet\u00f3rico de emocionalidad, de luz y en t\u00e9rminos generales no deja de tener"}, {"start": 195.64, "end": 201.44, "text": " un amor profundo en el fondo. Entonces el escapadre hacer uno de su primera pel\u00edcula"}, {"start": 201.44, "end": 205.88, "text": " mujeres, digamos no su primera, pero una de las primeras que fue conocida a mujeres"}, {"start": 205.88, "end": 212.12, "text": " al borde de un ataque nervioso, que es una locura con los doblajes en Espa\u00f1a, que durante mucho"}, {"start": 212.12, "end": 218.12, "text": " tiempo imped\u00edan que Espa\u00f1a viera el mundo, porque dobla el mundo que le llegaba. 
Son much\u00edsimas"}, {"start": 218.12, "end": 224.2, "text": " historias, \u00e1tame una mujer que se enamora de un hombre que la secuestra y le dicen que"}, {"start": 224.2, "end": 229.16, "text": " como es posible que ella est\u00e9 enamorada de un hombre que la ha atado a la pata de la cama."}, {"start": 229.16, "end": 236.56, "text": " S\u00ed, bueno, hay una cantidad de historias, hay unas obras maestras, como todo sobre mi madre,"}, {"start": 236.56, "end": 242.0, "text": " que es de una complejidad y de una rista y una grande cantidad de relatos m\u00faltiples,"}, {"start": 242.0, "end": 249.92, "text": " absolutamente impresionantes, es toda una un paseo por la condici\u00f3n humana, por las sensibilidades,"}, {"start": 249.92, "end": 256.12, "text": " suceptibilidad y la subjetividad femenina, su experto en las mujeres, sin embargo en las"}, {"start": 256.12, "end": 263.08, "text": " que vamos a referir, no soy dirigible hombres. Y adem\u00e1s va a decir que \u00e9l tambi\u00e9n puede dirigir"}, {"start": 263.08, "end": 270.6, "text": " hombres y que las l\u00e1grimas de los hombres son m\u00e1s dram\u00e1ticas y m\u00e1s tenaces que las de las"}, {"start": 270.6, "end": 277.28000000000003, "text": " mujeres. Y la pel\u00edcula de la que vamos a hablar hoy dentro del mosaico de cine de Almod\u00f3var,"}, {"start": 277.28000000000003, "end": 283.96, "text": " porque all\u00ed est\u00e1 volviera y est\u00e1 Julieta, que son asultas de las carne tr\u00e9mula, o sea,"}, {"start": 283.96, "end": 291.44, "text": " realmente esto es un recorrido incre\u00edble. Pero vamos a hablar de una pel\u00edcula que tiene 20 a\u00f1os,"}, {"start": 291.44, "end": 299.91999999999996, "text": " que fue una pel\u00edcula de las galardonadas, digamos, como \u00e9l va entrando en el gran cine mundial,"}, {"start": 299.91999999999996, "end": 306.79999999999995, "text": " hasta convertirse en un icono y un referente del cine contempor\u00e1neo. Este es una de las que lo"}, {"start": 306.8, "end": 315.12, "text": " va a llevar a la gloria, hable con ella. Y hable con ella es una historia de dos hombres que"}, {"start": 315.12, "end": 324.72, "text": " sufren uno de ellos la p\u00e9rdida de un amor sobre dos mujeres en coma, una que es una torera y que en"}, {"start": 324.72, "end": 331.28000000000003, "text": " medio de la digamos como de inolaci\u00f3n de la fiesta brava con todo lo que eso tiene de ancho de"}, {"start": 331.28, "end": 342.71999999999997, "text": " largo y m\u00e1s sabor. Ella, el toro, la atraviesa y la dejen en coma. Y hay otra chica que tambi\u00e9n quedan"}, {"start": 342.71999999999997, "end": 352.28, "text": " como por un accidente automobilista, digamos, por una pasada de una calle sin cuidado. Y ellas"}, {"start": 352.28, "end": 358.03999999999996, "text": " entran en la cl\u00ednica y en la cl\u00ednica se van a encontrar dos hombres a trav\u00e9s de quienes uno entra"}, {"start": 358.04, "end": 365.72, "text": " en el relato de la p\u00e9rdida de estas mujeres. 
Grandineti, que es el argentino, el del laoscuro del"}, {"start": 365.72, "end": 371.96000000000004, "text": " coraz\u00f3n, que hace el papel del periobistar argentino, que amaba a L\u00eddia, la que est\u00e1 en coma la torera."}, {"start": 373.20000000000005, "end": 380.68, "text": " Y \u00e9l, digamos, sufre toda la perdida, la enfermedad, el dolor, la muerte, toda la cantidad de cosas que"}, {"start": 380.68, "end": 389.04, "text": " que pasan, pero nuestro personaje siempre al modo de dar pone un personaje que es a trav\u00e9s de"}, {"start": 389.04, "end": 398.0, "text": " qui\u00e9n vamos siempre a donar pone un personaje que es a trav\u00e9s de qui\u00e9n vamos a problematizarnos"}, {"start": 398.0, "end": 404.36, "text": " en la complejidad del alma humana. O sea, \u00e9l siempre tiene una cantidad de personajes a veces"}, {"start": 404.36, "end": 411.2, "text": " los trabaja todos simult\u00e1neamente a veces hay uno donde nos pone realmente el cuestionamiento"}, {"start": 411.2, "end": 418.28000000000003, "text": " de la complejidad de los sentimientos humanos. Aqu\u00ed es Benicdo. Benic no es un enfermero, un tipo"}, {"start": 418.28000000000003, "end": 426.76, "text": " solitario que toda la vida cuid\u00f3 a su madre, que no ha tenido relaciones y que despliega toda su"}, {"start": 426.76, "end": 435.0, "text": " ternura, toda su capacidad de cuidado y toda su capacidad de afecto, sobrealicia una mujer en coma,"}, {"start": 435.0, "end": 445.36, "text": " la quiere, la pinta le corte el pelo, le corta la sue\u00f1a, divino \u00e9l y la mujer en coma y la viola"}, {"start": 445.36, "end": 453.92, "text": " y la mujer en coma y ella queda embarazada y la mujer en coma y resulta que \u00e9l todo lo hace desde"}, {"start": 453.92, "end": 459.6, "text": " el mayor de los cari\u00f1os con una belleza y con una ternura que no dice pero que es esto que nos"}, {"start": 459.6, "end": 467.56, "text": " est\u00e1n mostrando y resulta que \u00e9l dice que yo son la pareja perfecta porque no tienen ni un"}, {"start": 467.56, "end": 475.88, "text": " no, nunca pelea y por supuesto como no, pero si la otra es un objeto inerte, entonces no le lleva"}, {"start": 475.88, "end": 486.2, "text": " la contraria, no le discute, no le pelea, es una motrafora perturbadora de lo que pretende"}, {"start": 486.2, "end": 491.88, "text": " alguien que sea una relaci\u00f3n con una persona que solamente sea el objeto de sus fantas\u00edas de sus"}, {"start": 491.88, "end": 496.92, "text": " anelos pero no un ser humano de carne y hueso, sino alguien a quien pueda marre de su m\u00e1s"}, {"start": 496.92, "end": 503.92, "text": " extrema fantas\u00eda objetivizar al otro para que sea solamente una parte de mis propios deseos y"}, {"start": 503.92, "end": 510.8, "text": " sin embargo esto es con una ternura, entonces a\u00fan no lo pone, \u00e9l crea estas situaciones s\u00faper"}, {"start": 510.8, "end": 519.44, "text": " ambivalentes en las cuales con el mayor de los cari\u00f1os est\u00e1 ejerciendo una patolog\u00eda tenaz"}, {"start": 520.5600000000001, "end": 527.0, "text": " y no dice pero como nos van a echar este cuento y en la escena digamos en la escena como de la"}, {"start": 527.0, "end": 535.56, "text": " violaci\u00f3n nos lleva a otra escena de otro cine de una figura de un novietito peque\u00f1ito que se"}, {"start": 535.56, "end": 543.08, "text": " mete dentro de una cient\u00edfica y que la hace tener orgasmos, entonces al modo a lo que hace es que"}, {"start": 543.08, "end": 551.24, 
"text": " nos expande los ciclos de la realidad y nos hace meternos en un universo sin pensables y lo hace de"}, {"start": 551.24, "end": 559.04, "text": " una manera tan magistral y lo lleva a uno a marcos y a lugares donde uno jam\u00e1s se atrever\u00eda a llegar"}, {"start": 559.04, "end": 565.96, "text": " y al mudurlo lleva uno con toda naturalidad con personajes llenos de ternura, entonces esto"}, {"start": 565.96, "end": 573.4, "text": " lida entre la cosa, la cosificaci\u00f3n, la violaci\u00f3n es la extremo cuidado, la ternura, el cari\u00f1ito y uno"}, {"start": 573.4, "end": 581.56, "text": " no sabe qu\u00e9 fue lo que se comi\u00f3 ah\u00ed en ese indiesci\u00f3n de sentimientos que le produc\u00edan o y cuando"}, {"start": 581.56, "end": 586.84, "text": " lo otro dice pero es que no puedo el otro que no puede con el dolor de su mujer y coma le dice"}, {"start": 586.84, "end": 595.56, "text": " habla con ella, pero si est\u00e1 en coma y si pero quien dice que no nos oye habla con ella qu\u00e9 tipo de"}, {"start": 595.56, "end": 602.0, "text": " di\u00e1logo es ese, como va a conversar con una persona que est\u00e1 en coma que sabemos de lo que est\u00e1"}, {"start": 602.0, "end": 609.04, "text": " pasando ah\u00ed y esto abordado es de la mayor de las ternuras desde la mayor de las solidades desde"}, {"start": 609.04, "end": 616.88, "text": " la mayor de las decepciones amorosas que cada una tenido de la vida todo lo hace caledosc\u00f3pico"}, {"start": 616.88, "end": 623.2, "text": " m\u00faltiples complejos, nada de blanco y negro en una pel\u00edcula de almodobar y el elemento de la"}, {"start": 623.2, "end": 629.72, "text": " telenovela siempre est\u00e1 ah\u00ed cuando puede llevarnos al absurdo de la tragedia y a la tragedia"}, {"start": 629.72, "end": 637.12, "text": " del absurdo y todo va cambiando de color en la medida en que se desarrolla la pel\u00edcula meterse en"}, {"start": 637.12, "end": 643.36, "text": " una pel\u00edcula de almodobar es meterse en una aventura peligrosa y extrema de los abismos de"}, {"start": 643.36, "end": 651.32, "text": " la alma humana con un humor pintoresco en la mayor\u00eda de ellas y con finales inesperados es atreverse a"}, {"start": 651.32, "end": 657.6, "text": " cualquier cosa para la gente que no se ha metido en la aventura de almodobar esta es una invitaci\u00f3n"}, {"start": 657.6, "end": 665.9200000000001, "text": " peligrosa y extrema y para los que lo hemos disfrutado toda la vida es volver a pasear por entre"}, {"start": 665.92, "end": 695.88, "text": " el cine de un maestro de maestros hoy en mi\u00e9rcoles de cine el maestro Pedro al modobar y habl\u00f3 con \u00e9l."}]
Diana Uribe
https://www.youtube.com/watch?v=1ZDKBghlo-g
Festival y Carnaval de la Subienda de Honda
#podcastdianauribe #honda This time in our series on Colombia's fairs and festivals, our destination is a celebration held in honor of a river that has shaped our identity and our history: the Magdalena. We will talk about the Festival y Carnaval de la Subienda in Honda, Tolima. This Tolima river-port town holds some of the most important stories and traditions of the Magdalena, the river that defines the identity of the heart of the Colombian nation. We will tell stories of fish, fishermen, beauty queens, mohanes and bridges. And there will also be time to piece together the story of a region and a country that have grown up on the banks of a river. Episode notes: Alfredo Molano wrote this beautiful column recording some of the things that have passed along the waters of the Magdalena →https://www.elespectador.com/opinion/columnistas/alfredo-molano-bravo/las-aguas-del-magdalena-column-612308/ The importance of Honda in the history of Colombia →https://www.revistacredencial.com/historia/temas/honda The blog of professor Tiberio Murcia Godoy, a complete encyclopedia of everything that has happened and happens in Honda →http://tiberiomurciagodoy.blogspot.com/ «La Subienda», the documentary by Luis Ernesto Arocha and Álvaro Cepeda Samudio that portrayed this phenomenon, which once drew fishermen from all over the country; it also shows one of the earliest editions of the Festival de la Subienda →https://youtu.be/-Q2CST5L9oY «Colombia es un regalo del río Magdalena»: speaking about his book, anthropologist Wade Davis explains Colombian history through the waters of the Magdalena →https://diariocriterio.com/colombia-es-un-regalo-del-rio-magdalena-wade-davis/ Some more facts about the Carnaval y Reinado de la Subienda https://www.colombia.com/turismo/ferias-y-fiestas/carnaval-y-reinado-la-subienda/ Follow us on our social networks! Facebook: https://www.facebook.com/DianaUribe.fm/ Instagram: https://www.instagram.com/dianauribefm/?hl=es-la Twitter: https://twitter.com/dianauribefm?lang=es Website: https://www.dianauribe.fm
¡Buenas! Hoy nos vamos a meter con un carnaval para el cual tenemos que tocar toda la arteria fundamental. La parte más importante que nos ha ayudado a nosotros en toda nuestra historia de país. Vamos a meternos con el río Magdalena y vamos a meternos con el carnaval de la subienda en onda. Y al meternos con el carnaval de la subienda nos vamos a meter con una ciudad histórica donde están comunicados una cantidad de puntos diferentes de la historia de este país y todo está comunicado por el río Magdalena. Entonces este es un carnaval que se distingue de todas las demás fiestas o cediferencia de todas las demás fiestas que hemos tratado en nuestra camino por esta construcción de ferias y fiestas y carnavales que no se hace a un hito cultural propiamente dicho sino a la naturaleza. Es un festival que se hace para honrar un fenómeno natural del río Magdalena que se llama la subienda y es el carnaval de la subienda en onda. La subienda es ese momento en que las aguas de la trámite se calientan mucho de la parte de arriba del Magdalena se calientan mucho y entonces es una gran cantidad de peces de un montón de especies. Empiezan una travesía hasta llegar a casi a los orígenes del río atravesando a los rápidos y llegando en grandes cantidades. Ese fenómeno que existe desde tiempos inmemoriales va a ser el objeto de la conmemoración y del festival que en su nombre lleva el nombre de festival de la subienda. La subienda es ese momento en que las aguas de la trámite se calientan mucho y en grandes cantidades y el objeto de la trámite se calientan mucho y llegando en grandes cantidades y las aguas de la trámite se calientan mucho y el objeto de la trámite se calientan mucho eternos con el río, porque el río realmente es el protagonista, es el que hace posible el festival, es el que le dio toda la importancia histórica a onda y que nos comunica esta grandeza histórica que existió en otro tiempo entre onda y barranquilla, los dos extremos, del río Magdalena en su desarrollo histórico, por donde atravesó una gran cantidad de poblaciones todavía está ahí, pero en un momento todo fue el polo de desarrollo, por ahí entraron los pianos, por ahí entraron, una gran cantidad de historias, la cultura árabe, muchísimas costos todavía en onda y casas que tienen letreros en árabe, de todo lo que significó durante tantos años, esta arteria fluvial, que estuvo, digamos, moldeando nuestra historia con su largo y enorme recorrido, el río Magdalena se extiende por más de 1500 kilómetros del interior del actual Colombia, porque es mucho más viejo el que todos nosotros y todos los que contamos el cuento y las aguas serpentean entre las cordilleras central y oriental hasta encontrarse arriba, ya en la costa con las abanas y las cienagas del caribe colombiano, nace en un paramo como la mayoría de nuestros ríos, nace en un paramo que comparten los departamentos del huila y el cauca, exactamente en la laguna de Jumma muy, por eso uno de los nombres que tiene es Juma, después de un recorrido enorme, va a llegar a un punto de esa invocadura que se conoce como las bocas de ceniza, cerca de barranquilla y el tamaño del río hace que se diga regionalmente entre secciones geográficas, el alto Magdalena, el Magdalena medio y el bajo Magdalena, el mismo también es un río región, porque los departamentos que están a la vera del Magdalena generan en sí mismo regiones, culturas, toda una Colombia dentro de las colombias, una región dentro de las regiones, es decir cada uno de esos departamentos pertenece a una región geográfica, pero 
al estar a la vera del Magdalena ellos van a crear también una propia articulación cultural y geográfica distinta como hemos visto que entre nuestras regiones hay subregiones, ahí es decir espacios culturales y geográficos propios dentro de otros espacios regionales, en esta complejidad regional que es Colombia y que estamos abordando en nuestras ferias y fiestas, entonces hay un río región que son todos los pueblos que están a la vera del Magdalena, este río afectado a la geología, la biología, la geografía, es decir este río nos define, este río nos construye, este río es absolutamente importante en un país donde los ríos son descomunales, donde tenemos unos ríos absolutamente impresionantes, porque acuerdes de que no te molen a más zonas el orinoco, el cauca el atarato, es decir nosotros tenemos una hidrografía descomunal y maravillosa a nosotros nos parece casi normal no lo es, muchos puntos de la tierra son secos, esta maravilla y esta diversidad de bosques secos tropicales y todo esto que están alrededor del Magdalena son un milagro de la naturaleza y una dada diva que nos ha dado digamos la geografía inmensa y maravillosa de nuestro país, entonces esta biodiversidad genera una cantidad de ecosistemas de microclimas, modifica el paisaje desde la cuenca hasta las aguas y ha sostenido poblaciones humanas desde tiempos inmemoriales, desde los pueblos originales, desde los tiempos de las leyendas, por lo tanto ha tenido muchos nombres a lo largo de su causa se le ha llamado el guacahayo, horrió de las tumbas, se le ha llamado en el alto Magdalena Juma, horrió del país amigo, cerca de las poblaciones muy escas se le llama Arli o Arbi, río el pez, horrió del boca chico en el Magdalena medio y en el caribe Caracali el gran río de los caimanes o Cariwanya Agua Grande, hace parte de un país que tenemos en Colombia que es un país anfibio que se ha caracterizado porque todas su existencia está en el río y ha sido testigo de todo, del progreso, del desarrollo, de la navegación, de la época de que estaban los ferrocarriles y se comunicaba todo el país por el río del ferrocarril ha sido también testigo de los episodios de conflicto armado y violencia tanto que una parte del río que se conoce como el Magdalena medio fue en un momento epicentro de uno de los momentos más complicados y más duros del conflicto el río conflicto colombiano que pues necesariamente atraviesa nuestros relatos porque nos ha atravesado durante muchísimo tiempo para todas estas historias tuvimos la fortuna de contar en la feria del libro en onda con una conferencia de nuestro gran mentor y digamos adorador de este río Wade Davis que ha escrito el libro sobre el Magdalena y con Germán Ferro que es el que ha creado el museo del que hablaremos el museo del río de onda que es una cosa bien importante para conocer y nos desea Wade Davis con toda una vida atravesando el río conociendo sus poblaciones conociendo los ribereños los pescadores las historias los carnavales las fiestas la vida cotidiana nos decía que unos mamos le habían comentado que la paz en Colombia se logra cuando se logre recuperar la totalidad del río Magdalena que el río Magdalena es no solamente la arteria geográfica sino el centro espiritual del país y que es a través del Magdalena que se puede recuperar todo el sentido de paz que estamos tan duramente luchando por conseguir el dice que el secreto está en el Magdalena y que fueron los mamos los que le contaron que ese río es verdaderamente nuestra identidad y nuestra fuente de espiritualidad entonces por eso 
honramos al río agradecemos al Magdalena y su participación para contarnos en la experiencia increíble que ha aplasmado a través de sus libros y al Magdalena se van ferro toda la historia del museo del río y vamos para nuestro carnaval que es en la subyenda pero ahí nos vamos a meter también con onda y la población de onda tiene su propia y maravillosa historia ella fue fundada en 1539 de las bien antiguas San Bartolome de onda la villa de San Bartolome de Begunda en la tierra donde vivía la hernia de los panches y se inspiraron para llamarla así en los ondimas que era el pueblo panche que vivía ahí que quedaba en la cercanía la actual rey onda de onda es importantísima de hace miles de años el punto en que clonfluyen lo rápido o sea unos raudales que hacen que sea importantísima y se aclave dentro del transporte del río entonces las condiciones geográficas hacen que cuando llegué al salto de onda se produzca una barrera natural para la navegación y ya en las últimas décadas del siglo XVI había establecido ahí un fuerte fluvial que era clave para los españoles para comunicarse con el interior de lo que en esa época ellos llamaron el nuevo reino de granada en el caribe y en el resto del mundo entonces para ellos era clave o sea desde ahí lo fue siempre pero también lo fue durante la época del imperio español y entonces por votivos de su estratégica ubicación geográfica onda fue una de las poblaciones más importantes en la época colonial y esto explica el origen el mocísimo de su arquitectura ella tiene una arquitectura colonial una serie de joyas en la misma estructura de la ciudad de joyas y de puentes que a lo largo de toda su renovada importancia histórica se han visto remozadas en diferentes etapas de nuestra vida como país y hay calles que se asemejan a Cartagena o a Montpós porque corresponden a esos puntos estratégicos de desarrollo colonial en tiempo del imperio español en nuestra historia en la época republicana después de las independencias sigue siendo muy importante onda en el siglo XX ya comienzo del siglo XX hasta mediados la ciudad sigue siendo súper importante es una ciudad de prosperidad va a ser una ciudad de industrias va a tener fábrica serbeserías las serbeserías todavía son puntos históricos de recuerdo y también va a tener imparentas importantísimas el aeródromo que es muy importante en el orgullo de la ciudad porque fue pionero antes que muchas otras partes del país y una élite política que no solamente fue importante dentro de la historia de la ciudad sino dentro de la historia del país ahí es donde tenemos la cuna de alfonso lupes pumarejo y muchas otras figuras que tendrían una importancia fundamental en nuestro relato político como país y también va a tener una cantidad de momentos de la historia de la construcción de la historia republicana la banda sonora es del lucho vermudes porque el lucho vermudes fue el que estuvo siempre musicalizando las inauguraciones de la época en que estábamos construyendo un país que buscaba la modernidad un país en tiempos de los ferrocarriles de la flota mercante gran colombiana en tiempos de la navegación del maidalena como puente fundamental de desarrollo leamos nosotros y hicimos un país que el llegó hasta un punto y después nos va a través de saber conflicto mucho tiempo entonces estamos tratando de salir de ahí y seguir haciendo países como la idea pero seguimos haciendo fiestas antes durante y después aquí empieza el espacio comercial el moan y los pescadores se encuentran en la alegría del carnaval y reinado de la subyenda 
en onda en esta fiesta se celebra al río maigdalena un río que marca el ritmo de la vida de nuestro país descubre en la onda sonora los sonidos que acompañan la vida cotidiana de Colombia del unes abiernes desde las 10 de la mañana en Radio Nacional de Colombia estamos donde tú estás entonces a dios momento en que el vapor era la forma de navegación para llegar a Bogotá por llegar unos inmigrantes y por ahí llegaban toda la gran migración árabe por eso les digo que hay sitios en onda que tienen casas donde mantienen todavía los letreros en árabe por ahí se conectaba a la industria por ahí se conectaba todo ese proyecto de país y se movía de un lado al otro llegaban a onda de ahí van a villeta de villeta y van a entrenar esta Bogotá después las carreteras y la la aviación le fueron quitando relevancia a esta manera de comunicación a través del río y el ferrocarril y entonces lo que hacía que onda conectara directamente tanto al caribe como a la estación de la sabana y en esa medida fuera un punto de partida y un punto de encuentro entre todos los diferentes dinámicas de modernidad y desarrollo que estaba teniendo el país y que era la época también dorada de barranquilla estas son dos ciudades hermanas y son hermanadas por el río entonces tienen un efecto digamos simultáneo la gran época de la puerta de oro de Colombia de barranquilla también lo va a hacer de onda y luego un tiempo en que todo ese esplendor se viene un poco a menos en barranquilla también eso va a pasar en onda y empiezan a hacer menos importantes sobre todo onda para la economía nacional en la medida en que se empiece a la espalda al río cuando le empezamos a dar la espalda al río le damos también la espalda toda la historia a toda la cantidad de bagaje y de relato que hay de nuestra historia en onda haya y casas donde estuvo la expedición botánica está eso todavía hay casas también donde nació al fósforo pésimo marejo que fue uno de los primer reformadores sociales en Colombia se ha habido un relato de pais que está en onda y que ahora está reverdesiendo y está teniendo un nuevo aire y una nueva vida y una nueva importancia cultural y está digamos como volviendo a narrarse y a mirarse en la historia la importancia de onda en nuestra cultura pero ya siempre lo ha sido es un punto de relato en la colonia en la república en el desarrollo en la modernidad en las ideas políticas en la misma expedición es usted puede rastrear diferentes momentos de nuestra historia como país a través de las huellas de relato que quedan en onda no le digo que hay hay huellas de los árabes de la expedición botánica de la colonia de la formación de la cidad republicana de la industria de la época dorada del río Magdalena todo eso está nonda entonces empezamos por resaltar al Magdalena a quien rendimos tributo con esta fiesta a la ciudad de onda a quien rendimos un enorme reconocimiento histórico y al festival de la subienda que es la que nos convoca para atraer en el marco de estas historias un fenómeno natural único y particular de nuestra ya singular y particularísima geografía de este país exuberante y maravilloso en el que hemos tenido la suerte de nacer para contemplar en él todas sus posibilidades y maravillas entonces resulta que a este fenómeno se le conoce con el nombre del maná Riveraño porque es una cosa prodigiosa y esa comienzo del año en el verano en la época más calidad del año para nosotros las estaciones son cosas muy diferentes a lo que son las estaciones de los pueblos que están marcados por la forma como el invierno el verano o 
la primavera aparecen en su guía lo largo del año nosotros no tenemos este tipo de estaciones pero sí tenemos otras tenemos estaciones de verano que es cuando el bosque se cotropical entra como una especie de invernación para volver a florescer en tiempo de lluvias entonces cuando viene en la época más cálida ahí es cuando lo digo que las aguas y las cienagas del país se calientan y empiezan a perder oxígeno y esto hace que muchísimas especies no se sientan cómo hacen ese calor o no qué hace y arranquen a buscar aguas más frescas y empiezan a buscar el sur y remonte en el río y cuando remontan el río van alcanzando sumadores sexuales decir cuando empieza toda la la gran búsqueda de estas aguas son pequeñitos y en la medida que remontan el río se van desarrollando sexualmente y ahí vienen los bocachicolos ni curos los tolónvalos vagres y muchas otras especies que empiezan este gran viajaga de cuenta como las grandes migraciones de las aves pero son migraciones de peces en busca de estas aguas entonces ellos empiezan a buscar esas caudales de agua pero se van a encontrar en el magdalena a la altura de onda con que hay unos rápidos y hay unos bancos entonces en ese punto se digamos se divide todo ese jardúmen que viene de todo a esta parte del río y ahí no pueden circular dentro del río por lo cual se van hacia los lados a la ribera en la ribera es donde los esperan los pescadores que están atabiados de las atarrallas de los chinchorros de los congolos y a través de todas estas prácticas que requieren una enorme pericia los capturan con facilidad y dan toda una forma de vida a esta cultura antigua de los pescadores del río cuando se acercan a la ribera se acercan en unas grandes cantidades que producen o está en milagro del maná riberaño y esta práctica está antigua que los arqueólogos han encontrado testimonios de sociedades pesqueras que habitaban en las orillas de onda desde el año 500 antes de nuestra era entonces esto es una maravilla que produce una sostenibilidad alimentaria lo ha producido a lo largo de muchos siglos y hace que toda la época de la dominación espayola de la república y la época actual hace que los habitantes riberaños sean como la la parte fundamental de la cultura del río y de la cultura de onda efectiva de la subiente es entrenero y abril que es cuando se presenta el fenómeno y en ese momento onda en sus mejores épocas llegaba a quinto aplicar la población y los pescadores se iban todos para allá Colombia es un país que ha habido de cosechas y de grandes bonanzas la subiendera como una bonanza es como una bonanza cada año y atrae un montón de gente que va a llegar a vivir este milagro de prosperidad que ha significado la subienda y eso les permitía en las mejores épocas comprar casa solamente con lo que hacían en una sola subienda y son pues una cantidad de bonanza a las que se van a dar allá en el marco de este fenómeno antes de los años 50 onda quería celebrar los espíritus del río y de la bonanza pero esa partir de 1962 que se produce el primer carnaval que se organiza ya apropiamente el carnaval de la subienda y la fiesta nace para honrar la riqueza que representa la subienda la albricia de este fenómeno maravilloso es una manera de emular el espíritu del carnaval y de las fiestas acuerde que todas las ciudades que hemos visto en nuestras historias cuando sienten la necesidad de reiterar y de darle una representación de importancia de la ciudad crean una fiesta y crean un carnaval eso lo hemos visto a lo largo de todos estos relatos y onda no va a ser la 
excepción entonces ellos hacen su carnaval y el carnaval se inspira en las mismas fechas del carnaval de barranquilla y muchos barranquilleros arriba a la ciudad a lo largo del siglo XX porque entre las cosas porque barranquilla es como la el carnaval y signa de donde se toman todos y por esta conexión entre onda y barranquilla entonces eso tiene su reinado y el reinado está he dado por los barrios de la ciudad y cada barrio lleva su candidata y hay rivalidades legendarias entre barrios que siempre han estado compitiendo por el barrio Bogotá el barrio de Santa Lucía que nos contaba el maestro tiberio murcia de cómo esto va generando digamos vieja rivalidades históricas alrededor de la reina y la manera como se lige la reina es muy bonita porque a la reina le mandan cartas de amor cada carta de amor es equivalente a un voto entonces la que recibe la mayor cantidad de cartas de amor es la que va a ganar y esto en otra época tenía un despliegue importante si salían la revista corromos el mundo estaba pendiente de cual era la que había ganado la subienda el pueblo todo participaba en la elección de las candidatas y el carnaval tiene su reinado y en todas las fiestas hemos visto en la importancia del reinado y aquí también lo tiene pero también hacen comparsas hacen bailes típicos en las calles todo con la iconografía del río y sobre todo para honrar a los pescadores cuya música también nos permean otros de mochas canciones a los pescadores y a la figura del bogga y a la figura de estos seres de la pesca que son los que generan tanto alimento en el país y hay competencias deportivas y hay competencias en dos de ese se alaba la destreza para poder manejar desde la canoa los rápidos a la altura del magdalena en onda también hay competencias sobre las atarrallas y últimamente las mujeres han entrado en esas competencias para lanzar las atarrallas o sea todo lo que tiene que ver con la actividad del río con el bogga con el canalete con las canuas con la destreza con los rápidos todo eso según re en este festival y según la pericia de los pescadores que a lo largo del río van generando todo esta manera de alimentarnos a partir de un milagro o o 네 No su calle va a venir al puerto de tu amor, ¿Dónde es bien es superé? El rescador habla con la Luna, habla con la valla, no tiene por una, solo con las vallas, el rescador habla con la Luna. También se le va a rendir tributo a un personaje que atraviesa todo nuestro país, incluso más allá de él. El moan, el moan es uno de las figuras más legendarias porque uno lo ve desde Palenque, desde el Palenque es ambasílio donde alaséis de la tarde no pueden ir a la quebrada donde se juntan normalmente para hacer la vida social los palinqueros porque alaséis de la tarde aparecen el moan. Allá no van a día a las seis. Lo vemos en Puerto Nariño en el punto de leticia donde se une como una unita al trapecio. Ahí queda Puerto Nariño que estuvimos hablando de Puerto Nariño en el festival del Pirarucú para terminar la temporada pasada. Ahí el moan aparece ellegantísimo con un cinturón de cangrejo y unos zapatos muy elegantes se juntan las fiestas de la iniciación del ciclo menstrual de las mujeres y las secuestras, las embarazas y luego ellas aparecen por eso un tipo elegantísimo que baila muy bien y luego las chicas aparecen embarazadas del moan. Sí, entonces el moan en todas partes es el espíritu del agua, el espíritu burlón que se hablaba también de la mojana. 
El moan también lo encontramos necesariamente en el huila y lo encontramos en el tolima y aquí en onda lo vamos a encontrar también. El moan es el espíritu del agua y como nosotros somos un pueblo tan hídrico y somos un pueblo de culturas anfibias pues nuestro pueblo es un pueblo que habita con el espíritu del moan que está en todas las culturas y digo que a veces fuera de nosotros porque hay canciones cubanas de saline, reutil, y lo que dice, yo tengo un moan que me acompaña y me protege de la gente. Entonces, esto es un espíritu de nuestra naturaleza que es un espíritu de mucho respeto y todas las diferentes fiestas de una u otra manera en las regiones donde habita las riberas y donde se lleva las doncellas a determinadas horas de la madrugada de la noche y donde se confunden las fiestas. En todos los sitios hay una representación del moan forma parte del universo mítico y simbólico de las aguas de este país. Entonces, pues se le rinde al mítico moan que nos atraviesa de un extremo al otro en todas las diferentes imágenes un moan que esté desambasilio de palenque está puerto en ariño y para el cual hay un tributo especial durante la subienda en onda, o sea, es un personaje de una capacidad de ubiquidad espiritual en las aguas de este país impresionante y le rendimos homenaje porque los pescadores le piden permiso al moan para poder pescar en el río. Los ríos tienen dueños dicen las comunidades originarias y hay que pedir permiso para entrar en el río hay que pedir respeto para entrar en el río porque el río es vivo. Entonces, en el magdalena se le pide respeto al moan para poder entrar. Entonces, durante la subienda el río se convierte en el epicentro y todo está de cara al río, las fiestas, la reunión, en los eventos durante el tiempo de la subienda el río nunca nunca está solo y están ocurriendo conciertos y llegan los visitantes de las regiones y todo en post de la pesca y empieza todo este fenómeno de la subienda que va a llegar en febrero y marzo y que va a traer toda esta cantidad de gente y a partir de esta tradición y a partir de esta memoria de esta riquísima memoria que supone onda y el magdalena en nuestra historia hay una forma de florecer y debe rever de ser que hoy está teniendo la ciudad de onda la ciudad de onda está recoperando su importancia cultural y ahora muchos festivales están alrededor de ella está el magdalena fest que ya lleva diez años hay un festival de Iván Asca que es un festival de mujeres moralistas que llegan a pintar la ciudad de onda hay todo un reconocimiento de la arquitectura de sus calles y uno de los eventos o de las situaciones que hace más tributo al magdalena que es bellísimo, que es el buceo del río magdalena y el museo del río magdalena es precisamente la idea de reconectar todo el río con el país y con la región y con los colombianos y es como una manera de visibilizar toda la importancia del río en cada una de las etapas de nuestra historia ese museo en sí mismo tiene por un lado la mitología del río está el moán y están las figuras del río por el otro lado todo un recuerdo de la importancia de la navegación en el magdalena para el desarrollo de nuestra historia entonces en el museo están todas las diferentes etapas del desarrollo de las cartas de navegación la época de los capitanes que atravesaban en esos vapores el río las bajillas que se encontraban en esa época los mapas todo lo que significaba el magdalena y tiene un espacio y tinerante de exposiciones en esta ocasión cuando estuvimos en la feria del libro y estuvimos mirando todo esta 
all this expectation around Honda, with all its miracle and its carnival, we came across an exhibition about the witches along the Río Magdalena, an exhibition that let us trace the stories of the witches and their knowledge: in Europe, where women's knowledge was harshly punished when the old Celtic and Viking women were displaced and the knowledge of plants was taken from them, and the same phenomenon that occurred when the Inquisition reached the lands of America and persecuted feminine knowledge, the knowledge of plants, the knowledge of the forest, the knowledge of midwifery, all the knowledge that had been concentrated over thousands of years of tradition. The exhibition we were seeing recovers that knowledge and the figure of the witches as women who had rebelled against Spanish domination, or who held very ancient knowledge that was not understood by that new way of looking at the world, in which all of this was equated with witchcraft. That is the travelling exhibition on display at the moment, but there are always exhibitions recovering, in many different ways, the memory of the river, because the memory of the river is completely inexhaustible.
Wade Davis also told us that the river has never failed us, that the river has always flowed there for us, whatever happens, in times of forgetting and in times of remembering. Love in the Time of Cholera, one of the most marvelous literary tales of our cultural identity, is that great voyage along the Magdalena made by its two characters, showing that love is possible in old age even if it was not in youth; they give themselves over to the joy of being with each other while sailing the Magdalena on a steamboat, unable to put in at any shore because the country was in the grip of cholera. So there is all this great memory, from the Moán to Bocas de Ceniza, from Love in the Time of Cholera to the steamboats on the Magdalena, to the cultural flourishing of the region today, among workshops, stories, printing houses and breweries that were once great industries, no longer here but still remembered: a whole cultural flourishing that is right now giving Honda back its old importance in memory and in history. And there is the fundamental commitment to look at the Magdalena in order to look at ourselves, to understand the Magdalena in order to understand ourselves, to heal the Magdalena in order to heal our heart and our soul, because it is there, in that river, that the spirit of this people resides, and it is in that river that the miracle of the subienda happens.
And this riverside manna tells us the story of the bogas and of all the people who make it possible for the miracle to become food. The river has to be cared for, because industrial fishing means the subienda no longer arrives in the quantities it once did, and no longer generates the same prosperity and the same sustainability of life that existed in the past. There also need to be generational relays in the festival of the subienda, so that this ancient and powerful tradition endures over time. The festival could not be held during the pandemic, and it could not be held this year either, because there was a COVID peak at exactly the moment the festival was coming; the Carnaval de Barranquilla could be postponed, but the subienda cannot be postponed, because the subienda is when the fish swim upriver. Since it is so tied to the natural phenomenon, the date cannot be moved, because that is when the fish arrive, unlike other festivals that have been shifted in their dates or that can happen at any time, like the Festival de la Tigra, which happens whenever it can during the year; this one happens when it happens, precisely for that reason, and no other place can show anything like it. But it is a festival that reminds us of nature's astonishing generosity toward us.
When the Egyptians mastered the Nile and became an empire, they held the Nile to be sacred, because something that gave them so many marvelous things, from the silt to the floods, had to be a god; how else could the Egyptians interpret the miracle of the Nile, which made the whole empire possible? Well, the Magdalena is a miracle and has to be understood as such, and the place where the miracle of the Magdalena is fully understood as nature's greatest generosity toward us is the Carnaval de la Subienda. That is why we tell the stories of the Carnaval de la Subienda: to look at the fishermen and to look, from the spirit, at the geography, the history, the very resistance, the faithfulness and the companionship this river offers us, a river whose recovery is also the recovery of our own spirit and our own soul.
The Río Magdalena therefore has a tremendous soundtrack, and everyone has sung to the Magdalena: "La Piragua" of Guillermo Cubillos, José Barros, Jorge Villamil, and all the music of Lucho Bermúdez inaugurating a country up and down the Magdalena, which is where it was all built; all the stories of the fishermen who talk to the moon, who talk to the beach, who have no fortune, only their cast net. There is a whole musical thread that follows the Magdalena and runs through the music of every department along whose banks the river flows. So it is also part of our music, part of our literature, part of our history, part of our geology, part of our geography, part of our story as a country, and what celebrates all of that is the festival of the subienda. That is why this celebration is so special: it is a way of setting our eyes, through ferias y fiestas, on what gives historical character to our presence on the map, which is the Río Magdalena. [A song plays; the lyrics are largely unintelligible in the recording, singing of the river, the nation of Colombia, the king of the river and the fishermen.] From the musical point of view, then, this is inexhaustible, because in one way or another everyone sings to the Magdalena, to all its history, to its mythical course, to the bridges over the Magdalena that for us connect the whole country. You will find it everywhere; wherever you go you will find it, because the Magdalena is very large. It has this enormous basin together with the Cauca, which in our current geographical understanding is considered one great basin: the Cauca and the Magdalena run parallel, connecting this country and threading its stories together. The Cauca has its own, but that is another story; we are with the subienda, which belongs to the Magdalena, and that is why we concentrate on the Río Magdalena. But it is the rivers that flow through our way of seeing ourselves as a country and of having organized ourselves around this whole hydric miracle that crosses us from one side to the other, and that is scarce elsewhere: not even the rest of our own America is this well irrigated by great rivers. We are privileged among peoples who have enormous deserts; here we have this abundance of rivers, and among them all our king is the Magdalena.
So, amid all the music and the literature I have mentioned, Love in the Time of Cholera, we take advantage of this program to invite those who have already read it to read it again, and everyone else to start enjoying it, from the moment it says "it was inevitable: the scent of bitter almonds always brought back to him the memory of thwarted loves", and from there to live Love in the Time of Cholera, travel this Magdalena and recover in that narrative the memory of this marvelous river; and, in all the music, the musical memory that binds us to the river; and in the mythology of the Moán; and in the marvel of its course, in its stories, sometimes dark, sometimes heroic, always resilient, of a river that has seen us be born, has seen us grow and has seen us exist as a country, and at whose side, to whose memory and to whose spirit the Carnaval de la Subienda pays particular honor. So, from the miracle of nature that is the Río Magdalena, from the riverside manna that is the subienda, from the fishermen, from the cast nets, from the congolos, from the music, from everything the Magdalena means to us, from its great spirit, from the gaze of all those who have travelled it and loved it, from the times of the great development of the city, the towns and the country around the Magdalena, of the steamboat routes, of everything that arrived along the Magdalena, of the great Arab migration, of the old colonial cities of the era of the Spanish empire, of the time of the republic, of the birth of ideas, of the beauty of the city of Honda and of the fantastic miracle of the subienda, that unique and particular geographical and geological situation that the Magdalena in its infinite richness offers us, and from everything of this that is recognized in the Carnaval de la Subienda, with the narration of Diana Uribe.
This podcast was made possible by the team of the Casa de Historia, Ana Suárez, Milena Beltrán, Arturo Jiménez Vigna and Daniel Moreno Franco; it was recorded at Los Gatos Estudio, with editing and musicalization by Eduardo Corredor Fonseca of Rueda Sonido; and we count on Daniel Schradz, who is with us from here on in Ferias y Fiestas and whom we welcome into our story with great joy. To build this story we had the testimony of maestro Tiberio Murcia, the stories of Ángela Moreno, Germán Ferro and his creation of the museum, which is in itself a narrative of the Magdalena, and Wade Davis, who gave us that marvelous talk, along with all these stories of the people who are right now building heritage, fairs, activities and histories, and who help us, from the murals, from the book fairs, from all of this, to look at that rich point of our history, our geography, our myth and legend, our folklore and our fairs that is the carnival; and always with the strong and powerful help of Santiago Espinoza Uribe and Laura Rojas Aponte of the podcast Cosas de Internet.
[{"start": 0.0, "end": 15.14, "text": " \u00a1Buenas! Hoy nos vamos a meter con un carnaval para el cual tenemos que tocar toda la"}, {"start": 15.14, "end": 22.54, "text": " arteria fundamental. La parte m\u00e1s importante que nos ha ayudado a nosotros en toda nuestra"}, {"start": 22.54, "end": 28.86, "text": " historia de pa\u00eds. Vamos a meternos con el r\u00edo Magdalena y vamos a meternos con el carnaval"}, {"start": 28.86, "end": 30.86, "text": " de la subienda en onda."}, {"start": 58.86, "end": 69.94, "text": " Y al meternos con el carnaval de la subienda nos vamos a meter con una ciudad hist\u00f3rica donde"}, {"start": 69.94, "end": 77.02, "text": " est\u00e1n comunicados una cantidad de puntos diferentes de la historia de este pa\u00eds y todo"}, {"start": 77.02, "end": 84.98, "text": " est\u00e1 comunicado por el r\u00edo Magdalena. Entonces este es un carnaval que se distingue de todas"}, {"start": 84.98, "end": 91.16, "text": " las dem\u00e1s fiestas o cediferencia de todas las dem\u00e1s fiestas que hemos tratado en nuestra"}, {"start": 91.16, "end": 99.78, "text": " camino por esta construcci\u00f3n de ferias y fiestas y carnavales que no se hace a un hito cultural"}, {"start": 99.78, "end": 109.42, "text": " propiamente dicho sino a la naturaleza. Es un festival que se hace para honrar un fen\u00f3meno"}, {"start": 109.42, "end": 117.42, "text": " natural del r\u00edo Magdalena que se llama la subienda y es el carnaval de la subienda en onda."}, {"start": 117.42, "end": 123.74000000000001, "text": " La subienda es ese momento en que las aguas de la tr\u00e1mite se calientan mucho de la parte"}, {"start": 123.74000000000001, "end": 128.3, "text": " de arriba del Magdalena se calientan mucho y entonces es una gran cantidad de peces de un mont\u00f3n"}, {"start": 128.3, "end": 135.46, "text": " de especies. Empiezan una traves\u00eda hasta llegar a casi a los or\u00edgenes del r\u00edo atravesando"}, {"start": 135.46, "end": 142.46, "text": " a los r\u00e1pidos y llegando en grandes cantidades. 
Ese fen\u00f3meno que existe desde tiempos inmemoriales"}, {"start": 142.46, "end": 149.62, "text": " va a ser el objeto de la conmemoraci\u00f3n y del festival que en su nombre lleva el nombre"}, {"start": 149.62, "end": 166.62, "text": " de festival de la subienda."}, {"start": 179.62, "end": 193.1, "text": " La subienda es ese momento en que las aguas de la tr\u00e1mite se calientan mucho y"}, {"start": 193.1, "end": 210.1, "text": " en grandes cantidades y el objeto de la tr\u00e1mite se calientan mucho y llegando en grandes cantidades"}, {"start": 210.1, "end": 230.9, "text": " y las aguas de la tr\u00e1mite se calientan mucho y el objeto de la tr\u00e1mite se calientan mucho"}, {"start": 230.9, "end": 237.34, "text": " eternos con el r\u00edo, porque el r\u00edo realmente es el protagonista, es el que hace posible"}, {"start": 237.34, "end": 244.62, "text": " el festival, es el que le dio toda la importancia hist\u00f3rica a onda y que nos comunica esta"}, {"start": 244.62, "end": 252.78, "text": " grandeza hist\u00f3rica que existi\u00f3 en otro tiempo entre onda y barranquilla, los dos extremos,"}, {"start": 252.78, "end": 259.22, "text": " del r\u00edo Magdalena en su desarrollo hist\u00f3rico, por donde atraves\u00f3 una gran cantidad de poblaciones"}, {"start": 259.22, "end": 265.22, "text": " todav\u00eda est\u00e1 ah\u00ed, pero en un momento todo fue el polo de desarrollo, por ah\u00ed entraron"}, {"start": 265.22, "end": 273.02000000000004, "text": " los pianos, por ah\u00ed entraron, una gran cantidad de historias, la cultura \u00e1rabe, much\u00edsimas"}, {"start": 273.02000000000004, "end": 279.70000000000005, "text": " costos todav\u00eda en onda y casas que tienen letreros en \u00e1rabe, de todo lo que signific\u00f3"}, {"start": 279.70000000000005, "end": 285.86, "text": " durante tantos a\u00f1os, esta arteria fluvial, que estuvo, digamos, moldeando nuestra historia"}, {"start": 285.86, "end": 292.18, "text": " con su largo y enorme recorrido, el r\u00edo Magdalena se extiende por m\u00e1s de 1500 kil\u00f3metros del"}, {"start": 292.18, "end": 298.42, "text": " interior del actual Colombia, porque es mucho m\u00e1s viejo el que todos nosotros y todos los que contamos"}, {"start": 298.42, "end": 304.98, "text": " el cuento y las aguas serpentean entre las cordilleras central y oriental hasta encontrarse"}, {"start": 304.98, "end": 312.06, "text": " arriba, ya en la costa con las abanas y las cienagas del caribe colombiano, nace en un paramo"}, {"start": 312.06, "end": 317.34, "text": " como la mayor\u00eda de nuestros r\u00edos, nace en un paramo que comparten los departamentos"}, {"start": 317.34, "end": 324.14, "text": " del huila y el cauca, exactamente en la laguna de Jumma muy, por eso uno de los nombres"}, {"start": 324.14, "end": 330.54, "text": " que tiene es Juma, despu\u00e9s de un recorrido enorme, va a llegar a un punto de esa invocadura"}, {"start": 330.54, "end": 338.22, "text": " que se conoce como las bocas de ceniza, cerca de barranquilla y el tama\u00f1o del r\u00edo hace"}, {"start": 338.22, "end": 344.14000000000004, "text": " que se diga regionalmente entre secciones geogr\u00e1ficas, el alto Magdalena, el Magdalena"}, {"start": 344.14000000000004, "end": 350.86, "text": " medio y el bajo Magdalena, el mismo tambi\u00e9n es un r\u00edo regi\u00f3n, porque los departamentos"}, {"start": 350.86, "end": 359.42, "text": " que est\u00e1n a la vera del Magdalena generan en s\u00ed mismo regiones, culturas, toda una"}, {"start": 359.42, "end": 364.86, 
"text": " Colombia dentro de las colombias, una regi\u00f3n dentro de las regiones, es decir cada uno de"}, {"start": 364.86, "end": 371.26, "text": " esos departamentos pertenece a una regi\u00f3n geogr\u00e1fica, pero al estar a la vera del Magdalena"}, {"start": 371.26, "end": 377.74, "text": " ellos van a crear tambi\u00e9n una propia articulaci\u00f3n cultural y geogr\u00e1fica distinta como hemos"}, {"start": 377.74, "end": 384.56, "text": " visto que entre nuestras regiones hay subregiones, ah\u00ed es decir espacios culturales y geogr\u00e1ficos"}, {"start": 384.56, "end": 391.90000000000003, "text": " propios dentro de otros espacios regionales, en esta complejidad regional que es Colombia"}, {"start": 391.9, "end": 397.34, "text": " y que estamos abordando en nuestras ferias y fiestas, entonces hay un r\u00edo regi\u00f3n que"}, {"start": 397.34, "end": 403.46, "text": " son todos los pueblos que est\u00e1n a la vera del Magdalena, este r\u00edo afectado a la geolog\u00eda,"}, {"start": 403.46, "end": 409.29999999999995, "text": " la biolog\u00eda, la geograf\u00eda, es decir este r\u00edo nos define, este r\u00edo nos construye, este"}, {"start": 409.29999999999995, "end": 416.97999999999996, "text": " r\u00edo es absolutamente importante en un pa\u00eds donde los r\u00edos son descomunales, donde tenemos"}, {"start": 416.97999999999996, "end": 421.82, "text": " unos r\u00edos absolutamente impresionantes, porque acuerdes de que no te molen a m\u00e1s zonas"}, {"start": 421.82, "end": 430.34, "text": " el orinoco, el cauca el atarato, es decir nosotros tenemos una hidrograf\u00eda descomunal y maravillosa"}, {"start": 430.34, "end": 437.86, "text": " a nosotros nos parece casi normal no lo es, muchos puntos de la tierra son secos, esta maravilla"}, {"start": 437.86, "end": 443.38, "text": " y esta diversidad de bosques secos tropicales y todo esto que est\u00e1n alrededor del Magdalena"}, {"start": 443.38, "end": 450.14, "text": " son un milagro de la naturaleza y una dada diva que nos ha dado digamos la geograf\u00eda inmensa"}, {"start": 450.14, "end": 457.21999999999997, "text": " y maravillosa de nuestro pa\u00eds, entonces esta biodiversidad genera una cantidad de ecosistemas"}, {"start": 457.21999999999997, "end": 464.18, "text": " de microclimas, modifica el paisaje desde la cuenca hasta las aguas y ha sostenido poblaciones"}, {"start": 464.18, "end": 468.9, "text": " humanas desde tiempos inmemoriales, desde los pueblos originales, desde los tiempos"}, {"start": 468.9, "end": 474.62, "text": " de las leyendas, por lo tanto ha tenido muchos nombres a lo largo de su causa se le ha"}, {"start": 474.62, "end": 481.1, "text": " llamado el guacahayo, horri\u00f3 de las tumbas, se le ha llamado en el alto Magdalena Juma,"}, {"start": 481.1, "end": 488.54, "text": " horri\u00f3 del pa\u00eds amigo, cerca de las poblaciones muy escas se le llama Arli o Arbi, r\u00edo el pez,"}, {"start": 488.54, "end": 494.94, "text": " horri\u00f3 del boca chico en el Magdalena medio y en el caribe Caracali el gran r\u00edo de"}, {"start": 494.94, "end": 503.46, "text": " los caimanes o Cariwanya Agua Grande, hace parte de un pa\u00eds que tenemos en Colombia que es"}, {"start": 503.46, "end": 511.21999999999997, "text": " un pa\u00eds anfibio que se ha caracterizado porque todas su existencia est\u00e1 en el r\u00edo y ha sido"}, {"start": 511.21999999999997, "end": 519.14, "text": " testigo de todo, del progreso, del desarrollo, de la navegaci\u00f3n, de la \u00e9poca de que estaban"}, 
{"start": 519.14, "end": 524.8199999999999, "text": " los ferrocarriles y se comunicaba todo el pa\u00eds por el r\u00edo del ferrocarril ha sido tambi\u00e9n"}, {"start": 524.8199999999999, "end": 531.5799999999999, "text": " testigo de los episodios de conflicto armado y violencia tanto que una parte del r\u00edo que"}, {"start": 531.58, "end": 537.38, "text": " se conoce como el Magdalena medio fue en un momento epicentro de uno de los momentos m\u00e1s"}, {"start": 537.38, "end": 543.4200000000001, "text": " complicados y m\u00e1s duros del conflicto el r\u00edo conflicto colombiano que pues necesariamente"}, {"start": 543.4200000000001, "end": 549.38, "text": " atraviesa nuestros relatos porque nos ha atravesado durante much\u00edsimo tiempo para todas estas"}, {"start": 549.38, "end": 557.26, "text": " historias tuvimos la fortuna de contar en la feria del libro en onda con una conferencia"}, {"start": 557.26, "end": 564.34, "text": " de nuestro gran mentor y digamos adorador de este r\u00edo Wade Davis que ha escrito el libro"}, {"start": 564.34, "end": 569.8199999999999, "text": " sobre el Magdalena y con Germ\u00e1n Ferro que es el que ha creado el museo del que hablaremos"}, {"start": 569.8199999999999, "end": 576.9, "text": " el museo del r\u00edo de onda que es una cosa bien importante para conocer y nos desea"}, {"start": 576.9, "end": 583.5, "text": " Wade Davis con toda una vida atravesando el r\u00edo conociendo sus poblaciones conociendo"}, {"start": 583.5, "end": 591.42, "text": " los ribere\u00f1os los pescadores las historias los carnavales las fiestas la vida cotidiana nos"}, {"start": 591.42, "end": 600.82, "text": " dec\u00eda que unos mamos le hab\u00edan comentado que la paz en Colombia se logra cuando se logre"}, {"start": 600.82, "end": 608.82, "text": " recuperar la totalidad del r\u00edo Magdalena que el r\u00edo Magdalena es no solamente la arteria"}, {"start": 608.82, "end": 615.74, "text": " geogr\u00e1fica sino el centro espiritual del pa\u00eds y que es a trav\u00e9s del Magdalena que se"}, {"start": 615.74, "end": 623.38, "text": " puede recuperar todo el sentido de paz que estamos tan duramente luchando por conseguir"}, {"start": 623.38, "end": 628.6600000000001, "text": " el dice que el secreto est\u00e1 en el Magdalena y que fueron los mamos los que le contaron"}, {"start": 628.6600000000001, "end": 635.9000000000001, "text": " que ese r\u00edo es verdaderamente nuestra identidad y nuestra fuente de espiritualidad entonces"}, {"start": 635.9, "end": 643.98, "text": " por eso honramos al r\u00edo agradecemos al Magdalena y su participaci\u00f3n para contarnos en la"}, {"start": 643.98, "end": 649.02, "text": " experiencia incre\u00edble que ha aplasmado a trav\u00e9s de sus libros y al Magdalena se van"}, {"start": 649.02, "end": 655.78, "text": " ferro toda la historia del museo del r\u00edo y vamos para nuestro carnaval que es en la"}, {"start": 655.78, "end": 663.1, "text": " subyenda pero ah\u00ed nos vamos a meter tambi\u00e9n con onda y la poblaci\u00f3n de onda tiene su propia"}, {"start": 663.1, "end": 673.0600000000001, "text": " y maravillosa historia ella fue fundada en 1539 de las bien antiguas San Bartolome de onda"}, {"start": 673.0600000000001, "end": 680.78, "text": " la villa de San Bartolome de Begunda en la tierra donde viv\u00eda la hernia de los panches y"}, {"start": 680.78, "end": 688.14, "text": " se inspiraron para llamarla as\u00ed en los ondimas que era el pueblo panche que viv\u00eda ah\u00ed que"}, {"start": 688.14, "end": 
695.26, "text": " quedaba en la cercan\u00eda la actual rey onda de onda es important\u00edsima de hace miles de a\u00f1os el"}, {"start": 695.26, "end": 702.78, "text": " punto en que clonfluyen lo r\u00e1pido o sea unos raudales que hacen que sea important\u00edsima y se"}, {"start": 702.78, "end": 710.02, "text": " aclave dentro del transporte del r\u00edo entonces las condiciones geogr\u00e1ficas hacen que cuando llegu\u00e9 al"}, {"start": 710.02, "end": 717.18, "text": " salto de onda se produzca una barrera natural para la navegaci\u00f3n y ya en las \u00faltimas d\u00e9cadas del"}, {"start": 717.18, "end": 724.6999999999999, "text": " siglo XVI hab\u00eda establecido ah\u00ed un fuerte fluvial que era clave para los espa\u00f1oles para comunicarse"}, {"start": 724.6999999999999, "end": 732.8599999999999, "text": " con el interior de lo que en esa \u00e9poca ellos llamaron el nuevo reino de granada en el caribe y en el"}, {"start": 732.8599999999999, "end": 739.02, "text": " resto del mundo entonces para ellos era clave o sea desde ah\u00ed lo fue siempre pero tambi\u00e9n lo fue"}, {"start": 739.02, "end": 747.26, "text": " durante la \u00e9poca del imperio espa\u00f1ol y entonces por votivos de su estrat\u00e9gica ubicaci\u00f3n geogr\u00e1fica"}, {"start": 747.26, "end": 754.6999999999999, "text": " onda fue una de las poblaciones m\u00e1s importantes en la \u00e9poca colonial y esto explica el origen"}, {"start": 754.6999999999999, "end": 762.42, "text": " el moc\u00edsimo de su arquitectura ella tiene una arquitectura colonial una serie de joyas en la"}, {"start": 762.42, "end": 770.3, "text": " misma estructura de la ciudad de joyas y de puentes que a lo largo de toda su renovada importancia"}, {"start": 770.3, "end": 778.62, "text": " hist\u00f3rica se han visto remozadas en diferentes etapas de nuestra vida como pa\u00eds y hay calles que se"}, {"start": 778.62, "end": 784.86, "text": " asemejan a Cartagena o a Montp\u00f3s porque corresponden a esos puntos estrat\u00e9gicos de desarrollo"}, {"start": 784.86, "end": 791.18, "text": " colonial en tiempo del imperio espa\u00f1ol en nuestra historia en la \u00e9poca republicana despu\u00e9s de"}, {"start": 791.18, "end": 796.3399999999999, "text": " las independencias sigue siendo muy importante onda en el siglo XX ya comienzo del siglo XX"}, {"start": 796.3399999999999, "end": 803.6999999999999, "text": " hasta mediados la ciudad sigue siendo s\u00faper importante es una ciudad de prosperidad va a ser una"}, {"start": 803.6999999999999, "end": 810.18, "text": " ciudad de industrias va a tener f\u00e1brica serbeser\u00edas las serbeser\u00edas todav\u00eda son puntos hist\u00f3ricos"}, {"start": 810.18, "end": 817.54, "text": " de recuerdo y tambi\u00e9n va a tener imparentas important\u00edsimas el aer\u00f3dromo que es muy importante"}, {"start": 817.54, "end": 824.78, "text": " en el orgullo de la ciudad porque fue pionero antes que muchas otras partes del pa\u00eds y una \u00e9lite"}, {"start": 824.78, "end": 830.2199999999999, "text": " pol\u00edtica que no solamente fue importante dentro de la historia de la ciudad sino dentro de la"}, {"start": 830.2199999999999, "end": 836.9399999999999, "text": " historia del pa\u00eds ah\u00ed es donde tenemos la cuna de alfonso lupes pumarejo y muchas otras figuras"}, {"start": 836.9399999999999, "end": 842.6999999999999, "text": " que tendr\u00edan una importancia fundamental en nuestro relato pol\u00edtico como pa\u00eds y tambi\u00e9n"}, {"start": 842.7, "end": 847.86, "text": " 
va a tener una cantidad de momentos de la historia de la construcci\u00f3n de la historia republicana"}, {"start": 847.86, "end": 855.0200000000001, "text": " la banda sonora es del lucho vermudes porque el lucho vermudes fue el que estuvo siempre musicalizando"}, {"start": 855.0200000000001, "end": 861.22, "text": " las inauguraciones de la \u00e9poca en que est\u00e1bamos construyendo un pa\u00eds que buscaba la modernidad"}, {"start": 861.22, "end": 867.1800000000001, "text": " un pa\u00eds en tiempos de los ferrocarriles de la flota mercante gran colombiana en tiempos de"}, {"start": 867.18, "end": 873.8599999999999, "text": " la navegaci\u00f3n del maidalena como puente fundamental de desarrollo leamos nosotros y hicimos un pa\u00eds"}, {"start": 873.8599999999999, "end": 881.02, "text": " que el lleg\u00f3 hasta un punto y despu\u00e9s nos va a trav\u00e9s de saber conflicto mucho tiempo entonces"}, {"start": 881.02, "end": 885.9399999999999, "text": " estamos tratando de salir de ah\u00ed y seguir haciendo pa\u00edses como la idea pero seguimos haciendo"}, {"start": 885.94, "end": 904.62, "text": " fiestas antes durante y despu\u00e9s aqu\u00ed empieza el espacio comercial el moan y los pescadores"}, {"start": 904.62, "end": 909.82, "text": " se encuentran en la alegr\u00eda del carnaval y reinado de la subyenda en onda en esta fiesta se"}, {"start": 909.82, "end": 916.6600000000001, "text": " celebra al r\u00edo maigdalena un r\u00edo que marca el ritmo de la vida de nuestro pa\u00eds descubre"}, {"start": 916.6600000000001, "end": 922.86, "text": " en la onda sonora los sonidos que acompa\u00f1an la vida cotidiana de Colombia del unes abiernes"}, {"start": 922.86, "end": 927.2600000000001, "text": " desde las 10 de la ma\u00f1ana en Radio Nacional de Colombia estamos donde t\u00fa est\u00e1s"}, {"start": 927.26, "end": 944.18, "text": " entonces a dios momento en que el vapor era la forma de navegaci\u00f3n para llegar a Bogot\u00e1 por"}, {"start": 944.18, "end": 950.42, "text": " llegar unos inmigrantes y por ah\u00ed llegaban toda la gran migraci\u00f3n \u00e1rabe por eso les digo que"}, {"start": 950.42, "end": 956.58, "text": " hay sitios en onda que tienen casas donde mantienen todav\u00eda los letreros en \u00e1rabe por ah\u00ed se"}, {"start": 956.58, "end": 962.96, "text": " conectaba a la industria por ah\u00ed se conectaba todo ese proyecto de pa\u00eds y se mov\u00eda de un lado al"}, {"start": 962.96, "end": 969.7800000000001, "text": " otro llegaban a onda de ah\u00ed van a villeta de villeta y van a entrenar esta Bogot\u00e1 despu\u00e9s las"}, {"start": 969.7800000000001, "end": 976.1800000000001, "text": " carreteras y la la aviaci\u00f3n le fueron quitando relevancia a esta manera de comunicaci\u00f3n a trav\u00e9s"}, {"start": 976.1800000000001, "end": 984.0600000000001, "text": " del r\u00edo y el ferrocarril y entonces lo que hac\u00eda que onda conectara directamente tanto al"}, {"start": 984.06, "end": 991.6199999999999, "text": " caribe como a la estaci\u00f3n de la sabana y en esa medida fuera un punto de partida y un punto de"}, {"start": 991.6199999999999, "end": 998.2199999999999, "text": " encuentro entre todos los diferentes din\u00e1micas de modernidad y desarrollo que estaba teniendo el"}, {"start": 998.2199999999999, "end": 1004.66, "text": " pa\u00eds y que era la \u00e9poca tambi\u00e9n dorada de barranquilla estas son dos ciudades hermanas y son"}, {"start": 1004.66, "end": 1013.8199999999999, "text": " hermanadas por el r\u00edo entonces tienen 
un efecto digamos simult\u00e1neo la gran \u00e9poca de la"}, {"start": 1013.82, "end": 1021.62, "text": " puerta de oro de Colombia de barranquilla tambi\u00e9n lo va a hacer de onda y luego un tiempo en que todo"}, {"start": 1021.62, "end": 1028.8200000000002, "text": " ese esplendor se viene un poco a menos en barranquilla tambi\u00e9n eso va a pasar en onda y empiezan a"}, {"start": 1028.8200000000002, "end": 1036.14, "text": " hacer menos importantes sobre todo onda para la econom\u00eda nacional en la medida en que se empiece a"}, {"start": 1036.14, "end": 1042.8200000000002, "text": " la espalda al r\u00edo cuando le empezamos a dar la espalda al r\u00edo le damos tambi\u00e9n la espalda"}, {"start": 1042.82, "end": 1050.82, "text": " toda la historia a toda la cantidad de bagaje y de relato que hay de nuestra historia en onda"}, {"start": 1050.82, "end": 1058.7, "text": " haya y casas donde estuvo la expedici\u00f3n bot\u00e1nica est\u00e1 eso todav\u00eda hay casas tambi\u00e9n donde"}, {"start": 1058.7, "end": 1064.34, "text": " naci\u00f3 al f\u00f3sforo p\u00e9simo marejo que fue uno de los primer reformadores sociales en Colombia"}, {"start": 1064.34, "end": 1072.34, "text": " se ha habido un relato de pais que est\u00e1 en onda y que ahora est\u00e1 reverdesiendo y est\u00e1 teniendo un"}, {"start": 1072.34, "end": 1079.6999999999998, "text": " nuevo aire y una nueva vida y una nueva importancia cultural y est\u00e1 digamos como volviendo a narrarse y"}, {"start": 1079.6999999999998, "end": 1086.8999999999999, "text": " a mirarse en la historia la importancia de onda en nuestra cultura pero ya siempre lo ha sido es"}, {"start": 1086.8999999999999, "end": 1093.06, "text": " un punto de relato en la colonia en la rep\u00fablica en el desarrollo en la modernidad en las ideas"}, {"start": 1093.06, "end": 1100.62, "text": " pol\u00edticas en la misma expedici\u00f3n es usted puede rastrear diferentes momentos de nuestra historia como"}, {"start": 1100.62, "end": 1107.94, "text": " pa\u00eds a trav\u00e9s de las huellas de relato que quedan en onda no le digo que hay hay huellas de los"}, {"start": 1107.94, "end": 1114.34, "text": " \u00e1rabes de la expedici\u00f3n bot\u00e1nica de la colonia de la formaci\u00f3n de la cidad republicana de la"}, {"start": 1114.34, "end": 1122.1799999999998, "text": " industria de la \u00e9poca dorada del r\u00edo Magdalena todo eso est\u00e1 nonda entonces empezamos por resaltar"}, {"start": 1122.18, "end": 1131.26, "text": " al Magdalena a quien rendimos tributo con esta fiesta a la ciudad de onda a quien rendimos un enorme"}, {"start": 1131.26, "end": 1138.22, "text": " reconocimiento hist\u00f3rico y al festival de la subienda que es la que nos convoca para atraer en el"}, {"start": 1138.22, "end": 1146.46, "text": " marco de estas historias un fen\u00f3meno natural \u00fanico y particular de nuestra ya singular y particular\u00edsima"}, {"start": 1146.46, "end": 1153.54, "text": " geograf\u00eda de este pa\u00eds exuberante y maravilloso en el que hemos tenido la suerte de nacer para"}, {"start": 1153.54, "end": 1161.54, "text": " contemplar en \u00e9l todas sus posibilidades y maravillas entonces resulta que a este fen\u00f3meno se le"}, {"start": 1161.54, "end": 1168.3, "text": " conoce con el nombre del man\u00e1 Rivera\u00f1o porque es una cosa prodigiosa y esa comienzo del a\u00f1o"}, {"start": 1168.3, "end": 1174.3400000000001, "text": " en el verano en la \u00e9poca m\u00e1s calidad del a\u00f1o para nosotros las estaciones son cosas 
muy diferentes a"}, {"start": 1174.34, "end": 1179.86, "text": " lo que son las estaciones de los pueblos que est\u00e1n marcados por la forma como el invierno el"}, {"start": 1179.86, "end": 1185.6999999999998, "text": " verano o la primavera aparecen en su gu\u00eda lo largo del a\u00f1o nosotros no tenemos este tipo de"}, {"start": 1185.6999999999998, "end": 1190.6599999999999, "text": " estaciones pero s\u00ed tenemos otras tenemos estaciones de verano que es cuando el bosque se"}, {"start": 1190.6599999999999, "end": 1196.82, "text": " cotropical entra como una especie de invernaci\u00f3n para volver a florescer en tiempo de lluvias entonces"}, {"start": 1196.82, "end": 1203.54, "text": " cuando viene en la \u00e9poca m\u00e1s c\u00e1lida ah\u00ed es cuando lo digo que las aguas y las cienagas del pa\u00eds se"}, {"start": 1203.54, "end": 1212.6599999999999, "text": " calientan y empiezan a perder ox\u00edgeno y esto hace que much\u00edsimas especies no se sientan"}, {"start": 1212.6599999999999, "end": 1219.94, "text": " c\u00f3mo hacen ese calor o no qu\u00e9 hace y arranquen a buscar aguas m\u00e1s frescas y empiezan a buscar el sur"}, {"start": 1219.94, "end": 1227.78, "text": " y remonte en el r\u00edo y cuando remontan el r\u00edo van alcanzando sumadores sexuales decir cuando empieza"}, {"start": 1227.78, "end": 1234.8999999999999, "text": " toda la la gran b\u00fasqueda de estas aguas son peque\u00f1itos y en la medida que remontan el r\u00edo se van"}, {"start": 1234.8999999999999, "end": 1241.74, "text": " desarrollando sexualmente y ah\u00ed vienen los bocachicolos ni curos los tol\u00f3nvalos vagres y muchas otras"}, {"start": 1241.74, "end": 1247.06, "text": " especies que empiezan este gran viajaga de cuenta como las grandes migraciones de las aves pero"}, {"start": 1247.06, "end": 1253.98, "text": " son migraciones de peces en busca de estas aguas entonces ellos empiezan a buscar esas caudales de"}, {"start": 1253.98, "end": 1261.58, "text": " agua pero se van a encontrar en el magdalena a la altura de onda con que hay unos r\u00e1pidos y"}, {"start": 1261.58, "end": 1270.42, "text": " hay unos bancos entonces en ese punto se digamos se divide todo ese jard\u00famen que viene de todo a"}, {"start": 1270.42, "end": 1277.38, "text": " esta parte del r\u00edo y ah\u00ed no pueden circular dentro del r\u00edo por lo cual se van hacia los lados a la"}, {"start": 1277.38, "end": 1285.74, "text": " ribera en la ribera es donde los esperan los pescadores que est\u00e1n atabiados de las atarrallas de"}, {"start": 1285.74, "end": 1292.6200000000001, "text": " los chinchorros de los congolos y a trav\u00e9s de todas estas pr\u00e1cticas que requieren una enorme"}, {"start": 1292.6200000000001, "end": 1301.3400000000001, "text": " pericia los capturan con facilidad y dan toda una forma de vida a esta cultura antigua de los pescadores"}, {"start": 1301.34, "end": 1310.26, "text": " del r\u00edo cuando se acercan a la ribera se acercan en unas grandes cantidades que producen"}, {"start": 1310.26, "end": 1318.1399999999999, "text": " o est\u00e1 en milagro del man\u00e1 ribera\u00f1o y esta pr\u00e1ctica est\u00e1 antigua que los arque\u00f3logos han"}, {"start": 1318.1399999999999, "end": 1323.6999999999998, "text": " encontrado testimonios de sociedades pesqueras que habitaban en las orillas de onda desde el"}, {"start": 1323.6999999999998, "end": 1330.78, "text": " a\u00f1o 500 antes de nuestra era entonces esto es una maravilla que produce una sostenibilidad"}, {"start": 1330.78, 
"end": 1338.3799999999999, "text": " alimentaria lo ha producido a lo largo de muchos siglos y hace que toda la \u00e9poca de la dominaci\u00f3n"}, {"start": 1338.3799999999999, "end": 1344.3, "text": " espayola de la rep\u00fablica y la \u00e9poca actual hace que los habitantes ribera\u00f1os sean como la"}, {"start": 1345.02, "end": 1353.5, "text": " la parte fundamental de la cultura del r\u00edo y de la cultura de onda efectiva de la subiente"}, {"start": 1353.5, "end": 1360.34, "text": " es entrenero y abril que es cuando se presenta el fen\u00f3meno y en ese momento onda en sus mejores"}, {"start": 1360.34, "end": 1366.7, "text": " \u00e9pocas llegaba a quinto aplicar la poblaci\u00f3n y los pescadores se iban todos para all\u00e1 Colombia"}, {"start": 1366.7, "end": 1373.26, "text": " es un pa\u00eds que ha habido de cosechas y de grandes bonanzas la subiendera como una bonanza es como"}, {"start": 1373.26, "end": 1381.02, "text": " una bonanza cada a\u00f1o y atrae un mont\u00f3n de gente que va a llegar a vivir este milagro de prosperidad"}, {"start": 1381.02, "end": 1387.7, "text": " que ha significado la subienda y eso les permit\u00eda en las mejores \u00e9pocas comprar casa solamente"}, {"start": 1387.7, "end": 1392.94, "text": " con lo que hac\u00edan en una sola subienda y son pues una cantidad de bonanza a las que se van a"}, {"start": 1392.94, "end": 1399.42, "text": " dar all\u00e1 en el marco de este fen\u00f3meno antes de los a\u00f1os 50 onda quer\u00eda celebrar los esp\u00edritus"}, {"start": 1399.42, "end": 1408.74, "text": " del r\u00edo y de la bonanza pero esa partir de 1962 que se produce el primer carnaval que se organiza"}, {"start": 1408.74, "end": 1416.06, "text": " ya apropiamente el carnaval de la subienda y la fiesta nace para honrar la riqueza que representa"}, {"start": 1416.06, "end": 1422.74, "text": " la subienda la albricia de este fen\u00f3meno maravilloso es una manera de emular el esp\u00edritu del"}, {"start": 1422.74, "end": 1427.22, "text": " carnaval y de las fiestas acuerde que todas las ciudades que hemos visto en nuestras historias"}, {"start": 1427.22, "end": 1436.58, "text": " cuando sienten la necesidad de reiterar y de darle una representaci\u00f3n de importancia de la"}, {"start": 1436.58, "end": 1443.1399999999999, "text": " ciudad crean una fiesta y crean un carnaval eso lo hemos visto a lo largo de todos estos relatos"}, {"start": 1443.1399999999999, "end": 1451.82, "text": " y onda no va a ser la excepci\u00f3n entonces ellos hacen su carnaval y el carnaval se inspira en"}, {"start": 1451.82, "end": 1458.82, "text": " las mismas fechas del carnaval de barranquilla y muchos barranquilleros arriba a la ciudad a lo largo del"}, {"start": 1458.82, "end": 1464.6599999999999, "text": " siglo XX porque entre las cosas porque barranquilla es como la el carnaval y signa de donde se"}, {"start": 1464.66, "end": 1472.66, "text": " toman todos y por esta conexi\u00f3n entre onda y barranquilla entonces eso tiene su reinado y el"}, {"start": 1472.66, "end": 1480.1000000000001, "text": " reinado est\u00e1 he dado por los barrios de la ciudad y cada barrio lleva su candidata y hay rivalidades"}, {"start": 1480.1000000000001, "end": 1487.8200000000002, "text": " legendarias entre barrios que siempre han estado compitiendo por el barrio Bogot\u00e1 el barrio de"}, {"start": 1487.82, "end": 1495.3799999999999, "text": " Santa Luc\u00eda que nos contaba el maestro tiberio murcia de c\u00f3mo esto va generando digamos vieja rivalidades"}, {"start": 
1495.3799999999999, "end": 1500.8999999999999, "text": " hist\u00f3ricas alrededor de la reina y la manera como se lige la reina es muy bonita porque a la reina"}, {"start": 1500.8999999999999, "end": 1507.22, "text": " le mandan cartas de amor cada carta de amor es equivalente a un voto entonces la que recibe"}, {"start": 1507.22, "end": 1514.22, "text": " la mayor cantidad de cartas de amor es la que va a ganar y esto en otra \u00e9poca ten\u00eda un despliegue"}, {"start": 1514.22, "end": 1520.26, "text": " importante si sal\u00edan la revista corromos el mundo estaba pendiente de cual era la que hab\u00eda ganado"}, {"start": 1520.26, "end": 1527.74, "text": " la subienda el pueblo todo participaba en la elecci\u00f3n de las candidatas y el carnaval tiene"}, {"start": 1527.74, "end": 1533.34, "text": " su reinado y en todas las fiestas hemos visto en la importancia del reinado y aqu\u00ed tambi\u00e9n lo tiene"}, {"start": 1533.34, "end": 1540.54, "text": " pero tambi\u00e9n hacen comparsas hacen bailes t\u00edpicos en las calles todo con la iconograf\u00eda del r\u00edo"}, {"start": 1540.54, "end": 1547.06, "text": " y sobre todo para honrar a los pescadores cuya m\u00fasica tambi\u00e9n nos permean otros de"}, {"start": 1547.06, "end": 1554.3, "text": " mochas canciones a los pescadores y a la figura del bogga y a la figura de estos seres de la pesca"}, {"start": 1554.3, "end": 1560.94, "text": " que son los que generan tanto alimento en el pa\u00eds y hay competencias deportivas y hay competencias"}, {"start": 1560.94, "end": 1569.78, "text": " en dos de ese se alaba la destreza para poder manejar desde la canoa los r\u00e1pidos a la altura del"}, {"start": 1569.78, "end": 1576.06, "text": " magdalena en onda tambi\u00e9n hay competencias sobre las atarrallas y \u00faltimamente las mujeres han"}, {"start": 1576.06, "end": 1582.1399999999999, "text": " entrado en esas competencias para lanzar las atarrallas o sea todo lo que tiene que ver con la"}, {"start": 1582.1399999999999, "end": 1588.74, "text": " actividad del r\u00edo con el bogga con el canalete con las canuas con la destreza con los r\u00e1pidos"}, {"start": 1588.74, "end": 1597.26, "text": " todo eso seg\u00fan re en este festival y seg\u00fan la pericia de los pescadores que a lo largo del r\u00edo"}, {"start": 1597.26, "end": 1602.06, "text": " van generando todo esta manera de alimentarnos a partir de un milagro"}, {"start": 1657.58, "end": 1664.34, "text": " o"}, {"start": 1664.3799999999999, "end": 1674.82, "text": " o"}, {"start": 1677.98, "end": 1683.98, "text": " \ub124"}, {"start": 1683.98, "end": 1689.58, "text": " No su calle va a venir al puerto de tu amor,"}, {"start": 1689.58, "end": 1692.6200000000001, "text": " \u00bfD\u00f3nde es bien es super\u00e9?"}, {"start": 1692.6200000000001, "end": 1695.9, "text": " El rescador habla con la Luna,"}, {"start": 1695.9, "end": 1698.7, "text": " habla con la valla,"}, {"start": 1698.7, "end": 1701.8600000000001, "text": " no tiene por una,"}, {"start": 1701.8600000000001, "end": 1704.06, "text": " solo con las vallas,"}, {"start": 1704.06, "end": 1705.8600000000001, "text": " el rescador habla con la Luna."}, {"start": 1705.8600000000001, "end": 1709.1, "text": " Tambi\u00e9n se le va a rendir tributo"}, {"start": 1709.1, "end": 1711.5, "text": " a un personaje que atraviesa"}, {"start": 1711.5, "end": 1714.74, "text": " todo nuestro pa\u00eds, incluso m\u00e1s all\u00e1 de \u00e9l."}, {"start": 1714.74, "end": 1716.54, "text": " El moan,"}, {"start": 1716.54, "end": 
1720.34, "text": " el moan es uno de las figuras m\u00e1s legendarias"}, {"start": 1720.34, "end": 1724.06, "text": " porque uno lo ve desde Palenque,"}, {"start": 1724.06, "end": 1725.82, "text": " desde el Palenque es ambas\u00edlio"}, {"start": 1725.82, "end": 1727.38, "text": " donde alas\u00e9is de la tarde"}, {"start": 1727.38, "end": 1728.86, "text": " no pueden ir a la quebrada"}, {"start": 1728.86, "end": 1731.14, "text": " donde se juntan normalmente"}, {"start": 1731.14, "end": 1734.06, "text": " para hacer la vida social los palinqueros"}, {"start": 1734.06, "end": 1736.62, "text": " porque alas\u00e9is de la tarde aparecen el moan."}, {"start": 1736.62, "end": 1738.54, "text": " All\u00e1 no van a d\u00eda a las seis."}, {"start": 1738.54, "end": 1741.58, "text": " Lo vemos en Puerto Nari\u00f1o"}, {"start": 1741.58, "end": 1743.3799999999999, "text": " en el punto de leticia"}, {"start": 1743.3799999999999, "end": 1746.22, "text": " donde se une como una unita al trapecio."}, {"start": 1746.22, "end": 1747.58, "text": " Ah\u00ed queda Puerto Nari\u00f1o"}, {"start": 1747.58, "end": 1749.26, "text": " que estuvimos hablando de Puerto Nari\u00f1o"}, {"start": 1749.26, "end": 1750.42, "text": " en el festival del Piraruc\u00fa"}, {"start": 1750.42, "end": 1752.74, "text": " para terminar la temporada pasada."}, {"start": 1752.74, "end": 1756.42, "text": " Ah\u00ed el moan aparece ellegant\u00edsimo"}, {"start": 1756.42, "end": 1758.1, "text": " con un cintur\u00f3n de cangrejo"}, {"start": 1758.1, "end": 1759.7, "text": " y unos zapatos muy elegantes"}, {"start": 1759.7, "end": 1762.6599999999999, "text": " se juntan las fiestas de la iniciaci\u00f3n"}, {"start": 1762.6599999999999, "end": 1766.1, "text": " del ciclo menstrual de las mujeres"}, {"start": 1766.1, "end": 1768.98, "text": " y las secuestras, las embarazas"}, {"start": 1768.98, "end": 1769.9399999999998, "text": " y luego ellas aparecen"}, {"start": 1769.9399999999998, "end": 1771.1799999999998, "text": " por eso un tipo elegant\u00edsimo"}, {"start": 1771.1799999999998, "end": 1772.58, "text": " que baila muy bien"}, {"start": 1772.58, "end": 1774.5, "text": " y luego las chicas aparecen embarazadas"}, {"start": 1774.5, "end": 1775.58, "text": " del moan."}, {"start": 1775.58, "end": 1777.54, "text": " S\u00ed, entonces"}, {"start": 1777.54, "end": 1779.4199999999998, "text": " el moan en todas partes"}, {"start": 1779.4199999999998, "end": 1780.74, "text": " es el esp\u00edritu del agua,"}, {"start": 1780.74, "end": 1782.1799999999998, "text": " el esp\u00edritu burl\u00f3n"}, {"start": 1782.1799999999998, "end": 1784.58, "text": " que se hablaba tambi\u00e9n de la mojana."}, {"start": 1784.58, "end": 1786.1, "text": " El moan tambi\u00e9n lo encontramos"}, {"start": 1786.1, "end": 1787.8999999999999, "text": " necesariamente en el huila"}, {"start": 1787.8999999999999, "end": 1789.8999999999999, "text": " y lo encontramos en el tolima"}, {"start": 1789.8999999999999, "end": 1791.58, "text": " y aqu\u00ed en onda"}, {"start": 1791.58, "end": 1793.74, "text": " lo vamos a encontrar tambi\u00e9n."}, {"start": 1793.74, "end": 1795.6599999999999, "text": " El moan es el esp\u00edritu del agua"}, {"start": 1795.66, "end": 1798.46, "text": " y como nosotros somos un pueblo tan h\u00eddrico"}, {"start": 1798.46, "end": 1801.1000000000001, "text": " y somos un pueblo de culturas anfibias"}, {"start": 1801.1000000000001, "end": 1802.94, "text": " pues nuestro pueblo es un pueblo"}, {"start": 1802.94, "end": 1805.5, "text": 
" que habita con el esp\u00edritu del moan"}, {"start": 1805.5, "end": 1807.74, "text": " que est\u00e1 en todas las culturas"}, {"start": 1807.74, "end": 1809.6200000000001, "text": " y digo que a veces fuera de nosotros"}, {"start": 1809.6200000000001, "end": 1811.5, "text": " porque hay canciones cubanas"}, {"start": 1811.5, "end": 1812.5800000000002, "text": " de saline, reutil,"}, {"start": 1812.5800000000002, "end": 1814.14, "text": " y lo que dice, yo tengo un moan"}, {"start": 1814.14, "end": 1815.46, "text": " que me acompa\u00f1a"}, {"start": 1815.46, "end": 1817.5, "text": " y me protege de la gente."}, {"start": 1817.5, "end": 1820.6200000000001, "text": " Entonces, esto es un esp\u00edritu"}, {"start": 1820.6200000000001, "end": 1822.8200000000002, "text": " de nuestra naturaleza"}, {"start": 1822.8200000000002, "end": 1825.3000000000002, "text": " que es un esp\u00edritu de mucho respeto"}, {"start": 1825.3, "end": 1827.34, "text": " y todas las diferentes fiestas"}, {"start": 1827.34, "end": 1828.54, "text": " de una u otra manera"}, {"start": 1828.54, "end": 1831.3799999999999, "text": " en las regiones donde habita las riberas"}, {"start": 1831.3799999999999, "end": 1833.74, "text": " y donde se lleva las doncellas"}, {"start": 1833.74, "end": 1835.78, "text": " a determinadas horas de la madrugada"}, {"start": 1835.78, "end": 1837.4199999999998, "text": " de la noche y donde se confunden"}, {"start": 1837.4199999999998, "end": 1838.62, "text": " las fiestas."}, {"start": 1838.62, "end": 1840.1, "text": " En todos los sitios hay"}, {"start": 1840.1, "end": 1841.8999999999999, "text": " una representaci\u00f3n del moan"}, {"start": 1841.8999999999999, "end": 1843.86, "text": " forma parte del universo m\u00edtico"}, {"start": 1843.86, "end": 1846.46, "text": " y simb\u00f3lico de las aguas de este pa\u00eds."}, {"start": 1846.46, "end": 1849.1, "text": " Entonces, pues se le rinde"}, {"start": 1849.1, "end": 1850.8999999999999, "text": " al m\u00edtico moan"}, {"start": 1850.8999999999999, "end": 1853.1, "text": " que nos atraviesa de un extremo al otro"}, {"start": 1853.1, "end": 1854.94, "text": " en todas las diferentes im\u00e1genes"}, {"start": 1854.94, "end": 1856.94, "text": " un moan que est\u00e9 desambasilio"}, {"start": 1856.94, "end": 1858.74, "text": " de palenque est\u00e1 puerto en ari\u00f1o"}, {"start": 1858.74, "end": 1861.22, "text": " y para el cual hay un tributo especial"}, {"start": 1861.22, "end": 1862.6200000000001, "text": " durante la subienda en onda,"}, {"start": 1862.6200000000001, "end": 1864.54, "text": " o sea, es un personaje"}, {"start": 1864.54, "end": 1867.22, "text": " de una capacidad de ubiquidad"}, {"start": 1867.22, "end": 1869.26, "text": " espiritual en las aguas de este pa\u00eds"}, {"start": 1869.26, "end": 1870.8600000000001, "text": " impresionante"}, {"start": 1870.8600000000001, "end": 1872.1000000000001, "text": " y le rendimos homenaje"}, {"start": 1872.1000000000001, "end": 1873.3400000000001, "text": " porque los pescadores"}, {"start": 1873.3400000000001, "end": 1876.02, "text": " le piden permiso al moan"}, {"start": 1876.02, "end": 1878.14, "text": " para poder pescar en el r\u00edo."}, {"start": 1878.14, "end": 1880.22, "text": " Los r\u00edos tienen due\u00f1os"}, {"start": 1880.22, "end": 1881.98, "text": " dicen las comunidades originarias"}, {"start": 1881.98, "end": 1883.3, "text": " y hay que pedir permiso"}, {"start": 1883.3, "end": 1884.8600000000001, "text": " para entrar en el r\u00edo"}, {"start": 1884.86, 
"end": 1886.3, "text": " hay que pedir respeto"}, {"start": 1886.3, "end": 1887.8999999999999, "text": " para entrar en el r\u00edo"}, {"start": 1887.8999999999999, "end": 1890.06, "text": " porque el r\u00edo es vivo."}, {"start": 1890.06, "end": 1892.3, "text": " Entonces, en el magdalena"}, {"start": 1892.3, "end": 1894.58, "text": " se le pide respeto al moan"}, {"start": 1894.58, "end": 1896.3, "text": " para poder entrar."}, {"start": 1896.3, "end": 1898.3, "text": " Entonces, durante la subienda"}, {"start": 1898.3, "end": 1900.54, "text": " el r\u00edo se convierte"}, {"start": 1900.54, "end": 1901.9799999999998, "text": " en el epicentro"}, {"start": 1901.9799999999998, "end": 1903.62, "text": " y todo est\u00e1 de cara al r\u00edo,"}, {"start": 1903.62, "end": 1905.06, "text": " las fiestas, la reuni\u00f3n,"}, {"start": 1905.06, "end": 1906.62, "text": " en los eventos"}, {"start": 1906.62, "end": 1907.8999999999999, "text": " durante el tiempo"}, {"start": 1907.8999999999999, "end": 1909.82, "text": " de la subienda el r\u00edo"}, {"start": 1909.82, "end": 1911.4199999999998, "text": " nunca nunca est\u00e1 solo"}, {"start": 1911.4199999999998, "end": 1913.1, "text": " y est\u00e1n ocurriendo conciertos"}, {"start": 1913.1, "end": 1914.78, "text": " y llegan los visitantes"}, {"start": 1914.78, "end": 1916.26, "text": " de las regiones"}, {"start": 1916.26, "end": 1918.26, "text": " y todo en post de la pesca"}, {"start": 1918.26, "end": 1920.46, "text": " y empieza todo este fen\u00f3meno"}, {"start": 1920.46, "end": 1921.46, "text": " de la subienda"}, {"start": 1921.46, "end": 1923.26, "text": " que va a llegar en febrero y marzo"}, {"start": 1923.26, "end": 1926.06, "text": " y que va a traer toda esta cantidad de gente"}, {"start": 1926.06, "end": 1929.06, "text": " y a partir de esta tradici\u00f3n"}, {"start": 1929.06, "end": 1932.06, "text": " y a partir de esta memoria"}, {"start": 1932.06, "end": 1934.06, "text": " de esta riqu\u00edsima memoria"}, {"start": 1934.06, "end": 1935.82, "text": " que supone onda"}, {"start": 1935.82, "end": 1938.22, "text": " y el magdalena en nuestra historia"}, {"start": 1938.22, "end": 1941.34, "text": " hay una forma de florecer"}, {"start": 1941.34, "end": 1942.7, "text": " y debe rever de ser"}, {"start": 1942.7, "end": 1944.7, "text": " que hoy est\u00e1 teniendo la ciudad de onda"}, {"start": 1944.7, "end": 1947.3, "text": " la ciudad de onda est\u00e1 recoperando su"}, {"start": 1947.3, "end": 1949.3, "text": " importancia cultural"}, {"start": 1949.3, "end": 1953.14, "text": " y ahora muchos festivales est\u00e1n alrededor de ella"}, {"start": 1953.14, "end": 1954.3, "text": " est\u00e1 el magdalena fest"}, {"start": 1954.3, "end": 1955.54, "text": " que ya lleva diez a\u00f1os"}, {"start": 1955.54, "end": 1957.5, "text": " hay un festival de Iv\u00e1n Asca"}, {"start": 1957.5, "end": 1959.78, "text": " que es un festival de mujeres moralistas"}, {"start": 1959.78, "end": 1963.1000000000001, "text": " que llegan a pintar la ciudad de onda"}, {"start": 1963.1000000000001, "end": 1965.3, "text": " hay todo un reconocimiento"}, {"start": 1965.3, "end": 1967.14, "text": " de la arquitectura de sus calles"}, {"start": 1967.14, "end": 1970.3400000000001, "text": " y uno de los eventos o de las situaciones"}, {"start": 1970.3400000000001, "end": 1972.3, "text": " que hace m\u00e1s tributo al magdalena"}, {"start": 1972.3, "end": 1974.02, "text": " que es bell\u00edsimo, que es el buceo"}, {"start": 1974.02, "end": 1975.66, "text": " 
del r\u00edo magdalena"}, {"start": 1975.66, "end": 1977.86, "text": " y el museo del r\u00edo magdalena"}, {"start": 1977.86, "end": 1980.94, "text": " es precisamente la idea de reconectar"}, {"start": 1980.94, "end": 1983.3799999999999, "text": " todo el r\u00edo con el pa\u00eds"}, {"start": 1983.3799999999999, "end": 1984.94, "text": " y con la regi\u00f3n"}, {"start": 1984.94, "end": 1986.42, "text": " y con los colombianos"}, {"start": 1986.42, "end": 1988.94, "text": " y es como una manera de visibilizar"}, {"start": 1988.94, "end": 1991.3799999999999, "text": " toda la importancia del r\u00edo"}, {"start": 1991.3799999999999, "end": 1995.26, "text": " en cada una de las etapas de nuestra historia"}, {"start": 1995.26, "end": 1997.54, "text": " ese museo en s\u00ed mismo tiene"}, {"start": 1997.54, "end": 1999.74, "text": " por un lado la mitolog\u00eda del r\u00edo"}, {"start": 1999.74, "end": 2001.74, "text": " est\u00e1 el mo\u00e1n y est\u00e1n las figuras del r\u00edo"}, {"start": 2001.74, "end": 2003.1399999999999, "text": " por el otro lado"}, {"start": 2003.14, "end": 2005.5400000000002, "text": " todo un recuerdo de la importancia"}, {"start": 2005.5400000000002, "end": 2007.0200000000002, "text": " de la navegaci\u00f3n en el magdalena"}, {"start": 2007.0200000000002, "end": 2009.18, "text": " para el desarrollo de nuestra historia"}, {"start": 2009.18, "end": 2010.98, "text": " entonces en el museo"}, {"start": 2010.98, "end": 2013.0200000000002, "text": " est\u00e1n todas las diferentes etapas"}, {"start": 2013.0200000000002, "end": 2015.7, "text": " del desarrollo de las cartas de navegaci\u00f3n"}, {"start": 2015.7, "end": 2018.0600000000002, "text": " la \u00e9poca de los capitanes"}, {"start": 2018.0600000000002, "end": 2020.1000000000001, "text": " que atravesaban en esos vapores el r\u00edo"}, {"start": 2020.1000000000001, "end": 2021.66, "text": " las bajillas que se encontraban"}, {"start": 2021.66, "end": 2022.9, "text": " en esa \u00e9poca"}, {"start": 2022.9, "end": 2024.5800000000002, "text": " los mapas"}, {"start": 2024.5800000000002, "end": 2026.98, "text": " todo lo que significaba el magdalena"}, {"start": 2026.98, "end": 2029.98, "text": " y tiene un espacio y tinerante"}, {"start": 2029.98, "end": 2031.66, "text": " de exposiciones"}, {"start": 2031.66, "end": 2032.98, "text": " en esta ocasi\u00f3n"}, {"start": 2032.98, "end": 2034.98, "text": " cuando estuvimos en la feria del libro"}, {"start": 2034.98, "end": 2037.14, "text": " y estuvimos mirando todo esta"}, {"start": 2037.14, "end": 2039.46, "text": " toda esta expectativa de onda"}, {"start": 2039.46, "end": 2041.8200000000002, "text": " con toda su milagro y su carnaval"}, {"start": 2041.8200000000002, "end": 2044.98, "text": " nos encontramos con una exposici\u00f3n"}, {"start": 2044.98, "end": 2047.1000000000001, "text": " sobre las brujas"}, {"start": 2047.1000000000001, "end": 2049.34, "text": " alrededor del r\u00edo magdalena"}, {"start": 2049.34, "end": 2050.94, "text": " una exposici\u00f3n"}, {"start": 2050.94, "end": 2054.3, "text": " que nos permit\u00eda encontrar los relatos"}, {"start": 2054.3, "end": 2057.06, "text": " entre las brujas y sus saberes"}, {"start": 2057.06, "end": 2058.9, "text": " tanto en Europa"}, {"start": 2058.9, "end": 2062.3, "text": " donde se castig\u00f3 fuertemente los saberes de las mujeres"}, {"start": 2062.3, "end": 2064.38, "text": " cuando se les sustituy\u00f3"}, {"start": 2064.38, "end": 2065.82, "text": " a las antiguas mujeres Celta"}, 
{"start": 2065.82, "end": 2067.34, "text": " si a las mujeres antiguas"}, {"start": 2067.34, "end": 2068.86, "text": " de las bikingas"}, {"start": 2068.86, "end": 2071.1800000000003, "text": " y se les quit\u00f3 el saber de las plantas"}, {"start": 2071.1800000000003, "end": 2073.06, "text": " el mismo fen\u00f3meno se produjo"}, {"start": 2073.06, "end": 2074.94, "text": " cuando lleg\u00f3 la inquisici\u00f3n"}, {"start": 2074.94, "end": 2076.98, "text": " a las tierras de Am\u00e9rica"}, {"start": 2076.98, "end": 2079.38, "text": " y persigu\u00f3 los saberes femeninos"}, {"start": 2079.38, "end": 2081.7000000000003, "text": " los saberes de las plantas, los saberes del bosque"}, {"start": 2081.7000000000003, "end": 2083.5, "text": " los saberes de la parter\u00eda"}, {"start": 2083.5, "end": 2085.86, "text": " todos los saberes que se cohab\u00edan concentrado"}, {"start": 2085.86, "end": 2087.82, "text": " en miles de a\u00f1os de tradici\u00f3n"}, {"start": 2087.82, "end": 2089.94, "text": " esta exposici\u00f3n que est\u00e1bamos viendo"}, {"start": 2089.94, "end": 2091.94, "text": " recupera esos saberes"}, {"start": 2091.94, "end": 2093.2200000000003, "text": " y la figura de las brujas"}, {"start": 2093.2200000000003, "end": 2094.98, "text": " como aquellas que se hab\u00edan revelado"}, {"start": 2094.98, "end": 2096.9, "text": " contra la dominaci\u00f3n espa\u00f1ola"}, {"start": 2096.9, "end": 2099.9, "text": " o como aquellas que pose\u00edan saberes"}, {"start": 2099.9, "end": 2101.38, "text": " muy antiguos"}, {"start": 2101.38, "end": 2102.7400000000002, "text": " que no eran comprendidos"}, {"start": 2102.7400000000002, "end": 2105.1800000000003, "text": " por esa nueva forma de mirar el mundo"}, {"start": 2105.1800000000003, "end": 2106.78, "text": " en donde todo esto"}, {"start": 2106.78, "end": 2108.9, "text": " se asimilaba a la brujer\u00eda"}, {"start": 2108.9, "end": 2110.6200000000003, "text": " es la exposici\u00f3n iterante"}, {"start": 2110.6200000000003, "end": 2112.3, "text": " que hay en este momento"}, {"start": 2112.3, "end": 2115.5, "text": " pero permanentemente hay exposiciones"}, {"start": 2115.5, "end": 2117.7000000000003, "text": " que van recuperando de muchas formas"}, {"start": 2117.7, "end": 2119.3399999999997, "text": " la memoria del r\u00edo"}, {"start": 2119.3399999999997, "end": 2120.98, "text": " porque la memoria del r\u00edo"}, {"start": 2120.98, "end": 2122.9399999999996, "text": " es completamente"}, {"start": 2122.9399999999996, "end": 2124.2599999999998, "text": " inagotable"}, {"start": 2124.2599999999998, "end": 2125.7799999999997, "text": " y tambi\u00e9n nos dec\u00eda wayday"}, {"start": 2125.7799999999997, "end": 2127.8599999999997, "text": " is que el r\u00edo nunca nos ha fallado"}, {"start": 2127.8599999999997, "end": 2129.58, "text": " que el r\u00edo siempre ha fluido ah\u00ed"}, {"start": 2129.58, "end": 2130.74, "text": " para nosotros"}, {"start": 2130.74, "end": 2132.1, "text": " que pase lo que pase"}, {"start": 2132.1, "end": 2133.5, "text": " los tiempos de lo olvid\u00f3"}, {"start": 2133.5, "end": 2135.14, "text": " los tiempos de recuerdo"}, {"start": 2135.14, "end": 2136.7, "text": " el amor en los tiempos del color"}, {"start": 2136.7, "end": 2138.1, "text": " a uno de los relatos literarios"}, {"start": 2138.1, "end": 2139.3799999999997, "text": " m\u00e1s maravillosos"}, {"start": 2139.3799999999997, "end": 2141.54, "text": " de nuestra identidad cultural"}, {"start": 2141.54, "end": 2143.18, "text": " que 
se se gran viaje"}, {"start": 2143.18, "end": 2144.66, "text": " por el magdalena"}, {"start": 2144.66, "end": 2146.3799999999997, "text": " que hacen estos dos personajes"}, {"start": 2146.38, "end": 2147.94, "text": " que el amor es posible en la veje"}, {"start": 2147.94, "end": 2149.7000000000003, "text": " sino lo fue en la juventud"}, {"start": 2149.7000000000003, "end": 2152.62, "text": " y que se entregan a la dicha"}, {"start": 2152.62, "end": 2154.06, "text": " de estar el uno al otro"}, {"start": 2154.06, "end": 2155.5, "text": " recorriendo el magdalena"}, {"start": 2155.5, "end": 2156.6600000000003, "text": " en un vapor"}, {"start": 2156.6600000000003, "end": 2158.86, "text": " cuando no pod\u00edan at llegar a ninguna orilla"}, {"start": 2158.86, "end": 2160.46, "text": " porque el pa\u00eds estaba tomado"}, {"start": 2160.46, "end": 2161.6600000000003, "text": " por el c\u00f3lera"}, {"start": 2161.6600000000003, "end": 2164.9, "text": " entonces todo esta gran memoria"}, {"start": 2164.9, "end": 2165.9, "text": " desde el mo\u00e1n"}, {"start": 2165.9, "end": 2167.46, "text": " hasta las bocas de ceniza"}, {"start": 2167.46, "end": 2169.7400000000002, "text": " desde el amor en los tiempos del c\u00f3lera"}, {"start": 2169.7400000000002, "end": 2171.7000000000003, "text": " hasta los vapores"}, {"start": 2171.7000000000003, "end": 2172.7000000000003, "text": " por el magdalena"}, {"start": 2172.7000000000003, "end": 2175.5, "text": " desde el florecimiento cultural"}, {"start": 2175.5, "end": 2176.94, "text": " de la regi\u00f3n ahora"}, {"start": 2176.94, "end": 2178.02, "text": " entre talleres"}, {"start": 2178.02, "end": 2179.02, "text": " entre historias"}, {"start": 2179.02, "end": 2180.54, "text": " entre imprentas"}, {"start": 2180.54, "end": 2181.98, "text": " entre serbeser\u00edas"}, {"start": 2181.98, "end": 2182.62, "text": " que fueron"}, {"start": 2182.62, "end": 2184.22, "text": " antes grandiosas"}, {"start": 2184.22, "end": 2185.9, "text": " industrias que ya no est\u00e1n"}, {"start": 2185.9, "end": 2187.5, "text": " pero que se recuerdan"}, {"start": 2187.5, "end": 2189.18, "text": " todo un florecimiento cultural"}, {"start": 2189.18, "end": 2190.1, "text": " que en este momento"}, {"start": 2190.1, "end": 2191.7, "text": " le devuelve a onda"}, {"start": 2191.7, "end": 2193.42, "text": " su antigua importancia"}, {"start": 2193.42, "end": 2195.74, "text": " en la memoria y en la historia"}, {"start": 2195.74, "end": 2197.62, "text": " y el compromiso fundamental"}, {"start": 2197.62, "end": 2198.82, "text": " de mirar al magdalena"}, {"start": 2198.82, "end": 2200.54, "text": " para poder nos mirar"}, {"start": 2200.54, "end": 2201.98, "text": " de entender el magdalena"}, {"start": 2201.98, "end": 2203.82, "text": " para poder nos entender"}, {"start": 2203.82, "end": 2205.82, "text": " de sanear el magdalena"}, {"start": 2205.82, "end": 2206.86, "text": " para poder sanear nuestro coraz\u00f3n"}, {"start": 2206.86, "end": 2208.1800000000003, "text": " y nuestra alma"}, {"start": 2208.1800000000003, "end": 2209.7000000000003, "text": " porque es ah\u00ed en ese r\u00edo"}, {"start": 2209.7000000000003, "end": 2212.42, "text": " donde reside el esp\u00edritu de este pueblo"}, {"start": 2212.42, "end": 2214.1800000000003, "text": " y es en ese r\u00edo donde sucede"}, {"start": 2214.1800000000003, "end": 2216.06, "text": " el milagro de la subienda"}, {"start": 2216.06, "end": 2217.5, "text": " y este manar riberenio"}, {"start": 2217.5, "end": 
2220.02, "text": " nos cuenta la historia de los bocas"}, {"start": 2220.02, "end": 2221.3, "text": " y de todo esta gente"}, {"start": 2221.3, "end": 2222.7400000000002, "text": " que hace posible"}, {"start": 2222.7400000000002, "end": 2224.98, "text": " que el milagro se vuelva alimento"}, {"start": 2224.98, "end": 2226.38, "text": " es necesario"}, {"start": 2226.38, "end": 2227.5, "text": " cuidar el r\u00edo"}, {"start": 2227.5, "end": 2228.9, "text": " porque hay pesca industrial"}, {"start": 2228.9, "end": 2230.1000000000004, "text": " que hace que la subienda"}, {"start": 2230.1000000000004, "end": 2231.98, "text": " no llegue en la misma cantidad"}, {"start": 2231.98, "end": 2233.78, "text": " que llegaba en otras \u00e9pocas"}, {"start": 2233.78, "end": 2235.78, "text": " y no genere la misma prosperidad"}, {"start": 2235.78, "end": 2237.78, "text": " es un istos de inibilidad de vida"}, {"start": 2237.78, "end": 2239.46, "text": " que hab\u00eda en el pasado"}, {"start": 2239.46, "end": 2241.3, "text": " es necesario que haya relajos"}, {"start": 2241.3, "end": 2243.86, "text": " generacionales en el festival de la subienda"}, {"start": 2243.86, "end": 2246.34, "text": " para que esta tradici\u00f3n tan antigua"}, {"start": 2246.34, "end": 2249.34, "text": " y tan poderosa se mantenga en el tiempo"}, {"start": 2249.34, "end": 2250.5400000000004, "text": " la subienda del festival"}, {"start": 2250.5400000000004, "end": 2251.7000000000003, "text": " no se pudo hacer"}, {"start": 2251.7000000000003, "end": 2253.3, "text": " durante la pandemia"}, {"start": 2253.3, "end": 2255.1000000000004, "text": " y no se pudo hacer este a\u00f1o"}, {"start": 2255.1000000000004, "end": 2256.5800000000004, "text": " porque hubo un pico"}, {"start": 2256.5800000000004, "end": 2258.5800000000004, "text": " de COVID en el momento justo"}, {"start": 2258.5800000000004, "end": 2259.82, "text": " en que ven\u00eda el festival"}, {"start": 2259.82, "end": 2260.98, "text": " por el carnaval de Barrequilla"}, {"start": 2260.98, "end": 2262.6600000000003, "text": " se tuvo que posponer"}, {"start": 2262.66, "end": 2263.66, "text": " pero de la subienda"}, {"start": 2263.66, "end": 2264.66, "text": " pues no se puede posponer"}, {"start": 2264.66, "end": 2266.66, "text": " porque el de la subienda es cuando sube"}, {"start": 2266.66, "end": 2268.66, "text": " cuando suven los peces"}, {"start": 2268.66, "end": 2270.66, "text": " entonces no como est\u00e1 tan ligado"}, {"start": 2270.66, "end": 2271.66, "text": " al fen\u00f3meno natural"}, {"start": 2271.66, "end": 2272.66, "text": " pues no"}, {"start": 2272.66, "end": 2274.66, "text": " no podemos pasar la fecha por otro lado"}, {"start": 2274.66, "end": 2276.66, "text": " porque es cuando llegan los peces"}, {"start": 2276.66, "end": 2278.66, "text": " a diferencia de otros festivales"}, {"start": 2278.66, "end": 2279.8999999999996, "text": " que o sea"}, {"start": 2279.8999999999996, "end": 2281.66, "text": " trasladado en fechas"}, {"start": 2281.66, "end": 2283.66, "text": " o sucede en cualquier momento"}, {"start": 2283.66, "end": 2284.66, "text": " como el de la tigra"}, {"start": 2284.66, "end": 2285.66, "text": " que sucede cuando se puede"}, {"start": 2285.66, "end": 2287.66, "text": " pero durante el a\u00f1o"}, {"start": 2287.66, "end": 2289.66, "text": " este toca cuando es porque es por eso"}, {"start": 2289.66, "end": 2290.66, "text": " entonces no lo puede mostrar"}, {"start": 2290.66, "end": 2291.66, "text": " a la gran ning\u00fan 
otro lugar"}, {"start": 2291.66, "end": 2293.66, "text": " pero es un festival"}, {"start": 2293.66, "end": 2294.66, "text": " que nos acuerda"}, {"start": 2294.66, "end": 2296.66, "text": " la generosidad tan impresionante"}, {"start": 2296.66, "end": 2297.66, "text": " de la naturaleza con nosotros"}, {"start": 2297.66, "end": 2298.66, "text": " los egipcios"}, {"start": 2298.66, "end": 2300.66, "text": " cuando dominaron el r\u00edo"}, {"start": 2300.66, "end": 2302.66, "text": " ni se convirtieron en imperio"}, {"start": 2302.66, "end": 2304.66, "text": " ellos consideraban que el r\u00edo"}, {"start": 2304.66, "end": 2305.66, "text": " ni lo era sagrado"}, {"start": 2305.66, "end": 2307.66, "text": " porque algo que les daba"}, {"start": 2307.66, "end": 2308.66, "text": " tantas cosas"}, {"start": 2308.66, "end": 2309.66, "text": " tan maravillosas"}, {"start": 2309.66, "end": 2311.66, "text": " desde el limo"}, {"start": 2311.66, "end": 2312.66, "text": " las crecientes"}, {"start": 2312.66, "end": 2314.66, "text": " y todo ten\u00eda que ser un dios"}, {"start": 2314.66, "end": 2315.66, "text": " como de otra manera"}, {"start": 2315.66, "end": 2317.66, "text": " podr\u00edan interpretar los egipcios"}, {"start": 2317.66, "end": 2318.66, "text": " el milagro del dilo"}, {"start": 2318.66, "end": 2320.66, "text": " que hace posible todo el imperio"}, {"start": 2320.66, "end": 2322.66, "text": " pues el maigdalena es un milagro"}, {"start": 2322.66, "end": 2324.66, "text": " y hay que entenderlo como tal"}, {"start": 2324.66, "end": 2326.66, "text": " y el sitio donde se entiende"}, {"start": 2326.66, "end": 2328.66, "text": " plenamente"}, {"start": 2328.66, "end": 2330.66, "text": " el milagro del maigdalena"}, {"start": 2330.66, "end": 2332.66, "text": " como la mayor generosidad"}, {"start": 2332.66, "end": 2334.66, "text": " de la naturaleza con nosotros"}, {"start": 2334.66, "end": 2336.66, "text": " es en el carnaval de la subienda"}, {"start": 2336.66, "end": 2337.66, "text": " y por eso"}, {"start": 2337.66, "end": 2339.66, "text": " contamos las historias del carnaval"}, {"start": 2339.66, "end": 2340.66, "text": " de la subienda"}, {"start": 2340.66, "end": 2342.66, "text": " para mirar a los pescadores"}, {"start": 2342.66, "end": 2343.66, "text": " y para mirar"}, {"start": 2343.66, "end": 2344.66, "text": " desde el esp\u00edritu"}, {"start": 2344.66, "end": 2345.66, "text": " la geograf\u00eda"}, {"start": 2345.66, "end": 2346.66, "text": " la historia"}, {"start": 2346.66, "end": 2348.66, "text": " la misma resistencia"}, {"start": 2348.66, "end": 2350.66, "text": " que ha tenido"}, {"start": 2350.66, "end": 2351.66, "text": " la fidelidad"}, {"start": 2351.66, "end": 2352.66, "text": " y la compa\u00f1\u00eda"}, {"start": 2352.66, "end": 2353.66, "text": " que nos hace este r\u00edo"}, {"start": 2353.66, "end": 2355.66, "text": " cuya recuperaci\u00f3n"}, {"start": 2355.66, "end": 2356.66, "text": " es tambi\u00e9n la recuperaci\u00f3n"}, {"start": 2356.66, "end": 2358.66, "text": " de nuestro propio esp\u00edritu"}, {"start": 2358.66, "end": 2360.66, "text": " y de nuestra propia alma"}, {"start": 2360.66, "end": 2361.66, "text": " por lo tanto"}, {"start": 2361.66, "end": 2362.66, "text": " el r\u00edo maigdalena"}, {"start": 2362.66, "end": 2363.66, "text": " tiene una banda sonora"}, {"start": 2363.66, "end": 2365.66, "text": " tarea y menda"}, {"start": 2365.66, "end": 2367.66, "text": " y todo el mundo le ha cantado"}, {"start": 2367.66, "end": 2368.66, "text": 
" al maigdalena"}, {"start": 2368.66, "end": 2371.66, "text": " desde la piragua de Guillermo Cuvillos"}, {"start": 2371.66, "end": 2372.66, "text": " desde Jos\u00e9"}, {"start": 2372.66, "end": 2373.66, "text": " pues de Jos\u00e9 Varrio"}, {"start": 2373.66, "end": 2374.66, "text": " de Bejo"}, {"start": 2374.66, "end": 2375.66, "text": " Jorge Villamil"}, {"start": 2375.66, "end": 2376.66, "text": " bueno, toda la m\u00fasica"}, {"start": 2376.66, "end": 2377.66, "text": " del 8\u00ba Hermud\u00e9s"}, {"start": 2377.66, "end": 2378.66, "text": " inaugurando"}, {"start": 2378.66, "end": 2380.66, "text": " pa\u00eds por todo el maigdalena"}, {"start": 2380.66, "end": 2382.66, "text": " que fue donde se construy\u00f3"}, {"start": 2382.66, "end": 2384.66, "text": " todas las historias"}, {"start": 2384.66, "end": 2385.66, "text": " de los pescadores"}, {"start": 2385.66, "end": 2386.66, "text": " que hablan con la luna"}, {"start": 2386.66, "end": 2388.66, "text": " que hablan con la playa"}, {"start": 2388.66, "end": 2389.66, "text": " que no tienen fortuna"}, {"start": 2389.66, "end": 2390.66, "text": " solo su batarraya"}, {"start": 2390.66, "end": 2393.66, "text": " hay toda una secuencia"}, {"start": 2393.66, "end": 2395.66, "text": " musical de seguimiento"}, {"start": 2395.66, "end": 2397.66, "text": " del maigdalena"}, {"start": 2397.66, "end": 2399.66, "text": " que atraviesa la m\u00fasica"}, {"start": 2399.66, "end": 2401.66, "text": " de todos los departamentos"}, {"start": 2401.66, "end": 2402.66, "text": " a cuya vera"}, {"start": 2402.66, "end": 2404.66, "text": " fluye el r\u00edo"}, {"start": 2404.66, "end": 2405.66, "text": " entonces"}, {"start": 2405.66, "end": 2407.66, "text": " es tambi\u00e9n parte"}, {"start": 2407.66, "end": 2408.66, "text": " de nuestra m\u00fasica"}, {"start": 2408.66, "end": 2411.66, "text": " es parte de nuestra literatura"}, {"start": 2411.66, "end": 2414.66, "text": " es parte de nuestra historia"}, {"start": 2414.66, "end": 2416.66, "text": " es parte de nuestra geolog\u00eda"}, {"start": 2416.66, "end": 2419.66, "text": " es parte de nuestra geograf\u00eda"}, {"start": 2419.66, "end": 2421.66, "text": " es parte de nuestro relato"}, {"start": 2421.66, "end": 2423.66, "text": " como pa\u00eds"}, {"start": 2423.66, "end": 2426.66, "text": " y el que celebra todo eso"}, {"start": 2426.66, "end": 2428.66, "text": " es el festival de la subienda"}, {"start": 2428.66, "end": 2430.66, "text": " entonces esa fiesta"}, {"start": 2430.66, "end": 2431.66, "text": " es tan especial"}, {"start": 2431.66, "end": 2433.66, "text": " porque es una manera"}, {"start": 2433.66, "end": 2435.66, "text": " de poner nuestros ojos"}, {"start": 2435.66, "end": 2437.66, "text": " con fieras y fiestas"}, {"start": 2437.66, "end": 2439.66, "text": " sobre aquello que nos da"}, {"start": 2439.66, "end": 2441.66, "text": " el car\u00e1cter hist\u00f3rico"}, {"start": 2441.66, "end": 2443.66, "text": " de nuestra presencia"}, {"start": 2443.66, "end": 2445.66, "text": " en el mapa que es el r\u00edo"}, {"start": 2445.66, "end": 2447.66, "text": " maigdalena"}, {"start": 2447.66, "end": 2450.66, "text": " en el r\u00edo la naci\u00f3n r\u00edo"}, {"start": 2450.66, "end": 2453.66, "text": " la naci\u00f3n de Colombia"}, {"start": 2453.66, "end": 2455.66, "text": " en sus aguas de la insuido"}, {"start": 2455.66, "end": 2459.66, "text": " en el remarcial de Bogl\u00e1n"}, {"start": 2459.66, "end": 2462.66, "text": " con ellos el rey del r\u00edo"}, {"start": 2462.66, 
"end": 2464.66, "text": " maigle y roca tena"}, {"start": 2464.66, "end": 2467.66, "text": " es el mar el perejito"}, {"start": 2467.66, "end": 2470.66, "text": " y el r\u00edo es el mar de la r\u00edo"}, {"start": 2470.66, "end": 2473.66, "text": " antes en la sianista"}, {"start": 2473.66, "end": 2476.66, "text": " con los bienos pescadores"}, {"start": 2476.66, "end": 2479.66, "text": " tambi\u00e9n van a pescadores"}, {"start": 2479.66, "end": 2482.66, "text": " en el guante estarrejido"}, {"start": 2482.66, "end": 2485.66, "text": " y sencaros pescadores"}, {"start": 2485.66, "end": 2487.66, "text": " les escondi\u00f3 la terra rey"}, {"start": 2487.66, "end": 2490.66, "text": " por confiado por los bols"}, {"start": 2490.66, "end": 2493.66, "text": " y el tevaro su amala"}, {"start": 2507.66, "end": 2509.66, "text": " y los rey son del r\u00edo"}, {"start": 2509.66, "end": 2512.66, "text": " a parecia como maglia"}, {"start": 2512.66, "end": 2515.66, "text": " con un tava consentido"}, {"start": 2515.66, "end": 2518.66, "text": " y su padre guiera la r\u00edo"}, {"start": 2518.66, "end": 2521.66, "text": " la r\u00edo de la r\u00edo"}, {"start": 2521.66, "end": 2523.66, "text": " y su padre guiera la r\u00edo"}, {"start": 2523.66, "end": 2525.66, "text": " y su padre guiera la r\u00edo"}, {"start": 2525.66, "end": 2528.66, "text": " y su padre guiera la r\u00edo"}, {"start": 2528.66, "end": 2530.66, "text": " en sus r\u00edleros"}, {"start": 2530.66, "end": 2533.66, "text": " y su padre guiera la r\u00edo"}, {"start": 2533.66, "end": 2535.66, "text": " y su padre guiera la r\u00edo"}, {"start": 2535.66, "end": 2537.66, "text": " entonces desde el punto de vista musical"}, {"start": 2537.66, "end": 2539.66, "text": " pues esto es inagotable"}, {"start": 2539.66, "end": 2541.66, "text": " porque todo es de una u otra manera"}, {"start": 2541.66, "end": 2543.66, "text": " le cantan al maigdalena"}, {"start": 2543.66, "end": 2546.66, "text": " le cantan a todas su historia"}, {"start": 2546.66, "end": 2548.66, "text": " su m\u00edtico recorrido"}, {"start": 2548.66, "end": 2551.66, "text": " los puentes sobre el maigdalena"}, {"start": 2551.66, "end": 2554.66, "text": " eso para nosotros como comunica todo el pa\u00eds"}, {"start": 2554.66, "end": 2556.66, "text": " entonces uno se lo va a encontrar"}, {"start": 2556.66, "end": 2557.66, "text": " en todas partes"}, {"start": 2557.66, "end": 2558.66, "text": " vayad donde vayas"}, {"start": 2558.66, "end": 2559.66, "text": " se lo va a encontrar"}, {"start": 2559.66, "end": 2561.66, "text": " porque el maigdalena es muy grande"}, {"start": 2561.66, "end": 2563.66, "text": " tiene esta enorme cuenca con el cauca"}, {"start": 2563.66, "end": 2565.66, "text": " que ahora se considera"}, {"start": 2565.66, "end": 2567.66, "text": " en nuestra precisi\u00f3n geogr\u00e1fica"}, {"start": 2567.66, "end": 2569.66, "text": " una gran cuenca la del cauca y el maigdalena"}, {"start": 2569.66, "end": 2571.66, "text": " corren paralelos"}, {"start": 2571.66, "end": 2573.66, "text": " comunicando este pa\u00eds"}, {"start": 2573.66, "end": 2574.66, "text": " inarrando los historias"}, {"start": 2574.66, "end": 2576.66, "text": " el cauca tiene los suy"}, {"start": 2576.66, "end": 2577.66, "text": " eso ya es otra historia"}, {"start": 2577.66, "end": 2579.66, "text": " pero estamos en la suyenda"}, {"start": 2579.66, "end": 2580.66, "text": " que es en el maigdalena"}, {"start": 2580.66, "end": 2583.66, "text": " por eso nos concentramos en el r\u00edo 
maigdalena"}, {"start": 2583.66, "end": 2585.66, "text": " pero son los r\u00edos"}, {"start": 2585.66, "end": 2588.66, "text": " los que fluyen con nuestra manera"}, {"start": 2588.66, "end": 2591.66, "text": " de considerarnos pa\u00eds"}, {"start": 2591.66, "end": 2593.66, "text": " y de habernos articulado"}, {"start": 2593.66, "end": 2595.66, "text": " alrededor de toda esta"}, {"start": 2595.66, "end": 2596.66, "text": " milagro h\u00eddrico"}, {"start": 2596.66, "end": 2598.66, "text": " que nos atraviesa de un lado al otro"}, {"start": 2598.66, "end": 2599.66, "text": " y que est\u00e1 en escaso"}, {"start": 2599.66, "end": 2601.66, "text": " ni siquiera nuestra propia am\u00e9rica"}, {"start": 2601.66, "end": 2603.66, "text": " es as\u00ed de irrigada de grandes r\u00edos"}, {"start": 2603.66, "end": 2605.66, "text": " nosotros somos un privilegio"}, {"start": 2605.66, "end": 2608.66, "text": " dentro de pueblos que tienen grand\u00edsimos"}, {"start": 2608.66, "end": 2609.66, "text": " desiertos"}, {"start": 2609.66, "end": 2611.66, "text": " nosotros tenemos ac\u00e1 esta cantidad de r\u00edos"}, {"start": 2611.66, "end": 2613.66, "text": " y entre todos los r\u00edos"}, {"start": 2613.66, "end": 2615.66, "text": " el rey nuestro es el maigdalena"}, {"start": 2615.66, "end": 2618.66, "text": " entonces entre toda la m\u00fasica"}, {"start": 2618.66, "end": 2620.66, "text": " entre la literatura que les digo"}, {"start": 2620.66, "end": 2622.66, "text": " del amor y los tiempos del colera"}, {"start": 2622.66, "end": 2623.66, "text": " aprovechamos esta"}, {"start": 2623.66, "end": 2625.66, "text": " este programa para que se lo vuelvan a leer"}, {"start": 2625.66, "end": 2627.66, "text": " los que se lo leyaron"}, {"start": 2627.66, "end": 2629.66, "text": " o empiecen por disfrutarlo"}, {"start": 2629.66, "end": 2631.66, "text": " cuando dicen era inevitable"}, {"start": 2631.66, "end": 2633.66, "text": " el olor al vender a las amargas"}, {"start": 2633.66, "end": 2634.66, "text": " siempre le tra\u00eda la memoria"}, {"start": 2634.66, "end": 2636.66, "text": " a los amores contrariados"}, {"start": 2636.66, "end": 2638.66, "text": " y empezar por ah\u00ed"}, {"start": 2638.66, "end": 2640.66, "text": " a vivir el amor en los tiempos del colera"}, {"start": 2640.66, "end": 2642.66, "text": " y recorrer este maigdalena"}, {"start": 2642.66, "end": 2645.66, "text": " y recuperar en esa narrativa"}, {"start": 2645.66, "end": 2647.66, "text": " la memoria de este maravilloso r\u00edo"}, {"start": 2647.66, "end": 2649.66, "text": " y en toda la m\u00fasica"}, {"start": 2649.66, "end": 2650.66, "text": " la memoria musical"}, {"start": 2650.66, "end": 2652.66, "text": " que nos une al r\u00edo"}, {"start": 2652.66, "end": 2654.66, "text": " y en la mitolog\u00eda del mo\u00e1n"}, {"start": 2654.66, "end": 2657.66, "text": " y en la maravilla de su recorrido"}, {"start": 2657.66, "end": 2658.66, "text": " en sus historias"}, {"start": 2658.66, "end": 2660.66, "text": " a veces tenembrosas, a veces ser\u00f3icas"}, {"start": 2660.66, "end": 2662.66, "text": " siempre resilentes"}, {"start": 2662.66, "end": 2664.66, "text": " de un r\u00edo que nos ha visto"}, {"start": 2664.66, "end": 2666.66, "text": " en la serie y nos ha visto crecer"}, {"start": 2666.66, "end": 2668.66, "text": " y nos va a existir como pa\u00eds"}, {"start": 2668.66, "end": 2670.66, "text": " y a cuya vera y a cuya memoria"}, {"start": 2670.66, "end": 2672.66, "text": " y a cuya esp\u00edritu"}, {"start": 
2672.66, "end": 2674.66, "text": " rinde particular honor"}, {"start": 2674.66, "end": 2676.66, "text": " el carnaval de la suyenda"}, {"start": 2676.66, "end": 2682.66, "text": " entonces desde el milagro de la naturaleza"}, {"start": 2682.66, "end": 2684.66, "text": " que es el r\u00edo maigdalena"}, {"start": 2684.66, "end": 2687.66, "text": " desde el manar riberenio"}, {"start": 2687.66, "end": 2688.66, "text": " que es la suyenda"}, {"start": 2688.66, "end": 2690.66, "text": " desde los pescadores"}, {"start": 2690.66, "end": 2692.66, "text": " desde las atarrallas"}, {"start": 2692.66, "end": 2694.66, "text": " desde los congos"}, {"start": 2694.66, "end": 2696.66, "text": " desde la m\u00fasica"}, {"start": 2696.66, "end": 2698.66, "text": " desde todo lo que significa"}, {"start": 2698.66, "end": 2700.66, "text": " para nosotros el maigdalena"}, {"start": 2700.66, "end": 2702.66, "text": " desde su gran esp\u00edritu"}, {"start": 2702.66, "end": 2704.66, "text": " desde la mirada de todos los que lo han recorrido"}, {"start": 2704.66, "end": 2706.66, "text": " y lo han amado desde los tiempos"}, {"start": 2706.66, "end": 2708.66, "text": " del gran desarrollo"}, {"start": 2708.66, "end": 2710.66, "text": " de la ciudad y de los pueblos"}, {"start": 2710.66, "end": 2712.66, "text": " y del pa\u00eds alrededor del maigdalena"}, {"start": 2712.66, "end": 2714.66, "text": " de las rutas de los vapores"}, {"start": 2714.66, "end": 2716.66, "text": " de todo lo que llegaba por la maigdalena"}, {"start": 2716.66, "end": 2718.66, "text": " de la gran migraci\u00f3n"}, {"start": 2718.66, "end": 2719.66, "text": " de los \u00e1rabes"}, {"start": 2719.66, "end": 2721.66, "text": " de las antig\u00fcas ciudades coloniales"}, {"start": 2721.66, "end": 2723.66, "text": " de la \u00e9poca del imperio espa\u00f1ol"}, {"start": 2723.66, "end": 2724.66, "text": " del tiempo de la rep\u00fablica"}, {"start": 2724.66, "end": 2726.66, "text": " del nacimiento de las ideas"}, {"start": 2726.66, "end": 2728.66, "text": " de la belleza de la ciudad de Onda"}, {"start": 2728.66, "end": 2732.66, "text": " y del milagro fant\u00e1stico de la suyenda"}, {"start": 2732.66, "end": 2734.66, "text": " \u00fanica y particular"}, {"start": 2734.66, "end": 2737.66, "text": " situaci\u00f3n geogr\u00e1fica y geol\u00f3gica"}, {"start": 2737.66, "end": 2739.66, "text": " que nos brinda en su infinita riqueza"}, {"start": 2739.66, "end": 2740.66, "text": " el maigdalena"}, {"start": 2740.66, "end": 2743.66, "text": " y desde todo lo que esto se reconoce"}, {"start": 2743.66, "end": 2745.66, "text": " en el carnaval de la suyenda"}, {"start": 2745.66, "end": 2747.66, "text": " en la narraci\u00f3n de Anaur\u00edbe"}, {"start": 2759.66, "end": 2761.66, "text": " Este podcast fue posible"}, {"start": 2761.66, "end": 2763.66, "text": " gracias al equipo de la Casa de Historia"}, {"start": 2763.66, "end": 2765.66, "text": " de Anasoares, Milenabel Tr\u00e1n,"}, {"start": 2765.66, "end": 2767.66, "text": " Arturo Jim\u00e9nez Vigna,"}, {"start": 2767.66, "end": 2769.66, "text": " Daniel Moreno Franco,"}, {"start": 2769.66, "end": 2771.66, "text": " grabado en los gatos estudio"}, {"start": 2771.66, "end": 2773.66, "text": " la edici\u00f3n y la musicalizaci\u00f3n"}, {"start": 2773.66, "end": 2775.66, "text": " de Eduardo Corredor Fonseca"}, {"start": 2775.66, "end": 2777.66, "text": " de Rueda sonido"}, {"start": 2777.66, "end": 2779.66, "text": " y contamos con Daniel Schradz"}, {"start": 2779.66, "end": 2781.66, "text": 
" que est\u00e1 con nosotros"}, {"start": 2781.66, "end": 2783.66, "text": " acompa\u00f1\u00e1ndonos de Aquena adelante"}, {"start": 2783.66, "end": 2784.66, "text": " en ferias y fiestas"}, {"start": 2784.66, "end": 2785.66, "text": " y que lo introducimos"}, {"start": 2785.66, "end": 2787.66, "text": " en nuestro relato con mucha alegr\u00eda"}, {"start": 2787.66, "end": 2789.66, "text": " para construir esta historia"}, {"start": 2789.66, "end": 2792.66, "text": " contamos con el testimonio"}, {"start": 2792.66, "end": 2794.66, "text": " del maestro Tiberio Murcia"}, {"start": 2794.66, "end": 2796.66, "text": " con las historias de \u00c1ngel Imoreno"}, {"start": 2796.66, "end": 2798.66, "text": " con Germ\u00e1n Ferro"}, {"start": 2798.66, "end": 2800.66, "text": " y su creaci\u00f3n del museo"}, {"start": 2800.66, "end": 2802.66, "text": " que esto es un relato del maigdalena"}, {"start": 2802.66, "end": 2804.66, "text": " con Wade Aves"}, {"start": 2804.66, "end": 2805.66, "text": " que nos hizo semejante"}, {"start": 2805.66, "end": 2807.66, "text": " charla tan maravillosa"}, {"start": 2807.66, "end": 2809.66, "text": " con todas estas historias de la gente"}, {"start": 2809.66, "end": 2811.66, "text": " que est\u00e1 construyendo en este momento"}, {"start": 2811.66, "end": 2814.66, "text": " patrimonio, ferias, actividades,"}, {"start": 2814.66, "end": 2816.66, "text": " historias y que nos ayudan desde los murales"}, {"start": 2816.66, "end": 2818.66, "text": " desde las ferias del libro"}, {"start": 2818.66, "end": 2819.66, "text": " desde todo esto"}, {"start": 2819.66, "end": 2822.66, "text": " a mirar ese punto tan rico"}, {"start": 2822.66, "end": 2825.66, "text": " de nuestra historia, de nuestra geograf\u00eda"}, {"start": 2825.66, "end": 2827.66, "text": " y de nuestro mito y nuestra leyenda"}, {"start": 2827.66, "end": 2829.66, "text": " y de nuestro forclero"}, {"start": 2829.66, "end": 2831.66, "text": " y de nuestras ferias que es el carnaval"}, {"start": 2831.66, "end": 2833.66, "text": " y siempre con la ayuda fuerte"}, {"start": 2833.66, "end": 2836.66, "text": " y poderosa de Santiago Espinoza Uribe"}, {"start": 2836.66, "end": 2837.66, "text": " y Laura Rojas Aponte"}, {"start": 2837.66, "end": 2862.66, "text": " de las pocas cosas de internet"}]
Diana Uribe
https://www.youtube.com/watch?v=kcP8uzJ-OHc
Festival Iberoamericano de Teatro
#podcastdianauribe Bogotá celebra alrededor de la cultura. El invitado en esta ocasión es el Festival Iberoamericano de Teatro, una fiesta que cada dos años convoca a la ciudad y al país. Hablaremos de las historias que hay alrededor de un evento que verdaderamente expresó «un acto de fe». Contaremos del teatro en Colombia, de los duros años 80 en nuestro país, de un evento que despertó la esperanza en una ciudad y de la labor titánica de Fanny Mikey. Notas del episodio La Fiesta de Corpus, «el carnaval bogotano» que el tiempo olvidó →https://www.senalmemoria.co/articulos/corpus-christi-fiesta-religiosa Algunas historias del teatro en Colombia →https://www.banrepcultural.org/biblioteca-virtual/credencial-historia/numero-198/el-teatro-en-colombia-en-el-siglo-xx Fanny Mikey, una historia de vida en las tablas →https://www.cultura.gob.ar/fanny-mikey-la-reina-de-las-tablas_5222/ 7 hitos del Festival Iberoamericano de Teatro →https://www.senalcolombia.tv/cultura/festival-iberroamericano-teatro-historia La vuelta al mundo a través del teatro →https://librepensador.uexternado.edu.co/la-vuelta-al-mundo-con-el-festival-de-teatro/ Y aquí les dejamos la página oficial del Festival Iberoamericano de Teatro →https://festivaldeteatro.com.co/ ¡Síguenos en nuestras Redes Sociales! Facebook: https://www.facebook.com/DianaUribe.fm/ Instagram: https://www.instagram.com/dianauribefm/?hl=es-la Twitter: https://twitter.com/dianauribefm?lang=es Página web: https://www.dianauribe.fm
Hoy vamos a ver uno de los eventos con los que, digamos, más personalmente estoy ligada de todas las ferias y fiestas porque lo he vivido desde el principio ha sido una parte fundamental de la vida mía y de la vida de la ciudad, el Festival Europea de Teatro, de Bogotá. Primero llamado, ya pueden empezar a la sala. Esto tiene una serie de antecedentes. Por un lado, el hecho de que nosotros fuéramos entre comillas unas ciudades fiestas porque tuvimos un Festival de Corpus Christi, una fiesta de San Pedro, un fiesta de la Vigia de la Candelaria, pero todas esas fiestas poco a poco fueron desapareciendo con la llegada del siglo XX, nos quedamos infiestas y eso es una pérdida lamentable para cualquier ciudad después de todas las cosas maravillosas que hemos visto. Último llamado, la obra está pronto de comenzar. Fálic Miqui y la Viroz Orio empiezan a crear el Festival Europea de Teatro y nosotros vamos a tener dos grandes fiestas, el Festival Europea de Teatro y Roca al Parque. Y eso irrumpe en la ciudad de una manera increíble. Por un lado, está el hecho de que Colombia como país y Bogotá como ciudad tienen una profunda y gran de tradición de Teatro. Nosotros hemos tenido una tradición de Teatro muy grande con Teatros como la Candelaria Teatros como el Teatro libre Bogotá, que era la facultad de los estudiantes de la Facultad de Filosofía y letras de la Universidad de los Antes que fundaron este Teatro después de una vuelta del Grenés 73, como el Teatro TPP que fue tan importante, digamos como los Teatros originales en Bogotá, un también el Tec que va a estar muy ligado también a esta historia por la participación de Fan y Miqui en el Tec, Teatro Experimental de Cali enrique buena aventura. También vamos a tener el Teatro Matacán de las de Medellín importantísimo, el Festival de Teatro de Manizales, fundado en 1968. Entonces, el Teatro aquí es una tradición profunda y es una tradición que permeó mucho tiempo la televisión, el nacimiento de la televisión estuvo muy ligado al Teatro. Durante mucho tiempo, un programa que se llama Teatro Popular Caracol que fue donde Jaime Botero Gomes, el papá de María Cecilia Botero, fue el que presentó las obras de Tennessee Williams, el Sológico de Cristal y muchos de los grandes clásicos del Teatro, fueron importantes en ese momento, Fausto Cabrera, recitaba la poesía de la Guerra Civil Española y los Podemos de León Felipe, digamos, y el radio Teatro, el Teatro siempre ha formado parte de nuestra escena, de nuestra cultura, de nuestra percepción del arte, siempre ha estado presente. En un Festival de Libro Americano de Teatro, nuestra era algo completamente ajeno, en país que tiene y ha tenido una importante tradición teatral, el acto latino, o sea, bueno, muchísimos grupos de esa época, después surgirían otros como Mapa Teatro, y hay unos que son incunables, maravillosos y fantásticos, la nivel uladora. Los nombres, porque hay que rendir una menaje muy importante con este capítulo al Teatro en Colombia, porque ellos nos han inspirado desde siempre. Entonces llega un Festival de Teatro, tampoco es que uno diga que fue la primera, es que nos metimos en una sala, no? Aquí había un Teatro importante, pero hay una conflucción de factores que hace el que el Festival de Americano del Teatro se convierte en un milagro, y es que cuando llega el Festival de Teatro, cuando Fanny McKee de quien hablaremos profusamente ya quien está dedicado este capítulo con todo el amor, respeto y memoria, porque esa mujer nos cambió la vida. 
Esa mujer, Mauricio, tiene una huella absolutamente indeleble entre nosotros. Llega y organiza el primer Festival de Teatro, en ese momento nos nos estábamos en una coyuntura, que eran las guerras del narcotráfico. Las guerras del narcotráfico tenían varias aristas, las guerras entre el Cartel de Cali y el Cartel de Medellín y las guerras del Cartel de Medellín, controlizado con un biario. Lo que se había traducido en una gran cantidad de bombas en Bogotá y en Medellín, que llevaron a confinarnos en el terror durante una época en que uno no sabía si iba a volver vivo una vez que salía a la calle. Esto había hecho que los espacios públicos y que los encuentros se volvieran terriblemente miedosos, la rumba era lo que nos acaba adelante, real la unicospacia donde podíamos sobrevivir o resolvernos a la que fuera. Pero de todas maneras, el espacio público estaba condicionado por un terror colectivo de lo que estamos viviendo para nosotros, las guerras del narcotráfico no fueron miliceries, laborosas. Esto fue terror puro y duro y la vivimos muy gravemente en este país. Entonces, en ese escenario, a dos años, de lo que ha caused el Palacio de Justicia, de la toma y la retoma del Palacio de Justicia y de la tragedia de Armero, en unas condiciones históricas particularmente dramáticas y aterradoras que vivimos nosotros en los años 80 y 90. Se le ocurría a Fanny, que hace un festival euro medicano de Teatro en Bogotá, o sea, esto es increíble en semana Santa. Lo que además le generó en un principio en los primeros festivales una confrontación con la Iglesia Católica, porque consideraba que no era idoneo, el carácter del festival con la conmemoración de la semana mayor. Entonces, eso era otro problema que había ahí. Fanny, mi Kirch, y Ramíros, hoy obtuvieron bastantes amenazas. Cuando se hizo el festival, porque estábamos en una época de terror, inclusive una bomba en el Teatro Nacional, en la presentación del jeepeto del Grupo de Argentina, y la bomba no solamente no dejo heridos por la protección poderosa de los dioses del Teatro, sino que incentivó al público a salir masivamente a ocupar las alas para derrotar el terror, para que no nos confinaran al miedo donde tantas veces hemos vivido. Entonces, es muy dramático esto, y resulta que en el primer festival y del americanos de Teatro, va a ocurrir una cosa que es completamente inimaginable, y que solamente puede darse en el decenario y en el espacio del Teatro, los lemas de cada uno de estos festivales, tienen mucho que ver con lo que estábamos viviendo. El primer festival y del americanos de Teatro tenía como lema un acto F. Y en ese festival empiezan a llegar los grandes grupos del Teatro Mundial. Originalmente esto se hacía en Caracas, Caracas era una potencia cultural y era a la época del complejo Teresa Carreño, y había una movida en Caracas muy grande de cultura, y además había mucho billete para eso. Entonces, lo que hacía a Fanimiki era contratar los grupos desde Caracas, y luego traer los a Bogotá, que en ese momento era acsequible, porque era directamente el país vecino, más adelante el festival y el americanos de Teatro va a crecer tanto, que Fanimiki va a poder traer los grupos del mundo entero, y va a tener la figura de Adela Donadillo, el personaje que tenía a mi juicio, uno de los trabajos más maravillosos que puede tener un ser humano, ir por los festivales del mundo y escoger las obras que vendrían al festival y del americanos de Teatro. 
Entonces resulta que en la edición de 1988, en el cierre, los cierre normalmente eran en la plaza de Bolívar, pero eran, primero, la apertura es por la séptima con el Teatro taller de Colombia, que va en los sancos, que siempre ellos entrenaban en Sanco subiendo por la circunvalar, que esto en Bogotá es una cosa bastante alta, en una ciudad que ya es muy alta, 2.600 metros de altura sobre el nivel del mar. Entonces eso suvisió en la montaña en Sancos, aquí es un ejercicio teatral muy fuerte, taller de Colombia siempre presidía las aperturas del Teatro con una gran cantidad de actores de todas partes en galanando la ciudad y llenando la de alegría. El cierre, en la plaza de Bolívar, otras veces en el parque Simón Bolívar, pero ese primer cierre, tuvo una cosa que nosotros todavía recordamos con el asombró mayor, en la plaza de Bolívar, en donde tuvo lugar el holocausto por la toma y la retoma del Palacio de Justicia, donde el Palacio de Justicia todavía estaba quemado, quedó en llamas, y todavía estaba la terradora imagen de la bala de un tanque que abrió hueco en la fachada, y toda la muerte de los magistrados, más eméritos de este país y los profesores de la Universidad externado, y toda la tragedia que eso sigue siendo todavía, en una de las grandes heridas en la historia de esta nación, vino un grupo catalán, llamado El Comedianz, y les dio por hacer un exorcismo a dos años de haber vivido este horror, y llenar la plaza de Bolívar de fuego artificiales y vestirse de diálogos, y de una manera absolutamente silcense y maravillosa, empezar a llenar esto de fuego artificiales, o sea, de luce de colores rojas, que nos recordaban las llamas de un Palacio donde se un día el piso jurídico de nuestra democracia en esa época, y ver eso como un exorcismo, como un acto lúdico, como la magia del teatro exorcizando los dolores del alma, eso fue primero, el estar en una multitud en la plaza de Bolívar, en la época de las bombas, era desafiar un terror profundamente metido entre nosotros, nosotros que amo delicaisimos, eso no puede sonar, cuando sonado una pistola en escena todo el mundo se asustaba, que amo delicaisimos, no nos pueden ni hablar duro, después de todo lo que vivimos, entonces llegar y ver esta gente en Sancos, por toda la plaza de Bolívar y por frente al Palacio llenándolo de luces artificiales, exorcizándolo invitándolo a todos, a exorcizar los demonios del dolor, del terror, del estupor y de todo lo que significó para nosotros he hecho en la historia, es una de las cosas que nos muestra la magia impresionante del teatro, entonces ya con eso que amo en un estado de asombro increíble y se toma el teatro a Bogotá, y nos empieza a hacer sentir la magia, de empezar a conocer grupos de todas partes del mundo, este festival formó un público de teatro, no se educó en las artes escénicas y nos ha educado, somos un público que conoce teatro en esta ciudad, y el festival va creciendo, va creciendo, llegan cosas tan impresionantes, yo veía y veo en la media lo posible promedio de diez obras por festival, porque dedico mi vida al festival y del americano de teatro cada vez que ocurre, y entonces resulta que en los primeros vino una versión polaca de crimenicastigo de tres horas en donde se trataba del momento en que el comisario de la policía de la novela crimenicastigo se enciera con rascón y cofo hasta que lo quebra y lo hace confesar, ese momento cuando logra metersele en el alma, esa era la versión de crimenicastigo que llegó acá, llegó una versión rusa de las criadas de genet representada por 
hombres en un universo trans que en esa época era inimaginable, llegaron la gente más impresionante y empiezan a venir de todos los grupos de todas las partes del mundo, de todos los idiomas, de todas las culturas, el planeta entero a una ciudad encerrada por el terror, porque en esa época no venía nadie, y nosotros estábamos verdaderamente aislados, entonces cuando empiezan a llegar todos estos grupos de atro griego, que empieza a traer un prometebo encadenado en un hombre templado en escena durante hora y media, mientras le deburaba las entrañas, el ave que era el castigo por haberledado a los hombres el fuego, llegaban las historias más impresionantes del grupo Sherezaa de Slovenia, grupos que se fueron haciendo increíblemente famosos, teatro Catona, de un gría, teatro No, del Japón, teatro Kabuki, de Japón, empezamos a conocer un nivel de teatro absolutamente maravilloso, los lemas eran cada vez más bello, los lenguajes del teatro del mundo, encuentro de los mundos cuando se hizo el quinto centenario del encuentro de las dos culturas o de la colonización de nuestro mundo, eso también fue una cosa impresionante, otra era Bogotá un escenario del mundo, 10 años de fe en Colombia, el estreno del siglo, la vuelta al mundo en 80 obras, un mundo para ver, un mundo en escena, Bogotá ciudad de teatro del mundo, en Monaje, bueno ya después cuando muere fan y Mickey pero todavía no hemos llegado allá porque todavía no le hemos introducido en todo su valor y su dignidad, la fiesta de las milcaras, todos tenemos que ver, el teatro está de fiesta, comienza el teatro, era cada uno de estos escenarios llegó a tener un nivel más alto, llegamos a tener 800 funciones de 100 compañías internacionales, 170 compañías colombianas la muestra de teatro más importante del mundo, entonces esto surgió por los 450 años de la Fundación de Bogotá, ahí es cuando surge el Euroamericano de teatro, y esto empieza a traer compañías que van a traer una cantidad de nuevas miradas, llegaron de Sur África, cuando Mandela había salido de la cárcel y habían derrotado en la parte, tiene una compañía donde hacían una mezcla de marionetas, con imágenes de video, en escena, empezamos a conocer una gran cantidad de lenguajes teatrales, empezamos a conocer las multimedia, empezamos a conocer todo lo que era la combinación de todas las artes escénicas, en un momento dado en un escenario que era íbano desde el teatro negro, pasando por la marioneta, pasando por el video, en una combinación de lenguajes, para contar lo que significaba la caída de la parteite, que fue uno de los momentos más impresionantes, o siempre estaban ligadas a la historia después de que Caél Murdo de Berlín viene una obra romana con un montaje de Julio César, en donde se mostraba que después del magnicidio no tenían en realidad un proyecto que pudiera reemplazar lo que les había pasado durante ese tiempo, era una extraña sombra la quedaban los romanos, después de la muerte de Chauchesco, que lo dejaba uno muy pensativo sobre cómo percibían ellos su propia historia, llegaban de todas partes contándonos, el mundo entero era como, uno mira un festival de teatro y está viendo el mundo, porque llega teatro africano, teatro de furquina, fazo, teatro del esperpento, de ramón del valle inclan, la Saranda llegamos a verte atro del esperpento, todas las formas teatrales había un bar de marionetas en una obra inglesa, un bar de marionetas fracasadas, cuyas vidas ya habían pasado y que estaban en el olvido y el alcoholismo, eran marionetas existencialmente vencidas por los 
golpes de la vida en un bar sordido unas marionetas, aquello era una cosa absolutamente impresionante, en los villanos de Shakespeare una obra inglesa, en donde un hombre empieza a codificar cómo son los villanos de la obra de Shakespeare, y uno de ellos decía, por ejemplo, llago es un villano mediocre, ese medio crecimiento es un crimen y miraba el escenario hacia la mayoría de ustedes lo son, el problema es ser villano, y después habla de regalo tercero como un brillano brillante, brillante, porque manipulaba psicologicamente a los personajes para hacerlos sentir que ellos eran culpables de su propia victimización, y también hablaba de Hamlet como uno de los villanos, porque al no tomar ninguna decisión era responsable de la muerte en una persona de nesena, de Olivia que se suicidó de sus dos amigos Guiltenstein y Rosent Crank, porque no hacían nada, sino deliberar y pensar mientras algo estaba realmente podido en Dinamarca. O va, discibil, a tristese, y pise sirmoquero, a bua, le repete y presse, percibiremos que, o de ferro, satereche, percibiremos que, yo tu serí que, al ríter Rosent Crank, percibiremos que, al ríter Rosent Crank, percibiremos que, yo tu serí que, yo tu serí que, yo tu serí que, llegan las historias más impresionantes, llegan a una de las obras más grandes, porque son todas las gente que es realmente grande en el teatro, Nuria Expert, la grandama del teatro catalán, va a venir con una obra que se llama la violación de Lucrecia, y es una obra donde una suena mujer interpreta un trama de Shakespeare sobre un hombre que hace gala de la alegría tan grande que tenía de cosar de una maravillosa relación con su mujer, y de una sexualidad clena, y el rey siente en vida de la alegría de este hombre y decidir violar a Lucrecia, Lucrecia le advierte que eso va a ser la desgracia de todo, ella hace el papel de Lucrecia, hace el papel del rey, hace el papel del marido, hace el papel del pueblo, que juzga al rey Tarquinu, hace el papel del actriz que está preparando la obra, y hace el papel de las mujeres que están en el cuadro, en donde se emula la violación de las mujeres en el templo, en el drama de las Trojanas, una sola mujer en escena. Vimos las cosas más impresionantes, hemos visto el teatro más grandioso, y hemos aprendido de teatro a lo largo de eso, entonces este personaje de Fanimiki lo vamos a mirar muy detenidamente, Elisa Fanimiki Olanski, nacen buenos aires en 1930, era hija de Camilo Miki y de Monika Olanski, ella era de origen lituanos, la mayoría de los argentinos vienen de los barcos, entonces tienen origenes europeos como lo hemos contado muchas veces, y quería que ella fuera abogada o contadora pública, ella huye de la casa por maltrato, por violencia intrafamiliar por maltrato, huye, y empieza a trabajar por ahí tratando de sobrevivir, va a ser contadora de empresa de juguetes, va a participar en programas de televisión, como un ejercicio muy pequeños, luego se empieza a formar como actrice en la sociedad hebrae y cargentina, y desde ahí conoce a Pedro Martínez, y viene a Colombia por la cual mucha gente ha venido a Colombia, por amor. 
Es una de las razones por las cuales se han quedado las personas acá, Pedro Martínez y el dramaturgo, enrique buena aventura que va a ser muy importante porque con él va a entrar en el teatro experimental de Cali, y en el teatro experimental de Cali ella va a aprender, no solamente las artes escénicas, sino la producción de las obras, va a aprender todo lo que va a ser para ella, un recorrido por el teatro, en todas sus formas hasta llegar a declararse ella un animal del teatro, ya decía que era una bestia del teatro, estaba dispuesta de devarrer el piso hasta vender boleta por boleta, ella tiene un sueño por el teatro, una alucinación por el teatro, una magia por el teatro, y ese sueño y esa alucinación lo va a convertir en un milagro, en una ciudad que vive de su genialidad con una sede infinita. El festivo haceó un exámo muy grande, el teatro callejero, muy importante, la muestra de teatro callejero en todas las plazas públicas, donde llegaba gente como Donner's Theatre, un grupo inglés que retrataba los terrícolas como los seres más extraños a más vistos, ellos vestidos distractorrestes nos hacían sentir tan extraños a sus ojos, que nos hacían sentirnos extraños ante nosotros mismos, el teatro empieza a traer toda clase de emociones porque finalmente el teatro es eso, son emociones vivas, son personas vivrando para hacernos entender la profundidad de la alma humana. Esto es a volviendo una cosa más grande, ella como tal va a ser pionera en todo, primero el tec fue de los primeros teatros que empezaron a tener realmente apoyo gubernamental porque antes el teatro era muy precario, no tenía apoyo del gobierno, la gente tenía que trabajar en muchas cosas, después se toma la decisión de profesionalizar el teatro en Colombia y se sube en las taquillas para que la gente pueda vivir del teatro y no tenga que estar haciendo cualquier otra cosa por el privilegio de poder ser actor en Colombia. Esta mujer va a crear el primer café con Certe Bogotá que se llama la gata caliente, ella siempre fue una mujer desafiante y fue desafiante de todas las formas de doble moral, y aquí pues han sido bastantes, entonces eso que se llamara la gata caliente que ella hiciera un bodewil era una cosa muy loca, ella decía en la época en que nadie decía eso, que ella creía en la pareja, le preguntaba, ¿usted creen el amor si yo creo en la pareja? 
Hombre mujer, mujer, hombre hombre, bueno pues esto que te digo en los ochentas, era de Aralario nadie hablaba así, entonces ella siempre abrió los horizontes mentales de nuestro país en su amor por el teatro, entonces primero creó el teatro nacional y tenía unos modelos de negocios increíbles porque ella vendía una silla en el teatro nacional, usted quiere comprar una silla, usted compró una silla y le ponemos su nombre y con el nombre de su silla montó el teatro nacional, pero el teatro nacional se le quedó pequeño, entonces tuvo que pasar al teatro a la castellana, porque el teatro nacional ya no cabía en todo lo que ya estaba haciendo, entonces en la casa al teatro, teatro nacional, teatro a la castellana, el café concierto, y luego el festival y el americano de teatro es que uno no termina con esta mujer tan impresionante esa energía, luego fue convenciendo a toda la empresa privada y a las empresas estatales y creó un modelo mixto para poder patrocinar el teatro y eso fue creciendo, creciendo, creciendo, porque luego llegaban las obras de los africanos y llegaban las obras de los japoneses, había cosas tan increíbles como un montaje de la divina comedia de un grupo alemán en el Jorge Lieser-Gaitan, donde tuvieron que poner un enorme piscina, un gran tanque de agua, porque es que es la otra, traiga a ser de todos los países del mundo, los andamíajes para montar estas obras y crear un enorme escenario que era un tanque de agua que era un címil de los círculos del infierno, contados por los alemánes en una puesta en escena de la divina comedia de Dante que lo dejaba unos inaliento. Aquí empieza el espacio comercial. El teatro es una enorme fiesta y es una fiesta multicolor maravillosa que en el caso de Bogotá nos ha sanado muchísimo, por eso el festival Iberoamericano de teatro de Bogotá tiene un capítulo en esta temporada, de ferias y fiestas de Colombia, nosotros no tenemos un carnaval, propiamente, pero tenemos el Iberoamericano. Y esto ha sido un evento de tanta alegría, de tanta esperanza, vamos a contar cómo llegó a la ciudad en los tiempos tan duros en que llegó, vamos a contar de toda la maravilla de Fanimiki, vamos a ver cómo el teatro nos iluminó las calles, las ciudades, las salas, la cabeza, y nos trajo y nos ha traído ventanas de todas partes del mundo, vamos a contar una historia increíble. Por eso lo simpito a que escuchen esta historia de este maravillosa y esplendorosa fiesta Bogotána a través de Radio Nacional de Colombia, y luego a disfrutarlo todas las veces que quieran en rtbcplay.co. Y siempre decíamos, Con Richie me he esposo que en algún momento, en cada festival de teatro iba a haber una obra que tiba cambiar la vida, y nunca sabíamos cual era. Por eso había que ir a todas las obras a la mayor cantidad de horas posibles, porque cualquiera de las obras que tuviera en un festival de teatro podía cambiarte la vida. Y muchas veces vimos muchas obras que nos cambiaron la vida, la obra del festival, siempre había una obra que era tan extraordinaria que no sería posible nunca olvidar a haber la visto. 
El teatro es inolvidable, y todo lo que es imaginable ha llegado al festival libre americano convirtiendo la ciudad en un espacio público, nosotros que como lo digo, en Bogotá no hemos ido de Ciudad de Carnavales, que a diferencia de todas las fiestas magníficas y maravillosas que hemos narrado, nosotros no teníamos esa cultura de fiestas ni ese compromiso con un carnaval que hace tan importante la vida con el de los de requilleros o como el de riosucio, o como tantos otros fiestas que hemos recorrido donde admiramos con cierta envidia, el compromiso incondicional de la gente con sus carnavales, pues nosotros tenemos el festival libre americano de teatro, y eso nos abrió la mente, venció el terror, nos llevó a las plazas, exorcizó nuestros demonios, nos hizo valientes para que el arte derrotar el terror, y nos ha traído toda clase de cosas, entonces venía, ha venido mucho teatro de América Latina, teatro Argentino, teatro mexicano, teatro del Uruguay, pues todas las muestras del teatro que aquí tenemos nosotros, entonces esto se fue volviendo cada vez más y más grande, y cada vez subo más festivales, y cada vez fue más importante, entonces eso incentivo el teatro universitario, todo digamos toda nuestra historia de teatro desde el budo del teatro universitario, desde la época de Stanislavski, que fue tan indecisivo en la formación de teatral de los actores en Colombia, todo eso va a servir de cargo de cultivo y de potencial para darle al festival libre americano un piso de valor histórico del teatro, y el festival libre americano se va tomando toda la ciudad, y después ya se empieza a poder exportar obras iranas o otras partes, y algunas veces después de las semanas antas se han quedado las obras que más grande han hecho el favoritismo del público, lo que digo la creación de un público culto educado capaz de entender teatro de sala, de una calidad impresionante pero teatro de calle, teatro de calle de la mejor calidad, lo que hacía que toda la ciudad en todas las diferentes plazas se viera y se haya visto involucada, en el teatro o sea la ciudad, Bogotá se vuelve a ser teatro, Corferias se vuelve a ser teatro, y en Corferias todo, todo era teatro, la sobre todo ha sido teatro desde la entrada a las obras, otra cosa que se va a crear es la Carpa Cabaret, para que después de las obras la ente fuera a tomarse un trago, y a encontrarse muchas veces con los actores que acababan de hacer presentaciones absolutamente inimaginables, y allá también había espectáculos musicales, entonces esto es un compendio de teatro callejero, teatro de sala, Carpa Cabaret, ciudad teatro en Corferias, digamos una propuesta de ciudad verdaderamente enorme, trayendo el mundo, a un país que durante mucho tiempo ha estado encerrado y aislado por la adureza de su historia, era como si los ojos del planeta vinieran a mirarnos y a través de sus ojos pudiéramos ver el planeta, eso es un híbero americano de teatro. 
En festival y euroamericano de teatro, tiene el problema que después de la muerte de Fanimique, una mujer tan absolutamente multifacética, incansable, talentosa, como actriz, como productora, como gestora cultural, como una mujer capaz de movilizar una ciudad entorno a la figura del teatro, pues es muy difícil reemplazar ese espíritu tan impatible, sin embargo el festival y euroamericano de teatro continuará con una gran calidad, con el homenaje a ella, y años después de la muerte de Fanimique, el festival ha continuado trayendo grandísimos huestras del teatro del mundo, que les decía el teatro de Valleinclan del grupo La Saranda, o sea, todo el mundo que ha tenido que ver con la excelencia del teatro ha venido a un festival y euroamericano de teatro de todas partes, teatro de Croacia, teatro de un gría, teatro de todas las compañías más impresionantes, una vez un montaje de los japoneses que hacían una interpretación de unas gaysas, ante un grupo de hombres representando a medea, hemos visto unas medias, la cosa más impresionante desde mujeres de Burkina Faso, hasta espectáculos multimedia, que muestran todas las diferentes formas de medea teatro clásico, Shakespeare, teatro, griego, hemos visto realmente los grupos más impresionantes, el festival de teatro, pues como todas las ferias y las fiestas, va a través de la pandemia, y con la pandemia, pues se vio tremendamente debilitado, entonces ahorita el festival euroamericano de teatro está renaciendo, poco a poco, con gran dificultad, porque la pandemia supuso para todos los festivales, un obstáculo terrible, porque no es que no me puede ver la gente, entonces además que en la época de la virtualidad, usted se imagina lo valioso que es el teatro, verséres humanos de carne hueso hablando, le haceres humanos de carne hueso sentados en butacas en frente a una distancia donde se les pueden ver los rostros y los gestos a los tractores, la magia del teatro, uno de los compañías del teatro dinden a marca, una vez vino con una propuesta de rotarla muerte, un hombre que quería de rotarla muerte y descubre que el único lugar donde se puede rotarla muerte es en el teatro, porque en el teatro no existe la muerte, entonces en el teatro se resuelven los dilemas existenciales de la alma humana, que fuera de las artes escénicas y de los tablados, no tienen como contarse, entonces resucitar, digamos, porque después de la pandemia, todo es una resurrección, después de la pandemia, todos volverán a ser, porque la pandemia nos quitó la presencialidad, nos quitó la mirada y nos convirtió en pantallas, en pantallas abstractas de personas puestas en cuadritos, frente a un computador o a un celular, que no se parece nada a lo que huele en sienten, transpieran, aula, ni miran los seres humanos cuando se encuentran los unos a los otros, eso es lo que es el teatro, entonces claro, estamos intentando volver a ser teatro sin embargo llegar a un cosas tan hermosas como un pin ocho, de Aralarios, compañía inglesa que trajo un pin ocho, completamente divino, divino al Jorge Luis Sergaytan en una maestría del teatro del cuerpo, verdaderamente poética con una música que le daba una proyección onírica y casi alucinante a este teatro del cuerpo maravilloso, que ha sido pin ocho, entonces siguen llegando las grandes obras, es un tiempo de reconstrucción, la pandemia lo implican todo sentido, para el festival y el americano de teatro también es un tiempo de reconstrucción, además porque como es una propuesta es en yca tan grande y requiere ese nivel de apoyo y de capital 
mixto para mantenerse en el nivel de grandeza que nos ha acostumbrado a tener y en la calidad de obras que hemos visto del mundo intero y de la historia del teatro, pues hay que volver a tener el acto de fe, que tuvimos para poder creer en el teatro en la época de las bombas y atrevernos a ir a la plaza de Bolívar, a riesgo de lo que fuera para aprender a confiar en el derecho a los espacios públicos y en el derecho a soñar y a derrotar el miedo, hay que volver a tener ese acto de fe para crear de nuevo el teatro que nos ha hecho una ciudad onírica en un festival y un americano de teatro la gente no está hablando de nada más y no de obras, entonces todas las intrigas cotidianas, las rencillas políticas, todo aquello que nos aqueja incluso el tráfico pasa a segundo plano la gente solamente está hablando de las obras y se que vio una obra de Belhika absolutamente maravillosa que se vio te austra australiano feminista donde una mujer hablaba en una casa donde estaba un hombre que era una figura de papel machelle y en del periódico y ella le hablaba y le hablaba y le hablaba y en una cosa estaba leyendo el periódico y ella iba a salir de la casa y la puerta estaba tapiada, las ventanas estaban tapiadas y ella hablaba con un espectro que leía el periódico historias absolutamente increíbles hemos visto en el festival de teatro el festival y un americano de teatro aparte de una gran producción artística cultural gigantesca es ante todo y fundamentalmente un espíritu es el espíritu de fanimiki es el espíritu invencible de una sociedad que vence el miedo a través del arte y saca de nosotros uno de los elementos más impresionantes que tenemos en Colombia la resiliencia el ser capaz de vencer todos todos los obstáculos de la historia que se nos han puesto a nosotros particularmente duros entonces el festival y un americano de teatro nos pone en la sintonía de la maravilla del arte humano había una obra de un grupo croata que registraba la decadencia de un grupo que había pasado ya por los mejores tiempos y se hizo en un teatro que en ese momento estaba para ser remodelado el faig de la insa pero en el momento que se hizo la obra el teatro estaba casi en ruenas y como el grupo era un grupo de cadencia la sintonía de la historia del grupo que ya ha perdido sus mejores tiempos con un teatro que otrora fue grandioso y que en ese momento estaba es muy muy a las condiciones y después sería remodelado y una manera maravillosa generaba una reverberación de la decadencia y el fracaso y en los tiempos y dos que en mi representaban la compañía de teatro hemos visto teatro del cuerpo hemos visto teatro de la más pura y clásica expresión hemos visto teatro de la mayor cantidad de innovación obras de irra y logras de australia que nos han traído una nueva mirada de todas las posibilidades de combinación de artes escénicas lo vimos evolucionar a lo largo de los festivales vimos como cada una de las posibilidades del teatro se fueron haciendo cada vez más diversas más complejas más poderosas más profusas toda la memorabilia del teatro o sea esto es una epopeia de ciudad y de vida y por eso nosotros ponemos un acto de fe en la época del teatro después de la pandemia para volver a sacar adelante algo que ha sido el canto más grande de resistencia de arte de cosmopolitanismo porque en ese momento nos volvemos una ciudad cosmopolita se hablan miles de idiomas se requieren cualquier cantidad de interpretes la infraestructura de estos teatros se requiere cualquier cantidad de gente ayudando en los escenarios porque hay 
que adaptar escenarios enormes para el quinto centenario se hicieron con gruas una gran gran gran celebración del grupo de juzglar de Catalunya en la plaza de boliva los cierres más impresionantes con los juegos de luces de sancos de teatro de todas las maravillas hemos visto el asombro de lo que el arte podía hacer en el espíritu humano a través de uno de los milagros más grandes que ha sido el festival y vero americano de teatro de Bogotá por eso con todo el cariño a fan y mi quiarra mi nosorio a todos los teatreros que han hecho de nosotros una posibilidad vital a toda la gente que ha recorrido las calles en busca del teatro callejero a la gente que sigue con el espíritu de mantener viva la fe en el teatro y que hace su mejor esfuerzo para resucitarlo de la pandemia y volver a evocar los espíritus más poderosos que se encuentran en el mundo de teatro a todos los actores que se quedaron en Colombia vinieron por las obras y se quedaron en Colombia porque pues el que se enamora de nosotros no lo puede remediar o sea eso es una cosa que no después ya nada les sabe cuando nos aprenden a querer muchos actores se quedaron acá llegaron en un híbero americano y quedaron absolutamente inlutizados vimos el berlínere en sable en las primeras ediciones traer la ópera de tres centavos que eso es como ver el original de un cuadro de diga o demoné la primera vez que el arte se puede ver de primera mano y no en dibujitos o en la minitas o en libritos sino de verdad las texturas del arte eso era ver el el berlínere en sable con la ópera de los tres centavos cuando yo llegaran todavía no hay acá y el muro de orlín ese tipo de miserias no la tenían en la alemanía oriental tenían otras pero no esas así que para ellos ver la cantidad y habitantes de la calle que se parecían tanto a la puesta en escena que se volvallaba una sombró inesperado la ópera de tres centavos de ver tolprec con la música de curvele entonces todo lo que les puede contar de lo que hemos visto sigue quedándose corto para el espectro de arte pluralidad diversidad multiculturalidad y cosmopolitanismo que este festival ha traído a nuestro alma y con la fe de recuperarlo de los tiempos de la pandemia para volver a sentir la poesía y el teatro en escena y volver a llenarnos de toda la fuerza que sólo el teatro nos puede traer en el alma y más en estos tiempos que se hacen tan sombríos los ojos y las miradas del acto más existencial, poderoso, fuerte y mágico que es el teatro desde la fantástica magia inagotable de Fanemiki desde el milagro que el teatro trajo a una ciudad intimidad por el terror que a partir del arte aprendió a soñar y a ser capas de proyectarse con la valentía, quedan los tablados en los lugares donde la muerte no existe porque todo se representa en el teatro en la narración de Anaurib, y para ustedes feliz día cualquiera que se deshacera. Este podcast fue posible gracias al equipo de la Casa de Historia, y a las Suárez, Milenabel Trán, Arturo Jiménez Vigna, Daniel Moreno Franco, grabado en los gatos estudio la adición y la musicalización de Eduardo Corredor Fonseca, de rueda sonido y siempre con la ayuda fuerte y poderosa de Santiago Espinoza Uribe y Laura Rojas
[{"start": 0.0, "end": 9.72, "text": " Hoy vamos a ver uno de los eventos con los que, digamos, m\u00e1s personalmente estoy ligada"}, {"start": 9.72, "end": 16.2, "text": " de todas las ferias y fiestas porque lo he vivido desde el principio ha sido una parte"}, {"start": 16.2, "end": 22.0, "text": " fundamental de la vida m\u00eda y de la vida de la ciudad, el Festival Europea de Teatro,"}, {"start": 22.0, "end": 23.0, "text": " de Bogot\u00e1."}, {"start": 23.0, "end": 31.0, "text": " Primero llamado, ya pueden empezar a la sala."}, {"start": 35.0, "end": 43.0, "text": " Esto tiene una serie de antecedentes. Por un lado, el hecho de que nosotros fu\u00e9ramos entre comillas"}, {"start": 43.0, "end": 50.0, "text": " unas ciudades fiestas porque tuvimos un Festival de Corpus Christi, una fiesta de San Pedro,"}, {"start": 50.0, "end": 56.0, "text": " un fiesta de la Vigia de la Candelaria, pero todas esas fiestas poco a poco fueron desapareciendo"}, {"start": 56.0, "end": 62.0, "text": " con la llegada del siglo XX, nos quedamos infiestas y eso es una p\u00e9rdida lamentable para"}, {"start": 62.0, "end": 82.0, "text": " cualquier ciudad despu\u00e9s de todas las cosas maravillosas que hemos visto."}, {"start": 82.0, "end": 106.0, "text": " \u00daltimo llamado, la obra est\u00e1 pronto de comenzar."}, {"start": 106.0, "end": 113.0, "text": " F\u00e1lic Miqui y la Viroz Orio empiezan a crear el Festival Europea de Teatro y nosotros vamos"}, {"start": 113.0, "end": 118.0, "text": " a tener dos grandes fiestas, el Festival Europea de Teatro y Roca al Parque."}, {"start": 118.0, "end": 122.0, "text": " Y eso irrumpe en la ciudad de una manera incre\u00edble."}, {"start": 122.0, "end": 132.0, "text": " Por un lado, est\u00e1 el hecho de que Colombia como pa\u00eds y Bogot\u00e1 como ciudad tienen una profunda"}, {"start": 132.0, "end": 139.0, "text": " y gran de tradici\u00f3n de Teatro. Nosotros hemos tenido una tradici\u00f3n de Teatro muy grande con Teatros"}, {"start": 139.0, "end": 146.0, "text": " como la Candelaria Teatros como el Teatro libre Bogot\u00e1, que era la facultad de los estudiantes"}, {"start": 146.0, "end": 151.0, "text": " de la Facultad de Filosof\u00eda y letras de la Universidad de los Antes que fundaron este Teatro"}, {"start": 151.0, "end": 159.0, "text": " despu\u00e9s de una vuelta del Gren\u00e9s 73, como el Teatro TPP que fue tan importante, digamos como"}, {"start": 159.0, "end": 165.0, "text": " los Teatros originales en Bogot\u00e1, un tambi\u00e9n el Tec que va a estar muy ligado tambi\u00e9n"}, {"start": 165.0, "end": 170.0, "text": " a esta historia por la participaci\u00f3n de Fan y Miqui en el Tec, Teatro Experimental de Cali"}, {"start": 170.0, "end": 176.0, "text": " enrique buena aventura. 
Tambi\u00e9n vamos a tener el Teatro Matac\u00e1n de las de Medell\u00edn"}, {"start": 176.0, "end": 182.0, "text": " important\u00edsimo, el Festival de Teatro de Manizales, fundado en 1968."}, {"start": 182.0, "end": 189.0, "text": " Entonces, el Teatro aqu\u00ed es una tradici\u00f3n profunda y es una tradici\u00f3n que perme\u00f3 mucho tiempo"}, {"start": 189.0, "end": 193.0, "text": " la televisi\u00f3n, el nacimiento de la televisi\u00f3n estuvo muy ligado al Teatro."}, {"start": 193.0, "end": 201.0, "text": " Durante mucho tiempo, un programa que se llama Teatro Popular Caracol que fue donde Jaime Botero Gomes,"}, {"start": 201.0, "end": 207.0, "text": " el pap\u00e1 de Mar\u00eda Cecilia Botero, fue el que present\u00f3 las obras de Tennessee Williams,"}, {"start": 207.0, "end": 214.0, "text": " el Sol\u00f3gico de Cristal y muchos de los grandes cl\u00e1sicos del Teatro, fueron importantes"}, {"start": 214.0, "end": 221.0, "text": " en ese momento, Fausto Cabrera, recitaba la poes\u00eda de la Guerra Civil Espa\u00f1ola y los Podemos"}, {"start": 221.0, "end": 227.0, "text": " de Le\u00f3n Felipe, digamos, y el radio Teatro, el Teatro siempre ha formado parte de nuestra"}, {"start": 227.0, "end": 233.0, "text": " escena, de nuestra cultura, de nuestra percepci\u00f3n del arte, siempre ha estado presente."}, {"start": 233.0, "end": 238.0, "text": " En un Festival de Libro Americano de Teatro, nuestra era algo completamente ajeno,"}, {"start": 238.0, "end": 244.0, "text": " en pa\u00eds que tiene y ha tenido una importante tradici\u00f3n teatral, el acto latino,"}, {"start": 244.0, "end": 249.0, "text": " o sea, bueno, much\u00edsimos grupos de esa \u00e9poca, despu\u00e9s surgir\u00edan otros como Mapa Teatro,"}, {"start": 249.0, "end": 255.0, "text": " y hay unos que son incunables, maravillosos y fant\u00e1sticos, la nivel uladora."}, {"start": 255.0, "end": 263.0, "text": " Los nombres, porque hay que rendir una menaje muy importante con este cap\u00edtulo al Teatro en Colombia,"}, {"start": 263.0, "end": 268.0, "text": " porque ellos nos han inspirado desde siempre. Entonces llega un Festival de Teatro,"}, {"start": 268.0, "end": 272.0, "text": " tampoco es que uno diga que fue la primera, es que nos metimos en una sala, no?"}, {"start": 272.0, "end": 279.0, "text": " Aqu\u00ed hab\u00eda un Teatro importante, pero hay una conflucci\u00f3n de factores que hace el que el Festival"}, {"start": 279.0, "end": 287.0, "text": " de Americano del Teatro se convierte en un milagro, y es que cuando llega el Festival de Teatro,"}, {"start": 287.0, "end": 293.0, "text": " cuando Fanny McKee de quien hablaremos profusamente ya quien est\u00e1 dedicado este cap\u00edtulo con todo el amor,"}, {"start": 293.0, "end": 297.0, "text": " respeto y memoria, porque esa mujer nos cambi\u00f3 la vida."}, {"start": 297.0, "end": 303.0, "text": " Esa mujer, Mauricio, tiene una huella absolutamente indeleble entre nosotros."}, {"start": 303.0, "end": 311.0, "text": " Llega y organiza el primer Festival de Teatro, en ese momento nos nos est\u00e1bamos en una coyuntura,"}, {"start": 311.0, "end": 319.0, "text": " que eran las guerras del narcotr\u00e1fico. 
Las guerras del narcotr\u00e1fico ten\u00edan varias aristas,"}, {"start": 319.0, "end": 324.0, "text": " las guerras entre el Cartel de Cali y el Cartel de Medell\u00edn y las guerras del Cartel de Medell\u00edn,"}, {"start": 324.0, "end": 326.0, "text": " controlizado con un biario."}, {"start": 326.0, "end": 334.0, "text": " Lo que se hab\u00eda traducido en una gran cantidad de bombas en Bogot\u00e1 y en Medell\u00edn,"}, {"start": 334.0, "end": 344.0, "text": " que llevaron a confinarnos en el terror durante una \u00e9poca en que uno no sab\u00eda si iba a volver vivo una vez que sal\u00eda a la calle."}, {"start": 344.0, "end": 353.0, "text": " Esto hab\u00eda hecho que los espacios p\u00fablicos y que los encuentros se volvieran terriblemente miedosos,"}, {"start": 353.0, "end": 361.0, "text": " la rumba era lo que nos acaba adelante, real la unicospacia donde pod\u00edamos sobrevivir o resolvernos a la que fuera."}, {"start": 361.0, "end": 370.0, "text": " Pero de todas maneras, el espacio p\u00fablico estaba condicionado por un terror colectivo de lo que estamos viviendo para nosotros,"}, {"start": 370.0, "end": 381.0, "text": " las guerras del narcotr\u00e1fico no fueron miliceries, laborosas. Esto fue terror puro y duro y la vivimos muy gravemente en este pa\u00eds."}, {"start": 381.0, "end": 394.0, "text": " Entonces, en ese escenario, a dos a\u00f1os, de lo que ha caused el Palacio de Justicia, de la toma y la retoma del Palacio de Justicia y de la tragedia de Armero,"}, {"start": 394.0, "end": 404.0, "text": " en unas condiciones hist\u00f3ricas particularmente dram\u00e1ticas y aterradoras que vivimos nosotros en los a\u00f1os 80 y 90."}, {"start": 404.0, "end": 411.0, "text": " Se le ocurr\u00eda a Fanny, que hace un festival euro medicano de Teatro en Bogot\u00e1, o sea, esto es incre\u00edble en semana Santa."}, {"start": 411.0, "end": 420.0, "text": " Lo que adem\u00e1s le gener\u00f3 en un principio en los primeros festivales una confrontaci\u00f3n con la Iglesia Cat\u00f3lica,"}, {"start": 420.0, "end": 429.0, "text": " porque consideraba que no era idoneo, el car\u00e1cter del festival con la conmemoraci\u00f3n de la semana mayor."}, {"start": 429.0, "end": 436.0, "text": " Entonces, eso era otro problema que hab\u00eda ah\u00ed. Fanny, mi Kirch, y Ram\u00edros, hoy obtuvieron bastantes amenazas."}, {"start": 436.0, "end": 443.0, "text": " Cuando se hizo el festival, porque est\u00e1bamos en una \u00e9poca de terror, inclusive una bomba en el Teatro Nacional,"}, {"start": 443.0, "end": 452.0, "text": " en la presentaci\u00f3n del jeepeto del Grupo de Argentina, y la bomba no solamente no dejo heridos por la protecci\u00f3n poderosa de los dioses del Teatro,"}, {"start": 452.0, "end": 460.0, "text": " sino que incentiv\u00f3 al p\u00fablico a salir masivamente a ocupar las alas para derrotar el terror,"}, {"start": 460.0, "end": 489.0, "text": " para que no nos confinaran al miedo donde tantas veces hemos vivido."}, {"start": 550.0, "end": 569.0, "text": " Entonces, es muy dram\u00e1tico esto, y resulta que en el primer festival y del americanos de Teatro,"}, {"start": 569.0, "end": 574.0, "text": " va a ocurrir una cosa que es completamente inimaginable,"}, {"start": 574.0, "end": 582.0, "text": " y que solamente puede darse en el decenario y en el espacio del Teatro, los lemas de cada uno de estos festivales,"}, {"start": 582.0, "end": 589.0, "text": " tienen mucho que ver con lo que est\u00e1bamos viviendo. 
El primer festival y del americanos de Teatro ten\u00eda como lema un acto F."}, {"start": 589.0, "end": 595.0, "text": " Y en ese festival empiezan a llegar los grandes grupos del Teatro Mundial."}, {"start": 595.0, "end": 604.0, "text": " Originalmente esto se hac\u00eda en Caracas, Caracas era una potencia cultural y era a la \u00e9poca del complejo Teresa Carre\u00f1o,"}, {"start": 604.0, "end": 611.0, "text": " y hab\u00eda una movida en Caracas muy grande de cultura, y adem\u00e1s hab\u00eda mucho billete para eso."}, {"start": 611.0, "end": 625.0, "text": " Entonces, lo que hac\u00eda a Fanimiki era contratar los grupos desde Caracas, y luego traer los a Bogot\u00e1, que en ese momento era acsequible,"}, {"start": 625.0, "end": 631.0, "text": " porque era directamente el pa\u00eds vecino, m\u00e1s adelante el festival y el americanos de Teatro va a crecer tanto,"}, {"start": 631.0, "end": 641.0, "text": " que Fanimiki va a poder traer los grupos del mundo entero, y va a tener la figura de Adela Donadillo, el personaje que ten\u00eda a mi juicio,"}, {"start": 641.0, "end": 649.0, "text": " uno de los trabajos m\u00e1s maravillosos que puede tener un ser humano, ir por los festivales del mundo y escoger las obras que vendr\u00edan al festival y del americanos de Teatro."}, {"start": 649.0, "end": 659.0, "text": " Entonces resulta que en la edici\u00f3n de 1988, en el cierre, los cierre normalmente eran en la plaza de Bol\u00edvar,"}, {"start": 659.0, "end": 671.0, "text": " pero eran, primero, la apertura es por la s\u00e9ptima con el Teatro taller de Colombia, que va en los sancos, que siempre ellos entrenaban en Sanco subiendo por la circunvalar,"}, {"start": 671.0, "end": 679.0, "text": " que esto en Bogot\u00e1 es una cosa bastante alta, en una ciudad que ya es muy alta, 2.600 metros de altura sobre el nivel del mar."}, {"start": 679.0, "end": 694.0, "text": " Entonces eso suvisi\u00f3 en la monta\u00f1a en Sancos, aqu\u00ed es un ejercicio teatral muy fuerte, taller de Colombia siempre presid\u00eda las aperturas del Teatro con una gran cantidad de actores de todas partes en galanando la ciudad y llenando la de alegr\u00eda."}, {"start": 694.0, "end": 701.0, "text": " El cierre, en la plaza de Bol\u00edvar, otras veces en el parque Sim\u00f3n Bol\u00edvar, pero ese primer cierre,"}, {"start": 701.0, "end": 715.0, "text": " tuvo una cosa que nosotros todav\u00eda recordamos con el asombr\u00f3 mayor, en la plaza de Bol\u00edvar, en donde tuvo lugar el holocausto por la toma y la retoma del Palacio de Justicia,"}, {"start": 715.0, "end": 735.0, "text": " donde el Palacio de Justicia todav\u00eda estaba quemado, qued\u00f3 en llamas, y todav\u00eda estaba la terradora imagen de la bala de un tanque que abri\u00f3 hueco en la fachada, y toda la muerte de los magistrados, m\u00e1s em\u00e9ritos de este pa\u00eds y los profesores de la Universidad externado,"}, {"start": 735.0, "end": 758.0, "text": " y toda la tragedia que eso sigue siendo todav\u00eda, en una de las grandes heridas en la historia de esta naci\u00f3n, vino un grupo catal\u00e1n, llamado El Comedianz, y les dio por hacer un exorcismo a dos a\u00f1os de haber vivido este horror, y llenar la plaza de Bol\u00edvar de fuego artificiales y vestirse de di\u00e1logos,"}, {"start": 758.0, "end": 777.0, "text": " y de una manera absolutamente silcense y maravillosa, empezar a llenar esto de fuego artificiales, o sea, de luce de colores rojas, que nos recordaban las llamas de un Palacio donde se un d\u00eda el piso jur\u00eddico 
de nuestra democracia en esa \u00e9poca,"}, {"start": 777.0, "end": 797.0, "text": " y ver eso como un exorcismo, como un acto l\u00fadico, como la magia del teatro exorcizando los dolores del alma, eso fue primero, el estar en una multitud en la plaza de Bol\u00edvar, en la \u00e9poca de las bombas, era desafiar un terror profundamente metido entre nosotros,"}, {"start": 797.0, "end": 807.0, "text": " nosotros que amo delicaisimos, eso no puede sonar, cuando sonado una pistola en escena todo el mundo se asustaba, que amo delicaisimos, no nos pueden ni hablar duro, despu\u00e9s de todo lo que vivimos,"}, {"start": 807.0, "end": 819.0, "text": " entonces llegar y ver esta gente en Sancos, por toda la plaza de Bol\u00edvar y por frente al Palacio llen\u00e1ndolo de luces artificiales, exorciz\u00e1ndolo invit\u00e1ndolo a todos,"}, {"start": 819.0, "end": 832.0, "text": " a exorcizar los demonios del dolor, del terror, del estupor y de todo lo que signific\u00f3 para nosotros he hecho en la historia, es una de las cosas que nos muestra la magia impresionante del teatro,"}, {"start": 832.0, "end": 845.0, "text": " entonces ya con eso que amo en un estado de asombro incre\u00edble y se toma el teatro a Bogot\u00e1, y nos empieza a hacer sentir la magia,"}, {"start": 845.0, "end": 857.0, "text": " de empezar a conocer grupos de todas partes del mundo, este festival form\u00f3 un p\u00fablico de teatro, no se educ\u00f3 en las artes esc\u00e9nicas y nos ha educado,"}, {"start": 857.0, "end": 867.0, "text": " somos un p\u00fablico que conoce teatro en esta ciudad, y el festival va creciendo, va creciendo, llegan cosas tan impresionantes,"}, {"start": 867.0, "end": 878.0, "text": " yo ve\u00eda y veo en la media lo posible promedio de diez obras por festival, porque dedico mi vida al festival y del americano de teatro cada vez que ocurre,"}, {"start": 878.0, "end": 894.0, "text": " y entonces resulta que en los primeros vino una versi\u00f3n polaca de crimenicastigo de tres horas en donde se trataba del momento en que el comisario de la polic\u00eda"}, {"start": 894.0, "end": 903.0, "text": " de la novela crimenicastigo se enciera con rasc\u00f3n y cofo hasta que lo quebra y lo hace confesar,"}, {"start": 903.0, "end": 920.0, "text": " ese momento cuando logra metersele en el alma, esa era la versi\u00f3n de crimenicastigo que lleg\u00f3 ac\u00e1, lleg\u00f3 una versi\u00f3n rusa de las criadas de genet representada por hombres en un universo trans que en esa \u00e9poca era inimaginable,"}, {"start": 920.0, "end": 931.0, "text": " llegaron la gente m\u00e1s impresionante y empiezan a venir de todos los grupos de todas las partes del mundo, de todos los idiomas, de todas las culturas,"}, {"start": 931.0, "end": 942.0, "text": " el planeta entero a una ciudad encerrada por el terror, porque en esa \u00e9poca no ven\u00eda nadie, y nosotros est\u00e1bamos verdaderamente aislados,"}, {"start": 942.0, "end": 952.0, "text": " entonces cuando empiezan a llegar todos estos grupos de atro griego, que empieza a traer un prometebo encadenado en un hombre templado en escena durante hora y media,"}, {"start": 952.0, "end": 965.0, "text": " mientras le deburaba las entra\u00f1as, el ave que era el castigo por haberledado a los hombres el fuego, llegaban las historias m\u00e1s impresionantes del grupo Sherezaa de Slovenia,"}, {"start": 965.0, "end": 974.0, "text": " grupos que se fueron haciendo incre\u00edblemente famosos, teatro Catona, de un gr\u00eda, teatro No, del Jap\u00f3n, teatro Kabuki, de 
Jap\u00f3n,"}, {"start": 974.0, "end": 983.0, "text": " empezamos a conocer un nivel de teatro absolutamente maravilloso, los lemas eran cada vez m\u00e1s bello, los lenguajes del teatro del mundo,"}, {"start": 983.0, "end": 995.0, "text": " encuentro de los mundos cuando se hizo el quinto centenario del encuentro de las dos culturas o de la colonizaci\u00f3n de nuestro mundo, eso tambi\u00e9n fue una cosa impresionante,"}, {"start": 995.0, "end": 1005.0, "text": " otra era Bogot\u00e1 un escenario del mundo, 10 a\u00f1os de fe en Colombia, el estreno del siglo, la vuelta al mundo en 80 obras, un mundo para ver, un mundo en escena,"}, {"start": 1005.0, "end": 1015.0, "text": " Bogot\u00e1 ciudad de teatro del mundo, en Monaje, bueno ya despu\u00e9s cuando muere fan y Mickey pero todav\u00eda no hemos llegado all\u00e1 porque todav\u00eda no le hemos introducido en todo su valor y su dignidad,"}, {"start": 1015.0, "end": 1027.0, "text": " la fiesta de las milcaras, todos tenemos que ver, el teatro est\u00e1 de fiesta, comienza el teatro, era cada uno de estos escenarios lleg\u00f3 a tener un nivel m\u00e1s alto,"}, {"start": 1027.0, "end": 1039.0, "text": " llegamos a tener 800 funciones de 100 compa\u00f1\u00edas internacionales, 170 compa\u00f1\u00edas colombianas la muestra de teatro m\u00e1s importante del mundo,"}, {"start": 1039.0, "end": 1047.0, "text": " entonces esto surgi\u00f3 por los 450 a\u00f1os de la Fundaci\u00f3n de Bogot\u00e1, ah\u00ed es cuando surge el Euroamericano de teatro,"}, {"start": 1047.0, "end": 1059.0, "text": " y esto empieza a traer compa\u00f1\u00edas que van a traer una cantidad de nuevas miradas, llegaron de Sur \u00c1frica, cuando Mandela hab\u00eda salido de la c\u00e1rcel y hab\u00edan derrotado en la parte,"}, {"start": 1059.0, "end": 1071.0, "text": " tiene una compa\u00f1\u00eda donde hac\u00edan una mezcla de marionetas, con im\u00e1genes de video, en escena, empezamos a conocer una gran cantidad de lenguajes teatrales,"}, {"start": 1071.0, "end": 1080.0, "text": " empezamos a conocer las multimedia, empezamos a conocer todo lo que era la combinaci\u00f3n de todas las artes esc\u00e9nicas,"}, {"start": 1080.0, "end": 1088.0, "text": " en un momento dado en un escenario que era \u00edbano desde el teatro negro, pasando por la marioneta, pasando por el video,"}, {"start": 1088.0, "end": 1099.0, "text": " en una combinaci\u00f3n de lenguajes, para contar lo que significaba la ca\u00edda de la parteite, que fue uno de los momentos m\u00e1s impresionantes,"}, {"start": 1099.0, "end": 1109.0, "text": " o siempre estaban ligadas a la historia despu\u00e9s de que Ca\u00e9l Murdo de Berl\u00edn viene una obra romana con un montaje de Julio C\u00e9sar,"}, {"start": 1109.0, "end": 1121.0, "text": " en donde se mostraba que despu\u00e9s del magnicidio no ten\u00edan en realidad un proyecto que pudiera reemplazar lo que les hab\u00eda pasado durante ese tiempo,"}, {"start": 1121.0, "end": 1133.0, "text": " era una extra\u00f1a sombra la quedaban los romanos, despu\u00e9s de la muerte de Chauchesco, que lo dejaba uno muy pensativo sobre c\u00f3mo percib\u00edan ellos su propia historia,"}, {"start": 1133.0, "end": 1142.0, "text": " llegaban de todas partes cont\u00e1ndonos, el mundo entero era como, uno mira un festival de teatro y est\u00e1 viendo el mundo,"}, {"start": 1142.0, "end": 1154.0, "text": " porque llega teatro africano, teatro de furquina, fazo, teatro del esperpento, de ram\u00f3n del valle inclan, la Saranda llegamos a verte atro del 
esperpento,"}, {"start": 1154.0, "end": 1167.0, "text": " todas las formas teatrales hab\u00eda un bar de marionetas en una obra inglesa, un bar de marionetas fracasadas, cuyas vidas ya hab\u00edan pasado y que estaban en el olvido y el alcoholismo,"}, {"start": 1167.0, "end": 1178.0, "text": " eran marionetas existencialmente vencidas por los golpes de la vida en un bar sordido unas marionetas, aquello era una cosa absolutamente impresionante,"}, {"start": 1178.0, "end": 1189.0, "text": " en los villanos de Shakespeare una obra inglesa, en donde un hombre empieza a codificar c\u00f3mo son los villanos de la obra de Shakespeare, y uno de ellos dec\u00eda,"}, {"start": 1189.0, "end": 1203.0, "text": " por ejemplo, llago es un villano mediocre, ese medio crecimiento es un crimen y miraba el escenario hacia la mayor\u00eda de ustedes lo son, el problema es ser villano, y despu\u00e9s habla de regalo tercero como un brillano brillante, brillante,"}, {"start": 1203.0, "end": 1212.0, "text": " porque manipulaba psicologicamente a los personajes para hacerlos sentir que ellos eran culpables de su propia victimizaci\u00f3n,"}, {"start": 1212.0, "end": 1225.0, "text": " y tambi\u00e9n hablaba de Hamlet como uno de los villanos, porque al no tomar ninguna decisi\u00f3n era responsable de la muerte en una persona de nesena, de Olivia que se suicid\u00f3 de sus dos amigos Guiltenstein y Rosent Crank,"}, {"start": 1225.0, "end": 1233.0, "text": " porque no hac\u00edan nada, sino deliberar y pensar mientras algo estaba realmente podido en Dinamarca."}, {"start": 1233.0, "end": 1253.0, "text": " O va, discibil, a tristese, y pise sirmoquero, a bua, le repete y presse, percibiremos que,"}, {"start": 1253.0, "end": 1268.0, "text": " o de ferro, satereche, percibiremos que, yo tu ser\u00ed que,"}, {"start": 1268.0, "end": 1289.0, "text": " al r\u00edter Rosent Crank, percibiremos que, al r\u00edter Rosent Crank, percibiremos que, yo tu ser\u00ed que, yo tu ser\u00ed que, yo tu ser\u00ed que,"}, {"start": 1289.0, "end": 1304.0, "text": " llegan las historias m\u00e1s impresionantes, llegan a una de las obras m\u00e1s grandes, porque son todas las gente que es realmente grande en el teatro, Nuria Expert, la grandama del teatro catal\u00e1n,"}, {"start": 1304.0, "end": 1318.0, "text": " va a venir con una obra que se llama la violaci\u00f3n de Lucrecia, y es una obra donde una suena mujer interpreta un trama de Shakespeare sobre un hombre que hace gala de la alegr\u00eda tan grande que ten\u00eda de cosar de una maravillosa"}, {"start": 1318.0, "end": 1330.0, "text": " relaci\u00f3n con su mujer, y de una sexualidad clena, y el rey siente en vida de la alegr\u00eda de este hombre y decidir violar a Lucrecia, Lucrecia le advierte que eso va a ser la desgracia de todo,"}, {"start": 1330.0, "end": 1345.0, "text": " ella hace el papel de Lucrecia, hace el papel del rey, hace el papel del marido, hace el papel del pueblo, que juzga al rey Tarquinu, hace el papel del actriz que est\u00e1 preparando la obra, y hace el papel de las mujeres que est\u00e1n en el cuadro,"}, {"start": 1345.0, "end": 1354.0, "text": " en donde se emula la violaci\u00f3n de las mujeres en el templo, en el drama de las Trojanas, una sola mujer en escena."}, {"start": 1354.0, "end": 1364.0, "text": " Vimos las cosas m\u00e1s impresionantes, hemos visto el teatro m\u00e1s grandioso, y hemos aprendido de teatro a lo largo de eso,"}, {"start": 1364.0, "end": 1376.0, "text": " entonces este personaje de Fanimiki lo vamos a mirar muy 
detenidamente, Elisa Fanimiki Olanski, nacen buenos aires en 1930,"}, {"start": 1376.0, "end": 1389.0, "text": " era hija de Camilo Miki y de Monika Olanski, ella era de origen lituanos, la mayor\u00eda de los argentinos vienen de los barcos, entonces tienen origenes europeos como lo hemos contado muchas veces,"}, {"start": 1389.0, "end": 1401.0, "text": " y quer\u00eda que ella fuera abogada o contadora p\u00fablica, ella huye de la casa por maltrato, por violencia intrafamiliar por maltrato, huye,"}, {"start": 1401.0, "end": 1412.0, "text": " y empieza a trabajar por ah\u00ed tratando de sobrevivir, va a ser contadora de empresa de juguetes, va a participar en programas de televisi\u00f3n, como un ejercicio muy peque\u00f1os,"}, {"start": 1412.0, "end": 1423.0, "text": " luego se empieza a formar como actrice en la sociedad hebrae y cargentina, y desde ah\u00ed conoce a Pedro Mart\u00ednez, y viene a Colombia por la cual mucha gente ha venido a Colombia, por amor."}, {"start": 1423.0, "end": 1435.0, "text": " Es una de las razones por las cuales se han quedado las personas ac\u00e1, Pedro Mart\u00ednez y el dramaturgo, enrique buena aventura que va a ser muy importante porque con \u00e9l va a entrar en el teatro experimental de Cali,"}, {"start": 1435.0, "end": 1448.0, "text": " y en el teatro experimental de Cali ella va a aprender, no solamente las artes esc\u00e9nicas, sino la producci\u00f3n de las obras, va a aprender todo lo que va a ser para ella, un recorrido por el teatro,"}, {"start": 1448.0, "end": 1460.0, "text": " en todas sus formas hasta llegar a declararse ella un animal del teatro, ya dec\u00eda que era una bestia del teatro, estaba dispuesta de devarrer el piso hasta vender boleta por boleta,"}, {"start": 1460.0, "end": 1477.0, "text": " ella tiene un sue\u00f1o por el teatro, una alucinaci\u00f3n por el teatro, una magia por el teatro, y ese sue\u00f1o y esa alucinaci\u00f3n lo va a convertir en un milagro, en una ciudad que vive de su genialidad con una sede infinita."}, {"start": 1477.0, "end": 1496.0, "text": " El festivo hace\u00f3 un ex\u00e1mo muy grande, el teatro callejero, muy importante, la muestra de teatro callejero en todas las plazas p\u00fablicas, donde llegaba gente como Donner's Theatre, un grupo ingl\u00e9s que retrataba los terr\u00edcolas como los seres m\u00e1s extra\u00f1os a m\u00e1s vistos,"}, {"start": 1496.0, "end": 1505.0, "text": " ellos vestidos distractorrestes nos hac\u00edan sentir tan extra\u00f1os a sus ojos, que nos hac\u00edan sentirnos extra\u00f1os ante nosotros mismos,"}, {"start": 1505.0, "end": 1520.0, "text": " el teatro empieza a traer toda clase de emociones porque finalmente el teatro es eso, son emociones vivas, son personas vivrando para hacernos entender la profundidad de la alma humana."}, {"start": 1520.0, "end": 1538.0, "text": " Esto es a volviendo una cosa m\u00e1s grande, ella como tal va a ser pionera en todo, primero el tec fue de los primeros teatros que empezaron a tener realmente apoyo gubernamental porque antes el teatro era muy precario, no ten\u00eda apoyo del gobierno,"}, {"start": 1538.0, "end": 1554.0, "text": " la gente ten\u00eda que trabajar en muchas cosas, despu\u00e9s se toma la decisi\u00f3n de profesionalizar el teatro en Colombia y se sube en las taquillas para que la gente pueda vivir del teatro y no tenga que estar haciendo cualquier otra cosa por el privilegio de poder ser actor en Colombia."}, {"start": 1554.0, "end": 1569.0, "text": " Esta mujer va a crear el primer caf\u00e9 con Certe 
Bogot\u00e1 que se llama la gata caliente, ella siempre fue una mujer desafiante y fue desafiante de todas las formas de doble moral, y aqu\u00ed pues han sido bastantes,"}, {"start": 1569.0, "end": 1580.0, "text": " entonces eso que se llamara la gata caliente que ella hiciera un bodewil era una cosa muy loca, ella dec\u00eda en la \u00e9poca en que nadie dec\u00eda eso, que ella cre\u00eda en la pareja,"}, {"start": 1580.0, "end": 1591.0, "text": " le preguntaba, \u00bfusted creen el amor si yo creo en la pareja? Hombre mujer, mujer, hombre hombre, bueno pues esto que te digo en los ochentas, era de Aralario nadie hablaba as\u00ed,"}, {"start": 1591.0, "end": 1608.0, "text": " entonces ella siempre abri\u00f3 los horizontes mentales de nuestro pa\u00eds en su amor por el teatro, entonces primero cre\u00f3 el teatro nacional y ten\u00eda unos modelos de negocios incre\u00edbles porque ella vend\u00eda una silla en el teatro nacional,"}, {"start": 1608.0, "end": 1622.0, "text": " usted quiere comprar una silla, usted compr\u00f3 una silla y le ponemos su nombre y con el nombre de su silla mont\u00f3 el teatro nacional, pero el teatro nacional se le qued\u00f3 peque\u00f1o, entonces tuvo que pasar al teatro a la castellana,"}, {"start": 1622.0, "end": 1637.0, "text": " porque el teatro nacional ya no cab\u00eda en todo lo que ya estaba haciendo, entonces en la casa al teatro, teatro nacional, teatro a la castellana, el caf\u00e9 concierto, y luego el festival y el americano de teatro es que uno no termina"}, {"start": 1637.0, "end": 1655.0, "text": " con esta mujer tan impresionante esa energ\u00eda, luego fue convenciendo a toda la empresa privada y a las empresas estatales y cre\u00f3 un modelo mixto para poder patrocinar el teatro y eso fue creciendo, creciendo, creciendo,"}, {"start": 1655.0, "end": 1678.0, "text": " porque luego llegaban las obras de los africanos y llegaban las obras de los japoneses, hab\u00eda cosas tan incre\u00edbles como un montaje de la divina comedia de un grupo alem\u00e1n en el Jorge Lieser-Gaitan, donde tuvieron que poner un enorme piscina, un gran tanque de agua, porque es que es la otra, traiga a ser de todos los pa\u00edses del mundo,"}, {"start": 1678.0, "end": 1699.0, "text": " los andam\u00edajes para montar estas obras y crear un enorme escenario que era un tanque de agua que era un c\u00edmil de los c\u00edrculos del infierno, contados por los alem\u00e1nes en una puesta en escena de la divina comedia de Dante que lo dejaba unos inaliento."}, {"start": 1699.0, "end": 1706.0, "text": " Aqu\u00ed empieza el espacio comercial."}, {"start": 1706.0, "end": 1735.0, "text": " El teatro es una enorme fiesta y es una fiesta multicolor maravillosa que en el caso de Bogot\u00e1 nos ha sanado much\u00edsimo, por eso el festival Iberoamericano de teatro de Bogot\u00e1 tiene un cap\u00edtulo en esta temporada,"}, {"start": 1735.0, "end": 1743.0, "text": " de ferias y fiestas de Colombia, nosotros no tenemos un carnaval, propiamente, pero tenemos el Iberoamericano."}, {"start": 1743.0, "end": 1752.0, "text": " Y esto ha sido un evento de tanta alegr\u00eda, de tanta esperanza, vamos a contar c\u00f3mo lleg\u00f3 a la ciudad en los tiempos tan duros en que lleg\u00f3,"}, {"start": 1752.0, "end": 1762.0, "text": " vamos a contar de toda la maravilla de Fanimiki, vamos a ver c\u00f3mo el teatro nos ilumin\u00f3 las calles, las ciudades, las salas, la cabeza,"}, {"start": 1762.0, "end": 1769.0, "text": " y nos trajo y nos ha tra\u00eddo ventanas de todas partes del 
mundo, vamos a contar una historia incre\u00edble."}, {"start": 1769.0, "end": 1779.0, "text": " Por eso lo simpito a que escuchen esta historia de este maravillosa y esplendorosa fiesta Bogot\u00e1na a trav\u00e9s de Radio Nacional de Colombia,"}, {"start": 1779.0, "end": 1796.0, "text": " y luego a disfrutarlo todas las veces que quieran en rtbcplay.co."}, {"start": 1796.0, "end": 1809.0, "text": " Y siempre dec\u00edamos, Con Richie me he esposo que en alg\u00fan momento, en cada festival de teatro iba a haber una obra que tiba cambiar la vida, y nunca sab\u00edamos cual era."}, {"start": 1809.0, "end": 1821.0, "text": " Por eso hab\u00eda que ir a todas las obras a la mayor cantidad de horas posibles, porque cualquiera de las obras que tuviera en un festival de teatro pod\u00eda cambiarte la vida."}, {"start": 1821.0, "end": 1833.0, "text": " Y muchas veces vimos muchas obras que nos cambiaron la vida, la obra del festival, siempre hab\u00eda una obra que era tan extraordinaria que no ser\u00eda posible nunca olvidar a haber la visto."}, {"start": 1833.0, "end": 1845.0, "text": " El teatro es inolvidable, y todo lo que es imaginable ha llegado al festival libre americano convirtiendo la ciudad en un espacio p\u00fablico,"}, {"start": 1845.0, "end": 1854.0, "text": " nosotros que como lo digo, en Bogot\u00e1 no hemos ido de Ciudad de Carnavales, que a diferencia de todas las fiestas magn\u00edficas y maravillosas que hemos narrado,"}, {"start": 1854.0, "end": 1867.0, "text": " nosotros no ten\u00edamos esa cultura de fiestas ni ese compromiso con un carnaval que hace tan importante la vida con el de los de requilleros o como el de riosucio,"}, {"start": 1867.0, "end": 1879.0, "text": " o como tantos otros fiestas que hemos recorrido donde admiramos con cierta envidia, el compromiso incondicional de la gente con sus carnavales,"}, {"start": 1879.0, "end": 1892.0, "text": " pues nosotros tenemos el festival libre americano de teatro, y eso nos abri\u00f3 la mente, venci\u00f3 el terror, nos llev\u00f3 a las plazas, exorciz\u00f3 nuestros demonios,"}, {"start": 1892.0, "end": 1899.0, "text": " nos hizo valientes para que el arte derrotar el terror, y nos ha tra\u00eddo toda clase de cosas,"}, {"start": 1899.0, "end": 1905.0, "text": " entonces ven\u00eda, ha venido mucho teatro de Am\u00e9rica Latina, teatro Argentino, teatro mexicano, teatro del Uruguay,"}, {"start": 1905.0, "end": 1914.0, "text": " pues todas las muestras del teatro que aqu\u00ed tenemos nosotros, entonces esto se fue volviendo cada vez m\u00e1s y m\u00e1s grande,"}, {"start": 1914.0, "end": 1922.0, "text": " y cada vez subo m\u00e1s festivales, y cada vez fue m\u00e1s importante, entonces eso incentivo el teatro universitario,"}, {"start": 1922.0, "end": 1929.0, "text": " todo digamos toda nuestra historia de teatro desde el budo del teatro universitario, desde la \u00e9poca de Stanislavski,"}, {"start": 1929.0, "end": 1934.0, "text": " que fue tan indecisivo en la formaci\u00f3n de teatral de los actores en Colombia,"}, {"start": 1934.0, "end": 1944.0, "text": " todo eso va a servir de cargo de cultivo y de potencial para darle al festival libre americano un piso de valor hist\u00f3rico del teatro,"}, {"start": 1944.0, "end": 1953.0, "text": " y el festival libre americano se va tomando toda la ciudad, y despu\u00e9s ya se empieza a poder exportar obras iranas o otras partes,"}, {"start": 1953.0, "end": 1962.0, "text": " y algunas veces despu\u00e9s de las semanas antas se han quedado las obras que m\u00e1s grande han 
hecho el favoritismo del p\u00fablico,"}, {"start": 1962.0, "end": 1970.0, "text": " lo que digo la creaci\u00f3n de un p\u00fablico culto educado capaz de entender teatro de sala,"}, {"start": 1970.0, "end": 1976.0, "text": " de una calidad impresionante pero teatro de calle, teatro de calle de la mejor calidad,"}, {"start": 1976.0, "end": 1982.0, "text": " lo que hac\u00eda que toda la ciudad en todas las diferentes plazas se viera y se haya visto involucada,"}, {"start": 1982.0, "end": 1992.0, "text": " en el teatro o sea la ciudad, Bogot\u00e1 se vuelve a ser teatro, Corferias se vuelve a ser teatro, y en Corferias todo,"}, {"start": 1992.0, "end": 1996.0, "text": " todo era teatro, la sobre todo ha sido teatro desde la entrada a las obras,"}, {"start": 1996.0, "end": 2004.0, "text": " otra cosa que se va a crear es la Carpa Cabaret, para que despu\u00e9s de las obras la ente fuera a tomarse un trago,"}, {"start": 2004.0, "end": 2012.0, "text": " y a encontrarse muchas veces con los actores que acababan de hacer presentaciones absolutamente inimaginables,"}, {"start": 2012.0, "end": 2019.0, "text": " y all\u00e1 tambi\u00e9n hab\u00eda espect\u00e1culos musicales, entonces esto es un compendio de teatro callejero, teatro de sala,"}, {"start": 2019.0, "end": 2028.0, "text": " Carpa Cabaret, ciudad teatro en Corferias, digamos una propuesta de ciudad verdaderamente enorme,"}, {"start": 2028.0, "end": 2036.0, "text": " trayendo el mundo, a un pa\u00eds que durante mucho tiempo ha estado encerrado y aislado por la adureza de su historia,"}, {"start": 2036.0, "end": 2043.0, "text": " era como si los ojos del planeta vinieran a mirarnos y a trav\u00e9s de sus ojos pudi\u00e9ramos ver el planeta,"}, {"start": 2043.0, "end": 2060.0, "text": " eso es un h\u00edbero americano de teatro."}, {"start": 2103.0, "end": 2128.0, "text": " En festival y euroamericano de teatro,"}, {"start": 2128.0, "end": 2141.0, "text": " tiene el problema que despu\u00e9s de la muerte de Fanimique, una mujer tan absolutamente multifac\u00e9tica,"}, {"start": 2141.0, "end": 2152.0, "text": " incansable, talentosa, como actriz, como productora, como gestora cultural, como una mujer capaz de movilizar"}, {"start": 2152.0, "end": 2160.0, "text": " una ciudad entorno a la figura del teatro, pues es muy dif\u00edcil reemplazar ese esp\u00edritu tan"}, {"start": 2160.0, "end": 2168.0, "text": " impatible, sin embargo el festival y euroamericano de teatro continuar\u00e1 con una gran calidad, con el homenaje a ella,"}, {"start": 2168.0, "end": 2178.0, "text": " y a\u00f1os despu\u00e9s de la muerte de Fanimique, el festival ha continuado trayendo grand\u00edsimos huestras del teatro del mundo,"}, {"start": 2178.0, "end": 2186.0, "text": " que les dec\u00eda el teatro de Valleinclan del grupo La Saranda, o sea, todo el mundo que ha tenido que ver con la excelencia"}, {"start": 2186.0, "end": 2194.0, "text": " del teatro ha venido a un festival y euroamericano de teatro de todas partes, teatro de Croacia, teatro de un gr\u00eda,"}, {"start": 2194.0, "end": 2204.0, "text": " teatro de todas las compa\u00f1\u00edas m\u00e1s impresionantes, una vez un montaje de los japoneses que hac\u00edan una interpretaci\u00f3n de unas gaysas,"}, {"start": 2204.0, "end": 2214.0, "text": " ante un grupo de hombres representando a medea, hemos visto unas medias, la cosa m\u00e1s impresionante desde mujeres de Burkina Faso,"}, {"start": 2214.0, "end": 2224.0, "text": " hasta espect\u00e1culos multimedia, que muestran todas las diferentes formas 
de medea teatro cl\u00e1sico, Shakespeare, teatro, griego,"}, {"start": 2224.0, "end": 2236.0, "text": " hemos visto realmente los grupos m\u00e1s impresionantes, el festival de teatro, pues como todas las ferias y las fiestas, va a trav\u00e9s de la pandemia,"}, {"start": 2236.0, "end": 2248.0, "text": " y con la pandemia, pues se vio tremendamente debilitado, entonces ahorita el festival euroamericano de teatro est\u00e1 renaciendo,"}, {"start": 2248.0, "end": 2256.0, "text": " poco a poco, con gran dificultad, porque la pandemia supuso para todos los festivales, un obst\u00e1culo terrible,"}, {"start": 2256.0, "end": 2261.0, "text": " porque no es que no me puede ver la gente, entonces adem\u00e1s que en la \u00e9poca de la virtualidad,"}, {"start": 2261.0, "end": 2268.0, "text": " usted se imagina lo valioso que es el teatro, vers\u00e9res humanos de carne hueso hablando, le haceres humanos de carne hueso"}, {"start": 2268.0, "end": 2275.0, "text": " sentados en butacas en frente a una distancia donde se les pueden ver los rostros y los gestos a los tractores,"}, {"start": 2275.0, "end": 2284.0, "text": " la magia del teatro, uno de los compa\u00f1\u00edas del teatro dinden a marca, una vez vino con una propuesta de rotarla muerte,"}, {"start": 2284.0, "end": 2290.0, "text": " un hombre que quer\u00eda de rotarla muerte y descubre que el \u00fanico lugar donde se puede rotarla muerte es en el teatro,"}, {"start": 2290.0, "end": 2298.0, "text": " porque en el teatro no existe la muerte, entonces en el teatro se resuelven los dilemas existenciales de la alma humana,"}, {"start": 2298.0, "end": 2308.0, "text": " que fuera de las artes esc\u00e9nicas y de los tablados, no tienen como contarse, entonces resucitar, digamos,"}, {"start": 2308.0, "end": 2314.0, "text": " porque despu\u00e9s de la pandemia, todo es una resurrecci\u00f3n, despu\u00e9s de la pandemia, todos volver\u00e1n a ser,"}, {"start": 2314.0, "end": 2320.0, "text": " porque la pandemia nos quit\u00f3 la presencialidad, nos quit\u00f3 la mirada y nos convirti\u00f3 en pantallas,"}, {"start": 2320.0, "end": 2329.0, "text": " en pantallas abstractas de personas puestas en cuadritos, frente a un computador o a un celular,"}, {"start": 2329.0, "end": 2339.0, "text": " que no se parece nada a lo que huele en sienten, transpieran, aula, ni miran los seres humanos cuando se encuentran los unos a los otros,"}, {"start": 2339.0, "end": 2349.0, "text": " eso es lo que es el teatro, entonces claro, estamos intentando volver a ser teatro sin embargo llegar a un cosas tan hermosas como un pin ocho,"}, {"start": 2349.0, "end": 2360.0, "text": " de Aralarios, compa\u00f1\u00eda inglesa que trajo un pin ocho, completamente divino, divino al Jorge Luis Sergaytan en una maestr\u00eda del teatro del cuerpo,"}, {"start": 2360.0, "end": 2372.0, "text": " verdaderamente po\u00e9tica con una m\u00fasica que le daba una proyecci\u00f3n on\u00edrica y casi alucinante a este teatro del cuerpo maravilloso,"}, {"start": 2372.0, "end": 2379.0, "text": " que ha sido pin ocho, entonces siguen llegando las grandes obras, es un tiempo de reconstrucci\u00f3n,"}, {"start": 2379.0, "end": 2386.0, "text": " la pandemia lo implican todo sentido, para el festival y el americano de teatro tambi\u00e9n es un tiempo de reconstrucci\u00f3n,"}, {"start": 2386.0, "end": 2396.0, "text": " adem\u00e1s porque como es una propuesta es en yca tan grande y requiere ese nivel de apoyo y de capital mixto para mantenerse"}, {"start": 2396.0, "end": 2407.0, "text": " en el nivel 
de grandeza que nos ha acostumbrado a tener y en la calidad de obras que hemos visto del mundo intero y de la historia del teatro,"}, {"start": 2407.0, "end": 2418.0, "text": " pues hay que volver a tener el acto de fe, que tuvimos para poder creer en el teatro en la \u00e9poca de las bombas y atrevernos a ir a la plaza de Bol\u00edvar,"}, {"start": 2418.0, "end": 2426.0, "text": " a riesgo de lo que fuera para aprender a confiar en el derecho a los espacios p\u00fablicos y en el derecho a so\u00f1ar y a derrotar el miedo,"}, {"start": 2426.0, "end": 2437.0, "text": " hay que volver a tener ese acto de fe para crear de nuevo el teatro que nos ha hecho una ciudad on\u00edrica en un festival y un americano de teatro"}, {"start": 2437.0, "end": 2445.0, "text": " la gente no est\u00e1 hablando de nada m\u00e1s y no de obras, entonces todas las intrigas cotidianas, las rencillas pol\u00edticas,"}, {"start": 2445.0, "end": 2458.0, "text": " todo aquello que nos aqueja incluso el tr\u00e1fico pasa a segundo plano la gente solamente est\u00e1 hablando de las obras y se que vio una obra de Belhika absolutamente maravillosa que se vio"}, {"start": 2458.0, "end": 2470.0, "text": " te austra australiano feminista donde una mujer hablaba en una casa donde estaba un hombre que era una figura de papel machelle y en del peri\u00f3dico y ella le hablaba y le hablaba y le hablaba"}, {"start": 2470.0, "end": 2481.0, "text": " y en una cosa estaba leyendo el peri\u00f3dico y ella iba a salir de la casa y la puerta estaba tapiada, las ventanas estaban tapiadas y ella hablaba con un espectro que le\u00eda el peri\u00f3dico"}, {"start": 2481.0, "end": 2509.0, "text": " historias absolutamente incre\u00edbles hemos visto en el festival de teatro"}, {"start": 2541.0, "end": 2561.0, "text": " el festival y un americano de teatro aparte de una gran producci\u00f3n art\u00edstica cultural gigantesca es ante todo y fundamentalmente un esp\u00edritu"}, {"start": 2561.0, "end": 2577.0, "text": " es el esp\u00edritu de fanimiki es el esp\u00edritu invencible de una sociedad que vence el miedo a trav\u00e9s del arte y saca de nosotros uno de los elementos m\u00e1s impresionantes que tenemos en Colombia la resiliencia"}, {"start": 2577.0, "end": 2607.0, "text": " el ser capaz de vencer todos todos los obst\u00e1culos de la historia que se nos han puesto a nosotros particularmente duros entonces el festival y un americano de teatro nos pone en la sinton\u00eda de la maravilla del arte humano hab\u00eda una obra de un grupo croata que registraba la decadencia de un grupo que hab\u00eda pasado ya por los mejores tiempos y se hizo en un teatro que en ese momento estaba para ser remodelado el faig"}, {"start": 2607.0, "end": 2636.0, "text": " de la insa pero en el momento que se hizo la obra el teatro estaba casi en ruenas y como el grupo era un grupo de cadencia la sinton\u00eda de la historia del grupo que ya ha perdido sus mejores tiempos con un teatro que otrora fue grandioso y que en ese momento estaba es muy muy a las condiciones y despu\u00e9s ser\u00eda remodelado y una manera maravillosa generaba una reverberaci\u00f3n de la decadencia y el fracaso y en los tiempos"}, {"start": 2636.0, "end": 2664.0, "text": " y dos que en mi representaban la compa\u00f1\u00eda de teatro hemos visto teatro del cuerpo hemos visto teatro de la m\u00e1s pura y cl\u00e1sica expresi\u00f3n hemos visto teatro de la mayor cantidad de innovaci\u00f3n obras de irra y logras de australia que nos han tra\u00eddo una nueva mirada de todas las 
posibilidades de combinaci\u00f3n de artes esc\u00e9nicas"}, {"start": 2664.0, "end": 2679.0, "text": " lo vimos evolucionar a lo largo de los festivales vimos como cada una de las posibilidades del teatro se fueron haciendo cada vez m\u00e1s diversas m\u00e1s complejas m\u00e1s poderosas m\u00e1s profusas"}, {"start": 2679.0, "end": 2698.0, "text": " toda la memorabilia del teatro o sea esto es una epopeia de ciudad y de vida y por eso nosotros ponemos un acto de fe en la \u00e9poca del teatro despu\u00e9s de la pandemia para volver a sacar adelante algo que ha sido"}, {"start": 2698.0, "end": 2721.0, "text": " el canto m\u00e1s grande de resistencia de arte de cosmopolitanismo porque en ese momento nos volvemos una ciudad cosmopolita se hablan miles de idiomas se requieren cualquier cantidad de interpretes la infraestructura de estos teatros se requiere cualquier cantidad de gente ayudando en los escenarios"}, {"start": 2721.0, "end": 2741.0, "text": " porque hay que adaptar escenarios enormes para el quinto centenario se hicieron con gruas una gran gran gran celebraci\u00f3n del grupo de juzglar de Catalunya en la plaza de boliva los cierres m\u00e1s impresionantes con los juegos de luces de sancos de teatro de todas las maravillas"}, {"start": 2741.0, "end": 2770.0, "text": " hemos visto el asombro de lo que el arte pod\u00eda hacer en el esp\u00edritu humano a trav\u00e9s de uno de los milagros m\u00e1s grandes que ha sido el festival y vero americano de teatro de Bogot\u00e1 por eso con todo el cari\u00f1o a fan y mi quiarra mi nosorio a todos los teatreros que han hecho de nosotros una posibilidad vital a toda la gente que ha recorrido las calles en busca del teatro callejero"}, {"start": 2770.0, "end": 2790.0, "text": " a la gente que sigue con el esp\u00edritu de mantener viva la fe en el teatro y que hace su mejor esfuerzo para resucitarlo de la pandemia y volver a evocar los esp\u00edritus m\u00e1s poderosos que se encuentran en el mundo de teatro a todos los actores que se quedaron en Colombia vinieron por las obras y se quedaron en Colombia"}, {"start": 2790.0, "end": 2805.0, "text": " porque pues el que se enamora de nosotros no lo puede remediar o sea eso es una cosa que no despu\u00e9s ya nada les sabe cuando nos aprenden a querer muchos actores se quedaron ac\u00e1 llegaron en un h\u00edbero americano y quedaron absolutamente inlutizados"}, {"start": 2805.0, "end": 2821.0, "text": " vimos el berl\u00ednere en sable en las primeras ediciones traer la \u00f3pera de tres centavos que eso es como ver el original de un cuadro de diga o demon\u00e9 la primera vez que el arte se puede ver de primera mano y no en dibujitos"}, {"start": 2821.0, "end": 2834.0, "text": " o en la minitas o en libritos sino de verdad las texturas del arte eso era ver el el berl\u00ednere en sable con la \u00f3pera de los tres centavos cuando yo llegaran todav\u00eda no hay ac\u00e1 y el muro de orl\u00edn"}, {"start": 2834.0, "end": 2847.0, "text": " ese tipo de miserias no la ten\u00edan en la aleman\u00eda oriental ten\u00edan otras pero no esas as\u00ed que para ellos ver la cantidad y habitantes de la calle que se parec\u00edan tanto a la puesta en escena"}, {"start": 2847.0, "end": 2855.0, "text": " que se volvallaba una sombr\u00f3 inesperado la \u00f3pera de tres centavos de ver tolprec con la m\u00fasica de curvele"}, {"start": 2855.0, "end": 2868.0, "text": " entonces todo lo que les puede contar de lo que hemos visto sigue qued\u00e1ndose corto para el espectro de arte pluralidad diversidad 
multiculturalidad"}, {"start": 2868.0, "end": 2892.0, "text": " y cosmopolitanismo que este festival ha tra\u00eddo a nuestro alma y con la fe de recuperarlo de los tiempos de la pandemia para volver a sentir la poes\u00eda y el teatro en escena y volver a llenarnos de toda la fuerza que s\u00f3lo el teatro nos puede traer en el alma y m\u00e1s en estos tiempos que se hacen tan sombr\u00edos"}, {"start": 2928.0, "end": 2933.0, "text": " y"}, {"start": 2933.0, "end": 2938.0, "text": " y"}, {"start": 2938.0, "end": 2943.0, "text": " y"}, {"start": 2943.0, "end": 2948.0, "text": " y"}, {"start": 2948.0, "end": 2952.0, "text": " y"}, {"start": 2952.0, "end": 2959.0, "text": " y"}, {"start": 2959.0, "end": 2960.0, "text": " y"}, {"start": 2960.0, "end": 2963.0, "text": " y"}, {"start": 2963.0, "end": 2967.0, "text": " y"}, {"start": 2967.0, "end": 2972.0, "text": " y"}, {"start": 2972.0, "end": 2976.0, "text": " y"}, {"start": 2976.0, "end": 2983.0, "text": " y"}, {"start": 2983.0, "end": 2988.0, "text": " y"}, {"start": 2988.0, "end": 2993.0, "text": " y"}, {"start": 2993.0, "end": 2998.0, "text": " y"}, {"start": 2998.0, "end": 3005.0, "text": " y"}, {"start": 3005.0, "end": 3008.0, "text": " y"}, {"start": 3008.0, "end": 3015.0, "text": " y"}, {"start": 3015.0, "end": 3018.0, "text": " y"}, {"start": 3018.0, "end": 3022.0, "text": " y"}, {"start": 3022.0, "end": 3029.0, "text": " y"}, {"start": 3029.0, "end": 3032.0, "text": " y"}, {"start": 3032.0, "end": 3039.0, "text": " y"}, {"start": 3039.0, "end": 3042.0, "text": " y"}, {"start": 3042.0, "end": 3045.0, "text": " y"}, {"start": 3045.0, "end": 3048.0, "text": " y"}, {"start": 3048.0, "end": 3051.0, "text": " y"}, {"start": 3051.0, "end": 3052.0, "text": " y"}, {"start": 3052.0, "end": 3055.0, "text": " y"}, {"start": 3055.0, "end": 3058.0, "text": " y"}, {"start": 3058.0, "end": 3060.0, "text": " y"}, {"start": 3060.0, "end": 3063.0, "text": " y"}, {"start": 3063.0, "end": 3066.0, "text": " y"}, {"start": 3066.0, "end": 3067.0, "text": " y"}, {"start": 3067.0, "end": 3070.0, "text": " y"}, {"start": 3070.0, "end": 3073.0, "text": " y"}, {"start": 3073.0, "end": 3075.0, "text": " y"}, {"start": 3075.0, "end": 3079.0, "text": " y"}, {"start": 3079.0, "end": 3082.0, "text": " y"}, {"start": 3082.0, "end": 3085.0, "text": " y"}, {"start": 3085.0, "end": 3088.0, "text": " y"}, {"start": 3088.0, "end": 3090.0, "text": " y"}, {"start": 3090.0, "end": 3093.0, "text": " y"}, {"start": 3093.0, "end": 3095.0, "text": " y"}, {"start": 3095.0, "end": 3098.0, "text": " y"}, {"start": 3098.0, "end": 3100.0, "text": " y"}, {"start": 3100.0, "end": 3102.0, "text": " y"}, {"start": 3102.0, "end": 3107.0, "text": " y"}, {"start": 3107.0, "end": 3109.0, "text": " y"}, {"start": 3109.0, "end": 3112.0, "text": " y"}, {"start": 3112.0, "end": 3116.0, "text": " y"}, {"start": 3116.0, "end": 3119.0, "text": " y"}, {"start": 3119.0, "end": 3122.0, "text": " y"}, {"start": 3122.0, "end": 3126.0, "text": " y"}, {"start": 3126.0, "end": 3130.0, "text": " y"}, {"start": 3130.0, "end": 3133.0, "text": " a"}, {"start": 3133.0, "end": 3140.0, "text": " y"}, {"start": 3140.0, "end": 3144.0, "text": " y"}, {"start": 3144.0, "end": 3145.0, "text": " y"}, {"start": 3145.0, "end": 3147.0, "text": " y"}, {"start": 3147.0, "end": 3149.56, "text": " y"}, {"start": 3149.56, "end": 3149.0, "text": " y"}, {"start": 3149.0, "end": 3152.0, "text": " y"}, {"start": 3152.0, "end": 3153.0, "text": " y"}, {"start": 3153.0, "end": 3156.0, "text": " y"}, {"start": 3156.0, "end": 3156.0, 
"text": " y"}, {"start": 3158.0, "end": 3159.0, "text": " y"}, {"start": 3159.0, "end": 3166.68, "text": " los ojos y las miradas del acto m\u00e1s existencial, poderoso, fuerte y m\u00e1gico que es el teatro"}, {"start": 3166.68, "end": 3176.6, "text": " desde la fant\u00e1stica magia inagotable de Fanemiki desde el milagro que el teatro trajo"}, {"start": 3176.6, "end": 3183.36, "text": " a una ciudad intimidad por el terror que a partir del arte aprendi\u00f3 a so\u00f1ar y a ser capas"}, {"start": 3183.36, "end": 3190.36, "text": " de proyectarse con la valent\u00eda, quedan los tablados en los lugares donde la muerte no existe"}, {"start": 3190.36, "end": 3195.52, "text": " porque todo se representa en el teatro en la narraci\u00f3n de Anaurib, y para ustedes"}, {"start": 3195.52, "end": 3199.52, "text": " feliz d\u00eda cualquiera que se deshacera."}, {"start": 3199.52, "end": 3214.44, "text": " Este podcast fue posible gracias al equipo de la Casa de Historia, y a las Su\u00e1rez,"}, {"start": 3214.44, "end": 3220.68, "text": " Milenabel Tr\u00e1n, Arturo Jim\u00e9nez Vigna, Daniel Moreno Franco, grabado en los gatos"}, {"start": 3220.68, "end": 3226.4, "text": " estudio la adici\u00f3n y la musicalizaci\u00f3n de Eduardo Corredor Fonseca, de rueda sonido"}, {"start": 3226.4, "end": 3231.92, "text": " y siempre con la ayuda fuerte y poderosa de Santiago Espinoza Uribe y Laura Rojas"}]
Diana Uribe
https://www.youtube.com/watch?v=_e2AY70KrKU
Adiós, señor Haffmann
#miercolesdecine Adiós, señor Haffmann. Paris, 1942. François Mercier is an ordinary man whose only ambition is to start a family with the woman he loves, Blanche. He is also the employee of a talented jeweler, Mr. Haffmann. But under the German occupation, the two men have no choice but to strike a deal whose consequences, over the following months, will alter the fate of our three characters. (FILMAFFINITY) Follow us on our social media! Facebook: https://www.facebook.com/DianaUribe.fm/ Instagram: https://www.instagram.com/dianauribefm/?hl=es-la Twitter: https://twitter.com/dianauribefm?lang=es Website: https://www.dianauribe.fm
Buenas, hoy en mi árcoles de cine les tenemos una cantidad de dilemas éticos, es una película francesa sobre una historia durante la ocupación alemana que es un capítulo muy muy complejo de la historia de los franceses y que siempre que se hace algo sobre ese tema entramos en una cantidad de aristas y de terrenos movilizos porque las osea francesas expuse a una cantidad de cosas que trae la guerra los nazis entran en europa y francia en ese momento decide bajo el mando del mariscal petal que fue el héroe de la primera guerra mundial y el héroe de verdum decide rendirse a los alemanes es una de las decisiones más difíciles del mundo porque pues lo consideraron un traidor el hombre que había sido el héroe de la batalla de verdum pero él decía que era eso la destrucción total y absoluta de francia que siempre es una decisión que un dirigente no quiere echarse a los hombros total francia se rinde y entonces cuando francia se rinde hay una francia ocupada que es la francia de bichi y de gol va a crear una resistencia se va a ir a Londres y va a decir que el que no estoy acuerdo con la rendición de francia que haya haga coler una resistencia y esa va a ser la resistencia a la francia ocupada entonces francia a tener dos sectores un sector digamos libre que es un sector que está en combate y un sector ocupado en ese sector ocupado los nazis hacen lo que quieren porque es que lo más cuestionable de todo fue la forma como se les entregó el país la forma como entrar a una controlar absolutamente todo y como el gobierno perdió realmente la soberanía y la capacidad de decidir sobre los destinos de francia y como el antisemitismo se dio garra ya y los judíos fueron llevados a los campos de concentración como en toda Europa pero en francia la cosa fue muy fuerte porque había un antisemitismo anterior y esto esto fue grave entonces hubo quienes colaboraron quienes se plegaron a los nazis y cuando termine la guerra eso va a ser el estigma más grande de todos quienes colaboraron y quienes resistieron ese es como el marco general quien se fue paulado quien se fue para el otro porque eso determina quien resistió o quien aceptó la ocupación nazi y toda la villanía que con ella vino en ese contexto macro hay una cantidad de pequeñas historias porque además los alemanes estuvieron en las casas en la vida en el campo estaban dentro de las casas de las mujeres francesas mientras los hombres estaban en la guerra hubo muchas circunstancias que hicieron que el hilo no fuera tan claro entre una cosa y otra pero en la ciudad de 100 parís estamos hablando de un hombre próspero que tiene una joyería que les un gran joyero un tipo con muy buena habilidad y es un nombre de negocios tiene una familia de su esposa y cuatro hijos tiene un empleado un trabajador que tiene una esposa con la que quiere ser feliz y que es un hombre digamos modesto el trabajador y tiene su esposa toda bonita y ya y no resulta que llegan los nazis y resulta que ocupan la ciudad y van por todos los negocios judíos entonces este hombre el señor jafman porque nuestra película se llama el adiós del señor jafman o adiós señor jafman si en ese adiós y es que es y es jafman resulta que este hombre le propone al trabajador un trueque le dice yo le hago una compra venta de la joyería usted me la compra la ponía su nombre cuando termine la guerra usted me la devuelve y yo le permito usted que haga un negocio independiente le doy las bases para hacer su propio negocio para no perder el trabajo de toda una vida y para no perder el trabajo de generaciones de 
joyeros porque si no simplemente llegan los nazis destruyan el negocio aquí avan todo que man eso y ya entonces era un recurso para poder salvar el trabajo de toda una vida entonces él hace eso logra que la mujer y los cuatro hijos salgan pero en el momento en que él va a salir ya no puede ya tarde entonces le toca quedarse en el sotano y al quedarse en el sotano mientras el trabajador el antiguo trabajador y su esposa van a vivir a la casa y a la joyería se cambian los papeles y el otro se vuelve el propietario y este se vuelve un hombre refugiado en el sotano y ahí empiezan a cambiarse los papeles y se juegan todos los dilemas que tienen los franceses con ese capítulo de su historia la colaboración porque al trabajador le va divinamente con los nazis que han encantados con las joyas y empiezan al llevarlo a este nivel social de las fuerzas de ocupación que tenían acceso a todo al champana los grandes eventos que estaban rodeados siempre de mujeres muy hermosas que estuvieron con ellos que eso sería una historia baravicina a las mujeres que estuvieron con los alemanes y que se acostaron con los alemanes durante la guerra después la reparían y les harían escázmios públicos terribles o sea lo que se dan a cobrar los franceses los unos a los otros cuando termine la guerra y paricia liberada es uno de los capítulos más duros de la historia esos han a una edición que todavía levanta ampolla con el le dijo y esto se va forjando en que asusté durante la ocupación que hacía usted ahí entonces este señor humilde como dueño de la joyería como un hombre frecuentado por los nazis que en ese momento son el poder empieza a subirsele a la cabeza todo el dinero el poder el el prestigio social todas aquellas cosas que él siempre creyó que merecía y la vida le había negado porque además era un hombre que tenía una discapacidad entonces no podía hacer por ese motivo no podía hacer movilizado mientras tanto el señor javman va pasando por toda la pobreza el confinamiento incluso casi la esclavitud y se voltea toda la fortuna se voltea toda la posición de la vida y el uno queda arriba y el otro queda abajo pero sin embargo la situación está en delicada tan explosiva y tan móvil que eso no se va a quedar desde tamaño entonces hay un pacto que es el que nos les va a contar porque eso se trata la película si hay un pacto que le propone el trabajador al señor javman con respecto a su esposa con quien quiere tener un hijo que hace que las cosas se compliquen infinitamente entonces esta esposa esta mujer frágil huesuda delicada que se ve abocada a todas las circunstancias en las las cosas se van desembolviendo sin que a ella nadie le esté preguntando realmente nada todas las decisiones que se van tomando sobre ella y sobre su vida sin que ella realmente sea consultada con todo lo que se está proponiendo planeando ahí esta mujer termina siendo aquella que realmente desida el destino de todos esta mujer va a terminar cumpliendo un papel importantísimo poniendo su principio y su ética frente a la situación de ventaja en la que se vio en vuelta y todo se vuelve una paradoja y todo se voltea y toda la suerte empieza a darle la espalda y una cantidad de situaciones se van a debilar en un momento cumbre cuando va pasando el tiempo mientras este hombre está confinado en el sópano de la casa que fue la joyería y la vida de él antes de la ocupación alemana entonces qué pasa que la guerra pone a la gente en dilemas imprevistos que las personas en tiempos de guerra hacen piensan y se ven abocadas a situaciones que nunca en la 
vida se hubieran imaginado que tendrían que decidir que la guerra en lo que es todo y esto que esto no es la violencia de la guerra sino la ocupación que es el sometimiento de las ciudades a la voluntad total y a la arbitrariedad de los nazis y es este tiempo tan álgido de la ocupación de parís y estos personajes que se ven en el azar de la historia de una ciudad ocupada y eso les mueve toda la ética y toda la posibilidad que en colabora quien resiste quien entiende quien mantiene la dignidad bueno ahí entran el juega todos los dilemas que los franceses todavía se preguntan por qué ese capítulo ellos nunca lo terminaron de cerrar cada vez que se hace una película esta los franceses se rascan la cabeza porque son cosas que todavía los extremesen aún después de tanto tiempo es una película bien interesante intimista una gran actuación y es una de esas historias donde no está pendiente de lo que está pasando porque a cada minuto la situación va cambiando y va tomando rumbo completamente inesperados el adiós al señor Halfman es nuestro película hoy el miércoles de cine
[{"start": 0.0, "end": 18.48, "text": " Buenas, hoy en mi \u00e1rcoles de cine les tenemos una cantidad de dilemas \u00e9ticos, es una pel\u00edcula"}, {"start": 18.48, "end": 32.28, "text": " francesa sobre una historia durante la ocupaci\u00f3n alemana que es un cap\u00edtulo muy muy complejo de la"}, {"start": 32.28, "end": 39.44, "text": " historia de los franceses y que siempre que se hace algo sobre ese tema entramos en una cantidad"}, {"start": 39.44, "end": 46.56, "text": " de aristas y de terrenos movilizos porque las osea francesas expuse a una cantidad de cosas que"}, {"start": 46.56, "end": 55.040000000000006, "text": " trae la guerra los nazis entran en europa y francia en ese momento decide bajo el mando del"}, {"start": 55.040000000000006, "end": 61.56, "text": " mariscal petal que fue el h\u00e9roe de la primera guerra mundial y el h\u00e9roe de verdum decide"}, {"start": 61.56, "end": 70.0, "text": " rendirse a los alemanes es una de las decisiones m\u00e1s dif\u00edciles del mundo porque pues lo consideraron"}, {"start": 70.0, "end": 78.0, "text": " un traidor el hombre que hab\u00eda sido el h\u00e9roe de la batalla de verdum pero \u00e9l dec\u00eda que era"}, {"start": 78.0, "end": 83.88, "text": " eso la destrucci\u00f3n total y absoluta de francia que siempre es una decisi\u00f3n que un dirigente no"}, {"start": 83.88, "end": 91.72, "text": " quiere echarse a los hombros total francia se rinde y entonces cuando francia se rinde hay una"}, {"start": 91.72, "end": 100.03999999999999, "text": " francia ocupada que es la francia de bichi y de gol va a crear una resistencia se va a ir a"}, {"start": 100.03999999999999, "end": 105.92, "text": " Londres y va a decir que el que no estoy acuerdo con la rendici\u00f3n de francia que haya haga coler una"}, {"start": 105.92, "end": 111.03999999999999, "text": " resistencia y esa va a ser la resistencia a la francia ocupada entonces francia a tener dos"}, {"start": 111.03999999999999, "end": 118.88, "text": " sectores un sector digamos libre que es un sector que est\u00e1 en combate y un sector ocupado en"}, {"start": 118.88, "end": 126.11999999999999, "text": " ese sector ocupado los nazis hacen lo que quieren porque es que lo m\u00e1s cuestionable de todo fue"}, {"start": 126.11999999999999, "end": 134.88, "text": " la forma como se les entreg\u00f3 el pa\u00eds la forma como entrar a una controlar absolutamente todo y"}, {"start": 134.88, "end": 142.6, "text": " como el gobierno perdi\u00f3 realmente la soberan\u00eda y la capacidad de decidir sobre los destinos de"}, {"start": 142.6, "end": 150.4, "text": " francia y como el antisemitismo se dio garra ya y los jud\u00edos fueron llevados a los campos de"}, {"start": 150.4, "end": 157.6, "text": " concentraci\u00f3n como en toda Europa pero en francia la cosa fue muy fuerte porque hab\u00eda un antisemitismo"}, {"start": 157.6, "end": 167.12, "text": " anterior y esto esto fue grave entonces hubo quienes colaboraron quienes se plegaron a los nazis"}, {"start": 167.12, "end": 173.92000000000002, "text": " y cuando termine la guerra eso va a ser el estigma m\u00e1s grande de todos quienes colaboraron y"}, {"start": 173.92000000000002, "end": 179.96, "text": " quienes resistieron ese es como el marco general quien se fue paulado quien se fue para el otro"}, {"start": 179.96, "end": 187.88, "text": " porque eso determina quien resisti\u00f3 o quien acept\u00f3 la ocupaci\u00f3n nazi y toda la villan\u00eda que con"}, {"start": 187.88, "end": 195.64000000000001, "text": " ella vino 
en ese contexto macro hay una cantidad de peque\u00f1as historias porque adem\u00e1s los alemanes"}, {"start": 195.64, "end": 202.51999999999998, "text": " estuvieron en las casas en la vida en el campo estaban dentro de las casas de las mujeres francesas"}, {"start": 202.51999999999998, "end": 208.6, "text": " mientras los hombres estaban en la guerra hubo muchas circunstancias que hicieron que el hilo no"}, {"start": 208.6, "end": 215.44, "text": " fuera tan claro entre una cosa y otra pero en la ciudad de 100 par\u00eds estamos hablando de un hombre"}, {"start": 215.44, "end": 223.79999999999998, "text": " pr\u00f3spero que tiene una joyer\u00eda que les un gran joyero un tipo con muy buena habilidad y es un"}, {"start": 223.8, "end": 232.76000000000002, "text": " nombre de negocios tiene una familia de su esposa y cuatro hijos tiene un empleado un trabajador que"}, {"start": 232.76000000000002, "end": 242.72000000000003, "text": " tiene una esposa con la que quiere ser feliz y que es un hombre digamos modesto el trabajador y"}, {"start": 242.72000000000003, "end": 250.08, "text": " tiene su esposa toda bonita y ya y no resulta que llegan los nazis y resulta que ocupan la ciudad"}, {"start": 250.08, "end": 258.32, "text": " y van por todos los negocios jud\u00edos entonces este hombre el se\u00f1or jafman porque nuestra pel\u00edcula"}, {"start": 258.32, "end": 266.1, "text": " se llama el adi\u00f3s del se\u00f1or jafman o adi\u00f3s se\u00f1or jafman si en ese adi\u00f3s y es que es"}, {"start": 266.1, "end": 276.04, "text": " y es jafman resulta que este hombre le propone al trabajador un trueque le dice yo le hago una"}, {"start": 276.04, "end": 283.20000000000005, "text": " compra venta de la joyer\u00eda usted me la compra la pon\u00eda su nombre cuando termine la guerra usted"}, {"start": 283.20000000000005, "end": 288.6, "text": " me la devuelve y yo le permito usted que haga un negocio independiente le doy las bases para"}, {"start": 288.6, "end": 294.72, "text": " hacer su propio negocio para no perder el trabajo de toda una vida y para no perder el trabajo de"}, {"start": 294.72, "end": 300.96000000000004, "text": " generaciones de joyeros porque si no simplemente llegan los nazis destruyan el negocio aqu\u00ed"}, {"start": 300.96, "end": 307.71999999999997, "text": " avan todo que man eso y ya entonces era un recurso para poder salvar el trabajo de toda una vida"}, {"start": 307.71999999999997, "end": 315.68, "text": " entonces \u00e9l hace eso logra que la mujer y los cuatro hijos salgan pero en el momento en que \u00e9l"}, {"start": 315.68, "end": 322.35999999999996, "text": " va a salir ya no puede ya tarde entonces le toca quedarse en el sotano y al quedarse en el"}, {"start": 322.35999999999996, "end": 328.64, "text": " sotano mientras el trabajador el antiguo trabajador y su esposa van a vivir a la casa y a la"}, {"start": 328.64, "end": 335.59999999999997, "text": " joyer\u00eda se cambian los papeles y el otro se vuelve el propietario y este se vuelve un hombre"}, {"start": 335.59999999999997, "end": 345.2, "text": " refugiado en el sotano y ah\u00ed empiezan a cambiarse los papeles y se juegan todos los dilemas que"}, {"start": 345.2, "end": 351.47999999999996, "text": " tienen los franceses con ese cap\u00edtulo de su historia la colaboraci\u00f3n porque al trabajador le va"}, {"start": 351.47999999999996, "end": 358.59999999999997, "text": " divinamente con los nazis que han encantados con las joyas y empiezan al llevarlo a este"}, {"start": 358.6, "end": 
366.52000000000004, "text": " nivel social de las fuerzas de ocupaci\u00f3n que ten\u00edan acceso a todo al champana los grandes eventos"}, {"start": 366.52000000000004, "end": 371.84000000000003, "text": " que estaban rodeados siempre de mujeres muy hermosas que estuvieron con ellos que eso"}, {"start": 371.84000000000003, "end": 376.96000000000004, "text": " ser\u00eda una historia baravicina a las mujeres que estuvieron con los alemanes y que se acostaron"}, {"start": 376.96000000000004, "end": 381.88, "text": " con los alemanes durante la guerra despu\u00e9s la repar\u00edan y les har\u00edan esc\u00e1zmios p\u00fablicos"}, {"start": 381.88, "end": 388.56, "text": " terribles o sea lo que se dan a cobrar los franceses los unos a los otros cuando termine la guerra"}, {"start": 388.56, "end": 394.52, "text": " y paricia liberada es uno de los cap\u00edtulos m\u00e1s duros de la historia esos han a una"}, {"start": 394.52, "end": 399.92, "text": " edici\u00f3n que todav\u00eda levanta ampolla con el le dijo y esto se va forjando en que asust\u00e9"}, {"start": 399.92, "end": 406.32, "text": " durante la ocupaci\u00f3n que hac\u00eda usted ah\u00ed entonces este se\u00f1or humilde como due\u00f1o de la"}, {"start": 406.32, "end": 414.56, "text": " joyer\u00eda como un hombre frecuentado por los nazis que en ese momento son el poder empieza a"}, {"start": 414.56, "end": 423.03999999999996, "text": " subirsele a la cabeza todo el dinero el poder el el prestigio social todas aquellas cosas que \u00e9l"}, {"start": 423.03999999999996, "end": 428.6, "text": " siempre crey\u00f3 que merec\u00eda y la vida le hab\u00eda negado porque adem\u00e1s era un hombre que ten\u00eda una"}, {"start": 428.6, "end": 435.64, "text": " discapacidad entonces no pod\u00eda hacer por ese motivo no pod\u00eda hacer movilizado mientras tanto el"}, {"start": 435.64, "end": 445.36, "text": " se\u00f1or javman va pasando por toda la pobreza el confinamiento incluso casi la esclavitud y se"}, {"start": 445.36, "end": 453.32, "text": " voltea toda la fortuna se voltea toda la posici\u00f3n de la vida y el uno queda arriba y el otro"}, {"start": 453.32, "end": 460.36, "text": " queda abajo pero sin embargo la situaci\u00f3n est\u00e1 en delicada tan explosiva y tan m\u00f3vil que eso"}, {"start": 460.36, "end": 465.96000000000004, "text": " no se va a quedar desde tama\u00f1o entonces hay un pacto que es el que nos les va a contar porque eso"}, {"start": 465.96000000000004, "end": 473.88, "text": " se trata la pel\u00edcula si hay un pacto que le propone el trabajador al se\u00f1or javman con respecto a"}, {"start": 473.88, "end": 482.36, "text": " su esposa con quien quiere tener un hijo que hace que las cosas se compliquen infinitamente entonces"}, {"start": 482.36, "end": 491.28000000000003, "text": " esta esposa esta mujer fr\u00e1gil huesuda delicada que se ve abocada a todas las circunstancias en las"}, {"start": 491.28000000000003, "end": 498.24, "text": " las cosas se van desembolviendo sin que a ella nadie le est\u00e9 preguntando realmente nada todas las"}, {"start": 498.24, "end": 505.04, "text": " decisiones que se van tomando sobre ella y sobre su vida sin que ella realmente sea consultada con"}, {"start": 505.04, "end": 514.64, "text": " todo lo que se est\u00e1 proponiendo planeando ah\u00ed esta mujer termina siendo aquella que realmente"}, {"start": 514.64, "end": 523.6, "text": " desida el destino de todos esta mujer va a terminar cumpliendo un papel important\u00edsimo poniendo"}, {"start": 523.6, 
"end": 531.76, "text": " su principio y su \u00e9tica frente a la situaci\u00f3n de ventaja en la que se vio en vuelta y todo se vuelve"}, {"start": 531.76, "end": 541.48, "text": " una paradoja y todo se voltea y toda la suerte empieza a darle la espalda y una cantidad de situaciones"}, {"start": 541.48, "end": 547.4399999999999, "text": " se van a debilar en un momento cumbre cuando va pasando el tiempo mientras este hombre est\u00e1"}, {"start": 547.4399999999999, "end": 554.6, "text": " confinado en el s\u00f3pano de la casa que fue la joyer\u00eda y la vida de \u00e9l antes de la ocupaci\u00f3n"}, {"start": 554.6, "end": 564.6, "text": " alemana entonces qu\u00e9 pasa que la guerra pone a la gente en dilemas imprevistos que las personas"}, {"start": 564.6, "end": 570.48, "text": " en tiempos de guerra hacen piensan y se ven abocadas a situaciones que nunca en la vida se"}, {"start": 570.48, "end": 575.96, "text": " hubieran imaginado que tendr\u00edan que decidir que la guerra en lo que es todo y esto que esto no"}, {"start": 575.96, "end": 583.6800000000001, "text": " es la violencia de la guerra sino la ocupaci\u00f3n que es el sometimiento de las ciudades a la voluntad"}, {"start": 583.68, "end": 590.8399999999999, "text": " total y a la arbitrariedad de los nazis y es este tiempo tan \u00e1lgido de la ocupaci\u00f3n de"}, {"start": 590.8399999999999, "end": 598.9, "text": " par\u00eds y estos personajes que se ven en el azar de la historia de una ciudad ocupada y eso les"}, {"start": 598.9, "end": 605.7199999999999, "text": " mueve toda la \u00e9tica y toda la posibilidad que en colabora quien resiste quien entiende quien"}, {"start": 605.7199999999999, "end": 611.7199999999999, "text": " mantiene la dignidad bueno ah\u00ed entran el juega todos los dilemas que los franceses todav\u00eda se"}, {"start": 611.72, "end": 616.28, "text": " preguntan por qu\u00e9 ese cap\u00edtulo ellos nunca lo terminaron de cerrar cada vez que se hace una"}, {"start": 616.28, "end": 621.6800000000001, "text": " pel\u00edcula esta los franceses se rascan la cabeza porque son cosas que todav\u00eda los extremesen"}, {"start": 621.6800000000001, "end": 630.12, "text": " a\u00fan despu\u00e9s de tanto tiempo es una pel\u00edcula bien interesante intimista una gran actuaci\u00f3n y es"}, {"start": 630.12, "end": 635.6, "text": " una de esas historias donde no est\u00e1 pendiente de lo que est\u00e1 pasando porque a cada minuto la"}, {"start": 635.6, "end": 642.64, "text": " situaci\u00f3n va cambiando y va tomando rumbo completamente inesperados el adi\u00f3s al se\u00f1or"}, {"start": 642.64, "end": 668.36, "text": " Halfman es nuestro pel\u00edcula hoy el mi\u00e9rcoles de cine"}]
Diana Uribe
https://www.youtube.com/watch?v=UhgH09SGk-w
Festival del Porro y Festival Nacional de Gaitas
#podcastdianauribe #festivaldelporro In this third episode of the second season of our series on Colombia's fairs and festivals, we bring you two celebrations in a single episode: the Festival del Porro of San Pelayo (Córdoba) and the Festival de Gaitas of Ovejas (Sucre). We travel to the land of the Sinú, the "enchanted country of the waters," a territory of rivers, marshes, savannas, and hills. We will talk about fandangos, the Caribbean Sea, gaita music, porro, María Barilla, Francisco Llirene, the Gaiteros de San Jacinto, and a region that has resisted through thought, music, and culture. Episode notes: A description of the Sinú world, "the enchanted place of the waters" →https://www.banrep.gov.co/es/lugar-encantado-las-aguas-aspectos-economicos-cienaga-grande-del-bajo-sinu The traditions of the Festival Nacional del Porro in San Pelayo →https://www.colombia.com/turismo/ferias-y-fiestas/festival-nacional-del-porro/ A bit of history of the Festival Nacional de Gaitas in Ovejas, Sucre →https://www.calendariodecolombia.com/fiestas-nacionales/festival-nacional-de-gaitas-en-ovejas How Lucho Bermúdez spread the music of the Colombian Caribbean →https://www.radionacional.co/musica/lucho-bermudez-canciones-que-baila-colombia The differences and particularities of the Colombian gaita →https://musicalcedar.com/que-tanto-conoces-la-gaita-colombiana/ An incredible story: the legend of María Barilla →https://revistadiners.com.co/cultura/7909_la-leyenda-de-maria-barilla/#:~:text=Mar%C3%ADa%20Barilla%20fue%20una%20fandanguera,a%20principios%20del%20siglo%20XX. Follow us on our social media! Facebook: https://www.facebook.com/DianaUribe.fm/ Instagram: https://www.instagram.com/dianauribefm/?hl=es-la Twitter: https://twitter.com/dianauribefm?lang=es Website: https://www.dianauribe.fm
Buenas, bienvenidos y bienvenidas a uno de los festivales más maravillosos, más mágicos y ese festival de porro de San Pelayo. En realidad son dos festivales, los que vamos a narrar en un solo espacio con el relato de estos dos festivales, el festival de porro de San Pelayo y el festival nacional de gallitas de ovejas terminamos en nuestra serie de ferias y fiestas la región Caribe que la hemos estado visitando, gozando, bailando o yendo a través de sus múltiples manifestaciones lo hemos hecho con el Guín Mún Festival a la parte isleña de la archipiélago lo hemos hecho con el carnaval de barranquilla con el festival ballenato y ahora con estos dos con el festival de porro y con el festival nacional de gallitas de ovejas terminamos nuestra visita por el Caribe que es interminable pero pues son muchísimas más regiones las que nos habitan y las que nos suenan en la música entonces hoy estamos ahí para poder hablar de esto como nos vamos a meter con los ancestros como nos vamos a meter con el agua como nos vamos a meter con los enúves como vamos a estar en muchísimos territorios sagrados toca pedir permiso y respeto y honrar las tradiciones de todo lo que vamos a invocar acá porque esto más que un relato es una invocación porque aquí va a salir de todo aquí van a salir aguas van a salir espíritus van a salir mitos van a salir rebeldías van a salir diálogos van a salir espíritus poderosos e indomables pueblos resilientes que han aguantado absolutamente todo por el diálogo de la gaita y por su capacidad de ser pensantes y ser pueblos de diálogo y de pensamiento y de música y de danza estos relatos están atravesados por cualquier cantidad de historias porque pues es parte de todo lo nuestro no lo que lo que canta y lo que duele que se encuentran tantas veces en nuestros relatos ciféreas y fiestas vamos a empezar por la geografía vamos a empezar por la región donde ocurren estos dos festivales porque la región en sí misma es una magia infinita esta región es la savana y esos los montes de maría son los valles del San Jorge del sinú es la región que se conoce como el país encantado de las aguas es la región donde habitan los inúos o los inúos enúes un pueblo indígena prehispánico de tiempos inmemoriales que habitó los valles del río sinú del San Jorge y llegan incluso hasta elitoral caribe en todos los alrededores del golfo de morrosquillo y en los actuales departamentos que hoy son cordo a sucre y sur de bolivar entonces una región enorme este es pueblo de caos a la alfabrería y a la cerámica que tienen toda esa cantidad de formas antropomórgicas y de animales de todo tipo de animales de sonáquaticos y son terrestres era lo que en una época orlando fars borda nuestro gran sociólogo analista que tanto nos con los dijo y nos estudió llamada las comunidades anfibias y son culturas que serán reconocer a partir de la relación con las aguas aquí hay dos protagonistas que van a estar estrechamente ligados las aguas y la gaita la gaita es nuestra invitada fundamental y la gaita también se relaciona profundamente con las aguas estamos en una región absolutamente mágica y una región completamente maravillosa donde hay ríos y cienagas llenas de leyendas y de cosmogonías y los ríos y las cienagas también nos van a conectar con el litoral caribe entonces estamos en una plenitud de aguas que es muy particular dentro del planeta tierra yo siempre les he insistido que el mundo no es verde colombia el verde o sea cuando uno muchísimas regiones a las que no son inimaginables toda esta cantidad de aguas y más esta 
mezcla de aguas dulces de diferentes vertientes que se van a encontrar con el litoral de la Atlántico y van a crear también una parte de ese mundo Atlántico es decir aquí estamos entre cruzados en muchas formas de geografías de historias de mitos y de influencias y esto nos va a llevar a una riqueza impresionante y a unas formaciones de identidad histórica maravillosa entonces los enúes como tal eran ingenieros eran ingenieros es decir pueblos que sabían manejar las aguas y pueblos que sabían construir caminos y entender la naturaleza de la región que habitaron algo que hoy sigue siendo una cosa muy muy necesaria y están también en la región de los montes de maría es el lugar donde llaman el sitio donde las nubes se besan con la punta del ser entonces en todo este espectro cosmico geográfico acuático anfibio ancestral en toda la región del sino hay una voz digamos hay un sonido que es el que nos va a dar todo esto que es la guaita y la guaita nos va a meter en los dos festivales porque ella es como la señora como la dueña de todo lo que va a pasar de aquí en adelante aquaticas Yo siempre vivo en cantar, yo siempre vivo en cantar, yo les acabo al cuento que ovejas original, porque se baila el salosa, o la gaita la gaita. Entonces la gaita como tal, que es, digamos, nuestra gran invitada, aquella la que estamos invocando, es un instrumento que tiene su origen indígena, pero también tiene una gran significación dentro del mundo afro, y estos dos pueblos, estos dos elementos se van a juntar, y más adelante va a ver en el caso del Festival de Porro, Instrumentos de Viento Provenientes de Europa, que le van a dar una particularidad, sonido del Festival, y que nos va a juntar una cantidad de ritmos acá de cotonos, que llamaban, es decir, de, de esta, los tonos de los diferentes ecosistemas, que están representados en la diversidad de instrumentos, que se entrelazan para llegar a formar, unas harmonías musicales ancestrales míticas, que nos llevan al contacto con todos estos pueblos y toda esta riqueza. Entonces, nosotros sabemos de las gaitas que tocan las culturas de la Sierra Nevada, las culturas de los pueblos, Coguis y Arbacos, y también sabemos de las gaitas que se tocan en África, en el Museo de Búrquina Faso, según la región donde estén, se van dando las posibilidades de los instrumentos, y las regiones ganaderas son los tambores, son los cueros, si las regiones de juncos y es acuática son las flautas que ellos tienen y las gaitas, si las regiones en la zona árabe van a ser las darbucas, entonces ellos, digamos, tienen su propia ancestralidad, su propia naturaleza y su propio sonido, que se va a encontrar acá con el sonido de los pueblos originarios, y que van a crear una sonoridad conjunta, que se hace de estos dos encuentros de elementos acá. Y vamos a tener esta gaita, la gaita está elaborada con el tallo del cardo de la pitalla, y la cabeza de la gaita es hecha de sera de avejas, y de carbon de palo, y la boquilla, por la cual se toca, es del cañón de la pluma del pato. La gaita tiene dos tipos, es la gaita corta y la gaita larga. La gaita corta tiene seis orificios, y por allá se pueden sacar ritmos de cumbia, porro y púlla, que son los del primer festival que vamos a narrar, y la gaita larga se divide en dos, la gaita hembra y la gaita macho. 
La gaita hembra y la gaita macho tienen un diálogo permanente, porque la hembra lleva la armonía, y esa tiene cinco orificios, un que se van digitando, y la otra, la gaita macho va complementando con los compases y el contrapunto y las notas graves. Las gaitas se acompañan de los tambores, de las maracas, y ahí es donde nos vamos encontrando con todas las diferentes culturas, en la forma como se va acompañando la gaita. Y la gaita está con nosotros desde siempre, y va a estar también en la época de la colonia, y va a estar en la época de las fusiones de todos estos pueblos, y va a formar una cantidad de bailes y de ritmos, y va a tener una cantidad de variaciones, y entre ellos es una historia ancestral que nosotros tenemos que atraviesa toda nuestro devenir histórico, y resulta que la gaita nos va a llevar a la primera parte que es el porro, y el porro va a ser una de las características fundamentales de nuestros ritmos. Candelario Besón, o desea que el porro tiene la media exacta de la pasión, tiene el ritmo de la vida y la té con el corazón. Y los orígenes de estas fiestas siempre se disputan que hay donde fue primer, pero es que en toda la región se está tocando. Entonces igual que la música ballenata, que si fue primero aquí, guayap, aquí siempre también hay toda una discusión sobre los orígenes que los pasa muchísimo en las festividades. Pero antes del porro como tal que va a ser un ritmo tan característico de todo esto, que es nuestro invitado, exicia el pandango. Y ese baile aparece aquí en la época colonial. Existen referencias del pandango en España y en América. O algunos dicen que el baile español se hacía muy similar a los bundes, que eran bailes africanos. Aquí entre las diferentes momentos, porque esto nos lleó a ritmos de bailes que se han bailado en IT, en Cuba, en Cartagena, donde se han a los bundes, y donde ocurrían en las playas y en las plazas. Entonces vamos a entrar aquí una gran cantidad de influencias y además de caminos, porque además Cartagena, después vamos a ver que por la bloqueo de los piratas también hubo que hacer caminos alternos rutas terrestres y esas rutas terrestres van a desarrollar una cantidad de poblaciones y por esas poblaciones va a poderse encontrar una cantidad de ritmos que van a confluir. Porque estas historias son historias, digamos que se van entrelazando entre todo lo que va pasando ahí, entre las aguas, entre los tiempos coloniales, entre las llegadas de los pueblos de la África, entre la presencia ancestral de nuestros habitantes originales, entre el mundo anfibio. Todo eso nos va dando un marco y nos va dando una entrada a este mundo impresionante. Entonces, teniendo todos estos elementos ahí, vamos a convertir esto en festivales. Y vamos a convertir esto en festivales para qué? Para poder de alguna manera en marcar, institutionalizar, darle continuidad, darle espacios, darle poderes temporales a todos estos instrumentos. Porque cuando se meten en un festival o se meten en un carnaval, se meten en una organización que hace que toda esta magia tenga un canal que permita aglutinar a la gente para lo más importante de todo, que es la razón por la que hace hace todo esto para gozársela. Porque el objetivo de los carnavales y de los vestibales, que es lo que es más impresionante de todo, es que la gente de la goce. Todo esto se hace en Colombia para eso. 
Entonces resulta que empiezan a construir el festival de San Pelayo del porro, porque dicen que el porro tenía un periodo clásico que fue super productivo, super fértil que fue hacia los años 30. Y alrededor del porro está la leyenda de María Varilla, que se habla de la mujer que mejor bailaba el pandango, que era una mujer extraordinaria que era capaz de alborotar la naturaleza con sus ritmos y sus bailes. Y alrededor del mito de ella se va construyendo también la leyenda de las guaitas. Entonces, de la magia de la geografía, de la leyenda, de las invocaciones y de la estralidad, nace en las materias primas que se han a volver, festivales y los festivales es lo que nos permite remitirnos a momentos en los cuales irrumpe la maravilla y la felicidad y el goce y la música en la vida de la gente, que es de lo que se trata el recorrido que nosotros estamos haciendo. Entonces, pues el porro tiene su historia y tiene la historia de las bandas, porque resulta que también viene de muchísimos instrumentos europeos de viento, que le van a dar una sonoridad también que se va entre las arcolas que ya existen. Entonces, hablan de un tiempo en los años 30 en que después del periodo clásico consideran que hay una especie de sequía en la generación del porro por las guerras en Europa, que hace que los instrumentos nos lleguen con la regularidad con que llegaban antes y por el otro lado, por lo que en Colombia llamamos la violencia con Ben Mayuskula, que es un periodo de guerra civil y un periodo muy sangriento de nuestra historia, hemos tenido varios, pero ese es lo llamamos con Mayuskula. Entonces, dicen que eso también interrumpió un poco la fluidez de esto, sin embargo, va a haber dos personajes que van a universalizar el porro y lo van a dar a conocer al planeta completo, que son Lucho Bermudes y Pacho Galán. Ellos van a tener el formato de las grandes bandas de las Big Bands y van a tener un formato de orquestas grandísimo que nos van a dar a conocer hasta el extremo del continente, hasta la Argentina por todas partes. Nos van a conocer y ellos son responsables de que la Cumbia sea extendido de una manera tan impresionante en América Latina hasta ser un fenómeno peruano mexicano, argentino. Hoy por hoy, América Latina reivindica esta Cumbia como suya, pero el origen está aquí entre nosotros. Y son estas orquestas las que van a dar a leer al mundo el conocimiento de estos ritmos, digamos las que la internacionalizan y durante todo el periodo en que Colombia empieza a modernizarse y empiezan a inaugurarse las grandes obras y empiezan a crearse digamos como la construcción de la infraestructura de Colombia. Todos se inauguraba con la orquesta de Pacho Galán. O sea, todo. Es como el con la orquesta de Lucho Vermudel. Lucho Vermudel es como la banda sonora de la primera parte del podredezo en Colombia en el siglo XX y va a atravesar como toda nuestra mirada de ferrocarriles, de país, de todo lo que está pasando en ese momento. Lucho Vermudel es la banda sonora. Entonces esto también va a generar una identidad profunda en un país que se está transformando y que va a escuchar con Lucho Vermudel la musicalidad de la transformación de un país que se está modernizando. O sea, el con la orquesta de Lucho Vermudel es como la banda sonora de la primera parte del podredezo en Colombia. O sea, el con la orquesta de Lucho Vermudel es como la banda sonora de la primera parte del podredezo en Colombia. Esto es estas historias. Y es el festival de porro de San Pelayo. 
Y así es nuestra primera historia porque el festival de porro es el que va a glutinar todas las bandas y es el que nos va a permitir que nos encontremos en todas estas presentaciones. Y es el que va a llevar todos los ritmos clásicos de soy pelallero, el pilón, el sabado de la gloria, el tortugo, el siete de agosto, el sapo viejo y todo el personaje María Varilla. Todo esta leyenda, la idea es que eso se pueda de alguna manera canalizar. Entonces en 1977 en San Pelayo, que es un pequeño pueblo del departamento de Córdoba, que conoció toda la bonanza del algodón porque suena una época era un tiempo en que nosotros teníamos una secuencia de cosechas, inclusive viajente que vivía del oficio de cosechero, que iban a recoger las cosechas que venían en todo el año que eran las de algodón, las del café. Eso es todo un mundo en el cual nosotros estábamos en esa época. Y ahí era cuando venían toda la gente a recoger algodón porque esto era un paisaí absolutamente impresionante porque era copos blancos de algodones como decían juguuduce la caña y era algodón en todas partes. Entonces era la época en que los cultivos del algodón y el ganado y toda la época de la ganadiría le van a dar al porro palitiado una variante propia. Y esa variante propia es una variante que se hace en San Pelayo. Entonces existen San Pelayo, un recorrido milagroso del tiempo, dicen. Una isla de música en el letargo del valle, glorioso San Pelayo de trompetas y tambores. Eso es parte de los fragmentos de mi valle del gran Raúl Comesjatin. Entonces en 1977 se empiezan a crear los grandes parrantones que duran cinco días y el pueblo se transforma. Y empiezan a llegar las cabalgatas que van a llegar a la tarima y llegan al centro del pueblo y al día siguiente van a llegar todas las bandas participantes de la región. Y llegan bandas adultas y juveniles porque en todas las fiestas hemos visto que siempre hay una cantera de todos los jóvenes que van a crecer en estos ritmos y van a calantizar su continuidad de la historia. Entonces empieza una parranda, pero una parranda, una cosa impresionante y en esas parrándoles va a haber galerías de arte, va a haber muestras de atal, estalleres pedagógicos, conferencias, músicos diácaemicos sobre el porro y la música tradicional. El bien es la alborada donde se presentan las bandas para el concurso donde están los músicos invitados. Y ese es un festival de música popular donde hay de todo porque también hay ranchera, reggaeton, ballenato, todo. El domingo, el domingo de una celebración bellísima, bellísima, bellísima, porque ese día es cuando se brinden de lo menaje a todos los músicos que ya no están a todos los que han muerto. Entonces como una forma de recordarlos, idear desde el tributo, hay un homenaje musical donde las bandas caminan hacia el cementerio y onran con la música, el talento y la labor y la contribución de las que yo no están y rinden o frentas florales. Eso es una cosa muy bonita y esto es lo que les va a dar como esa continuidad entre los mundos, entre el mundo que estamos acá y los del masa ya que nos dieron todas esas músicas maravillosas. 
Y que empieza pues todo este festival y esto va a dinamizar la cultura y esto va a hacer que generaciones de jóvenes, de niños, de adultos, de ancianos, se emocionen de tocarlo, de bailarlo, hagan ruedas, de fandomo, que se va envolviendo bailes y palabras y van caminando por todas partes con los pasos del fandomo, que es como van bailando los ritmos del porro y como van deslizándose por un universo musical absolutamente maravilloso, porque es la fiesta de festival de porro de San Pelayo. Entonces, ese es nuestro primer aparte, ese es nuestro primer relato. Aquí empieza el espacio comercial. Los sonidos del mar caribe retumban en la región del silu y generan músicas tradicionales únicas de nuestro país, donde propios y extraños se citan en torno a los tambores y a las guaitas. Parte de esa riqueza cultural y musical de nuestro país se puede escuchar en el atardecer. Lunes a viernes desde las cuatro en la tarde en Radio Nacional de Colombia, estamos donde tú estás. Paralelamente, a esta gran celebración del porro que es una forma tan grande y tan importante de nuestra identidad histórica y musical, viene otro festival que tiene todos los espíritus de brujo de rebeldía, de magia, de resiliencia, de la vida. La historia de resiliencia, de diálogo, de resistencia, porque está en las zonas, una de las zonas que ha tenido mayor sufrimiento durante las diferentes épocas del conflicto armado, que son los montes de varía, que es el festival de oejas, de su crefestival de guaitas. Y se conjugan todos los elementos de altives resiliencia, de resistencia de la guaita, porque la guaita es resistencia también, porque la guaita diáloga, cuando los diálogos se han roto, porque la guaita suena, cuando han zonado los trinos de la tradogía y la tristeza. Y resulta que esta gente conjura, como una resistencia esténdrica maravillosa, desde tiempos ancestrales, la guaita, como elemento que es capaz de armonizarlo todo y de llevarlos a una gran epopella del alma. Entonces resulta que ovejas es un pueblo pequeñito, y también siempre que se hacen los festivales, que lo hemos visto mucho, es porque queremos rescatar y asegurar la continuidad de una tradición. Entonces en algún momento pensaban que las guaitas se habían perdido, que ese conocimiento ya no se estaba transmitiendo, y que era importante acuñarlo y hacerlo posible y darle una continuidad, que es la razón por la cual se organizan los festivales y se organizan los carnavales, la razón por la cual se hacen espagosar. Pero la manera como esto se garantiza la continuidad en el tiempo es a través de los festivales. Entonces allá en ovejas su cre, confluyen una gran cantidad de gaiteros, a tal punto que se le conoce como la universidad de la guaita, porque es el punto donde se unen todos los guaiteros de todos los pueblos, de todos los montes y de todos los planos, de todo esto que les he contado que son esas geografías. ¡Oh! ¡Ay no más! ¡No se invito festival! ¡Te gaisas que ovejas su crees! ¡No se invito festival! ¡Te gaisas que ovejas su crees! Para que oye a hablarlo del maestro macho crees, para que oye a hablarlo del maestro macho crees. ¡No se es nadie leje que yo hay los tiktorias! ¡No se es nadie leje que yo hay los tiktorias! ¡Oh! ¡Era la tienda viviendo en la escoria! ¡Oh! ¡Era la tienda y compa en la broma de historia! ¡Ajajajaj! ¡Oh! ¡Era la tienda y compa en la broma de historia! ¡Vas a vivir! Entonces ellos consideran que los años 80 ya la música estaba muy amenazada y que estaba el borde de la desaparición. 
Entonces ya no quedaban sino los caideros de San Jacinto y ya estaban a morir ya los grandes maestros y consideran que estaba muy disperso por la geografía esta música maravillosa y entonces para salvaguardar la tradición, para cunyarla, para celebrarla, para darle su merecido homenaje para entronizarla en el lugar en que debe estar en 1985. Se creó el Festival Nacional de Gaithas Francisco Girene en Ovejas, Sucre, en la región de los montes de María. Entonces este festival como tal se remonta al Natalicio del Patrón de Ovejas en Francisco de Asis, que no lo volvemos a encontrar porque en San Pacho era por él que se hacía, que se hacía en todos esos grandes eleoraciones en Quipto y en todo el Chocobo, pues desde el mismo San Francisco de Asis, que viene que también es Patrón de Acá. Entonces los de Aitero vivan en una zona donde se reunían el 4 de octubre en la plaza para acompañar a los instrumentos de los ritnos típicos de la región donde llegaban todos los tinterpetes de Monte de María y del Caribe Colombiano. Esto estaba asociado en un principio a las festividades religiosas, pero luego va a tener, digamos, como otro tiempo, en el cual va a ser el Festival por sí mismo. Entonces ya digamos como en otra fecha en donde no va a ser, no se va a ser al mismo tiempo con la celebración religiosa, aunque eso es si es origen, pero no va a ser la manera como se va a desarrollar en el futuro. Entonces el nombre del Festival, en la celebración del más duro de los duros, de un tipo de una destreza tan impresionante con la tambora, o sea ser particularmente diestro en esta zona, pues es un nivel de inmortalidad imposible imaginar porque todos los hombres, están hablando de gente con un nivel absolutamente alto, para que alguien se distingue y se destaque tanto, tiene que ser que tenía a todos los duros de su lado y eso fue lo que le pasó a Francisco Llirene. Y por eso es que el Festival lo nombran en su honor, que también él era el de Amador y también tocaba las baracas en el músico que tenía muchísimas destres asjuntas. Entonces empiezan a hacer una reunión de artistas y el 5 de septiembre se lleva a cabo la primera edición del Festival de las Gaithas de Francisco Llirene. Y las primeras ediciones del Festival se interpretan con la Gaitha larga, con la Gaitha corta, e hice subdividen en categorías para fisionados, en el 89 ya empiezan a incluir las modalidades infantiles, juveniles, la de la canción inédita y empiezan a formar lo que es más importante en todos los festivales, la formación de las escuelas infantiles, es lo que le va a permitir la continuidad. Entonces empiezan a llegar hombres campesinos que venían de todas las tierras, muchos ya eran mayores y que se consideran los grandes de la Gaitha, pero en 1987 aparecen orelaprada y se convierte y esto siempre es un relato de picolo mismo cuando estuvimos narrando el Festival de Valle Nato. La primera mujer que toca las Gaithas en el Festival y eso es unito, eso es en 1987 y eso le va a permitir la participación femenina en el evento y esto pues es una cosa de la mayor importancia. Entonces empiezan a interpretar las canciones a tocar al llamador, la Gaitha, las maracas y eso para nosotros es muy importante, o sea, y finalmente esto nos va a llevar a una agrupación conformada por sobre las mujeres que se van a llamar las diosas de la Gaitha, game el favor, se aponga si usted en la situación y yo en el relato de que nosotros vamos a tener todas estas voces también en el Festival. 
En el 2015 el Festival Nacional de las Gaithas es declarado patrimonio cultural e inmaterial de la nación y empieza a celebrarse pues lea todo el interés cultural y empiezan a llegar y entonces empiezan las competencias, los talleres de formación, la alborrada. Todo la digamos el ritual, pues que nosotros en los Festivales tenemos el origen geográfico, el mito, la leyenda, el ritmo y esto todo esto se va a conformar en un Festival que son los que tienen las estructuras, que hacen que el ritmo y el mito se organicen para convocar a la gente y se vuelvan el espacio, digamos, el espacio musical y cósmico de la gosadera. En eso consiste en los Festivales, entonces el primer día el pueblo recibe todas las bandas participantes y empiezan a llegar todos los expertos en la música de Gaitha que se han venido preparando todo el año para llegar allá porque aquí llegan los mejores. El segundo día ocurre la alborrada que es un paseo musical que atraviesa todo el pueblo, un pueblo que normalmente es un pueblo muy tranquilo, que normalmente es un pueblo fresco por ahí pero esto llega y se arma un alborrota impresionante y se va llenando de gente, se va llenando de gente y empieza la parranda y empieza la parranda en las calles en las plazas en los patios de la casa, la parranda dura cuatro días. Y entonces no se aduerme, que empieza la una de la tarde y a las seis de la mañana hasta las seis de la mañana y hay que aguantar todo lo que hay que aguantar para parrandiar pero para parrandiar hay que tener todo el espíritu que he venuti en últimas es el que aguanta el espíritu. Y un bejo de juzillo de flora amarilla en la emalada de guamas de pepa cargá, sombra para mi pensamiento, mientras reposaba cuando en el cielo estaba un sol canicular. ¡Ah, mi mente le llegaban, mi elbriozo recuerdo más caliente que lo rayo de la luz solar, yo que rollo, cristalino de agua limpia y pura rebreca mi pensamiento, es un poquito más. Entonces el lunes que es el cuarto día vienen los conciertos y el concurso y las eliminatorias de las mejores bandas. Ahí es cuando se establecen ya los premios finales las categorías y la premiación. Y esa premiación digamos es realmente un reconocimiento, al talento, a la disciplina, al trabajo, a la manera como la gente se ha preparado para esto. Entonces alrededor de esto es todo lo que va a ser el festival. O el festival, como tal nos garantiza la continuidad de la tradición, los gaiteros bajan del bonte, de todos los montes van bajando los gaiteros para contrarse en el festival y eso es lo que nos va a traer toda la armonía y la maravilla. Pero resulta que esto está entretejido con una historia durísima que nosotros hemos vivido que eso no lo va a matar, a través de los festivales, varias veces nos hemos atravesado trasfondo extremadamente dolorosos porque bueno, es la miena, es una condición histórica, nosotros hemos tenido que vivir y esperamos no tener que vivir llamas resulta que este pueblo de la región de Asinú es un pueblo que ha tenido una resistencia a muchísimas formas de conflicto y ellos han hecho la resistencia a través del diálogo y han superado con la sabrosura. Todo lo que ha significado la dureza es lo que ellos han vivido es una redección que ha sido estigmatizada por todo lo la barbaridad de los bejámenes de la guerra donde la gente se se para sobre el dolor con la música en Colombia y eso lo vimos mucho en el chocó, la alegria resistencia, la música resistencia, el baile resistencia. 
That is how we exist, enjoy, live, love, dream and keep on existing, on rainy days as on sunny days. This country created in the Montes de María, whose story moves between joy and resilience, learned to settle its conflicts through rhythm and the spoken word, the same dialoguing character the gaita has. The very fact that the gaita is permanently in dialogue makes the people dialogue in order to live and resolve their conflicts, learning from the gaita, and it is the gaita that sets those dialogues to music. They have had two peace processes of their own, and in those peace processes it has always been the gaita that marks the beat, that sounds. These are thoughtful, proud people, people with very deep roots of which they are tremendously aware; they are conscious of their roots and their ancestry. The people of Ovejas say they have never had to go anywhere to bring music in: they carry the music in their own hearts, it is made in the Montes de María, it is part of their way of inhabiting the universe, and so they simply play what they have in their souls. It is a trait of identity and belonging that gives them all the strength to resist, to endure, to stand firm in the face of the unimaginable things that have been lived through in that region, and it survives whatever happens, because music survives everything, art survives everything; the conflicts will pass, and we are in better times now. But the gaita is still there: it has accompanied them in the hardest days, it accompanies them in the days of hope, it accompanies them with its dialogue, with its faith, with the very existence of their way of inhabiting such a magical and beautiful region. And the celebration of that gaita is the festival of Ovejas. The only time it was not celebrated on time (it was held, just not on its usual date) was the year a candidate for mayor was killed; the festival was not held in October because the mourning was too great, but it ended up being held in December so that the festival would not be defeated by death.
So that is more or less the only time they have not been able to hold it, because here festivals go through all kinds of vicissitudes; they did not hold it on its original date, but otherwise it has always been celebrated, and it keeps gaining renown and more people come to know the festival. At first the town took visitors in for free, because it is not as if there were hotels there, maybe a couple; people stayed in rooms in family homes. Later this too became a source of income for the town, because people keep arriving and stay wherever they are welcomed, and that is also part of joining the stories that are woven around the gaita. It is a festival of real importance, and these two festivals, the Festival del Porro and the Festival de Gaitas de Ovejas, are the ones that give us our sound, that carry us to the waters of the savanna and to all of the Sinú, because, as I said, there are montes but there is also savanna, and it is an absolutely rich, absolutely magical region which, besides being so beautiful, so magical and so rich, is absolutely sonorous. The encounters of ancestry, the sound of our identity, are the sound of the porro, which has made us known in many regions of the planet, and the sound of the gaita, which brings us the representation of myth, legend, magic and the spirit of pride, identity, belonging and the resilience of being able to live through everything that has been lived and keep dancing and keep rejoicing. That is what the festival of Ovejas in the Montes de María means, and all the joy of the porro, which has given us so much brilliance and so much light, is what the Festival del Porro de San Pelayo means. With these two marvelous rhythms from these enchanted, incredible regions, which give us these lessons of worth and ancestry, of the gaiteros coming down from the hills, of people meeting one another and of the gozadera that all these fiestas are about, and of the meeting points where the harshness of life is given new meaning by the magic of the dance, the music, the steps, the hips, the frenzy and the movement, we are led into these marvelous regions and made to enter with deep respect and deep permission asked of the ancestors, men and women, of the gaitas, of the people who were there and those who have gone, and of the powerful, indomitable spirit with which, in these lands, people dance and sing with the gaita. That is why we have made this journey with tremendous honor, trying to represent even minimally the richness we are talking about here, even at some faintly imaginable level: what a magic like this is, what a sonority like this is, what that pride is, and the wise and marvelous way in which these people gladden life and the spirit of the cosmos with the magic of the gaita; and with gratitude for them and for everything they do, for the spirit and the possibility of giving everything new meaning, which the music of the festivals brings, toward the common goal of all these fiestas: the gozadera, which is what it is all about. All the festivals, fairs and fiestas we hold, we hold in order to enjoy ourselves, and we hold them from the standpoint of the encounter with the life-giving force of the spirit, the encounter with magic and the encounter with dance, where the body, the rhythm, the spirit, the water and the gaita come together to conjugate a cosmic poetry of the universe. That is how it is expressed in these festivals, and that is the journey we have made today, with all the happiness and joy of telling you these marvelous stories from such a magical point of our own diversity, as peoples, as a country, as sound, as music and as spirit.
So, from the magic of the Sinú, from the skill of the Zenúes, from the gaitas, from the waters, from the ciénagas, from the rivers, from María Varilla, from the stories of the Montes de María, from the resiliences, from the dialogue of the gaitas, from the magic, from the sonority, from the orchestras, from the gaiteros who come down from the monte, from the dead musicians who are visited on every occasion in the porros, and from all the sonority that this has given to the spirit and soul of our peoples: this has been a narration by Diana Uribe.
This podcast was made possible by the team of the Casa de Historia: Ana Suárez, Elena Beltrán, Arturo Jimenez-Fina and Daniel Moreno Franco; recorded at Los Gatos Estudio, with the vision and musicalization of Eduardo Corredor Fonseca of Rueda Sonido; and we had with us Daniel Shruts, who accompanies us and whom we bring into our story with great joy. In this program we had the impressive, incredibly powerful narration of Armando Ribero from Ovejas, Sucre, who gave us the spiritual tone for this story; the narration of María Alejandra Garces, press chief of the mayor's office of San Pelayo; and, as always, the strong and powerful help of Santiago Espinoza Uribe and Laura Rojas Aponte of the podcast Cosas de Internet.
[{"start": 0.0, "end": 9.56, "text": " Buenas, bienvenidos y bienvenidas a uno de los festivales m\u00e1s maravillosos, m\u00e1s m\u00e1gicos y ese"}, {"start": 9.56, "end": 38.68, "text": " festival de porro de San Pelayo."}, {"start": 40.120000000000005, "end": 47.06, "text": " En realidad son dos festivales, los que vamos a narrar en un solo espacio con el relato de"}, {"start": 47.06, "end": 52.400000000000006, "text": " estos dos festivales, el festival de porro de San Pelayo y el festival nacional de gallitas de"}, {"start": 52.400000000000006, "end": 61.480000000000004, "text": " ovejas terminamos en nuestra serie de ferias y fiestas la regi\u00f3n Caribe que la hemos estado"}, {"start": 61.480000000000004, "end": 68.28, "text": " visitando, gozando, bailando o yendo a trav\u00e9s de sus m\u00faltiples manifestaciones lo hemos hecho"}, {"start": 68.28, "end": 74.52, "text": " con el Gu\u00edn M\u00fan Festival a la parte isle\u00f1a de la archipi\u00e9lago lo hemos hecho con el carnaval de"}, {"start": 74.52, "end": 80.88, "text": " barranquilla con el festival ballenato y ahora con estos dos con el festival de porro y con el"}, {"start": 80.88, "end": 87.5, "text": " festival nacional de gallitas de ovejas terminamos nuestra visita por el Caribe que es interminable pero"}, {"start": 87.5, "end": 93.04, "text": " pues son much\u00edsimas m\u00e1s regiones las que nos habitan y las que nos suenan en la m\u00fasica entonces hoy"}, {"start": 93.04, "end": 99.84, "text": " estamos ah\u00ed para poder hablar de esto como nos vamos a meter con los ancestros como nos vamos a"}, {"start": 99.84, "end": 105.76, "text": " meter con el agua como nos vamos a meter con los en\u00faves como vamos a estar en much\u00edsimos territorios"}, {"start": 105.76, "end": 112.0, "text": " sagrados toca pedir permiso y respeto y honrar las tradiciones de todo lo que vamos a invocar"}, {"start": 112.0, "end": 118.84, "text": " ac\u00e1 porque esto m\u00e1s que un relato es una invocaci\u00f3n porque aqu\u00ed va a salir de todo aqu\u00ed van a salir"}, {"start": 118.84, "end": 125.08, "text": " aguas van a salir esp\u00edritus van a salir mitos van a salir rebeld\u00edas van a salir di\u00e1logos van a"}, {"start": 125.08, "end": 133.0, "text": " salir esp\u00edritus poderosos e indomables pueblos resilientes que han aguantado absolutamente todo"}, {"start": 133.0, "end": 141.16, "text": " por el di\u00e1logo de la gaita y por su capacidad de ser pensantes y ser pueblos de di\u00e1logo y de"}, {"start": 141.16, "end": 148.24, "text": " pensamiento y de m\u00fasica y de danza estos relatos est\u00e1n atravesados por cualquier cantidad de historias"}, {"start": 148.24, "end": 155.64000000000001, "text": " porque pues es parte de todo lo nuestro no lo que lo que canta y lo que duele que se encuentran"}, {"start": 155.64000000000001, "end": 162.32000000000002, "text": " tantas veces en nuestros relatos cif\u00e9reas y fiestas vamos a empezar por la geograf\u00eda vamos a"}, {"start": 162.32000000000002, "end": 168.96, "text": " empezar por la regi\u00f3n donde ocurren estos dos festivales porque la regi\u00f3n en s\u00ed misma es una"}, {"start": 168.96, "end": 177.4, "text": " magia infinita esta regi\u00f3n es la savana y esos los montes de mar\u00eda son los valles del"}, {"start": 177.4, "end": 186.0, "text": " San Jorge del sin\u00fa es la regi\u00f3n que se conoce como el pa\u00eds encantado de las aguas es la regi\u00f3n"}, {"start": 186.0, "end": 193.88, "text": " donde habitan los in\u00faos o los 
in\u00faos en\u00faes un pueblo ind\u00edgena prehisp\u00e1nico de tiempos inmemoriales"}, {"start": 193.88, "end": 201.48000000000002, "text": " que habit\u00f3 los valles del r\u00edo sin\u00fa del San Jorge y llegan incluso hasta elitoral caribe en"}, {"start": 201.48000000000002, "end": 206.52, "text": " todos los alrededores del golfo de morrosquillo y en los actuales departamentos que hoy son"}, {"start": 206.52, "end": 213.12, "text": " cordo a sucre y sur de bolivar entonces una regi\u00f3n enorme este es pueblo de caos a la"}, {"start": 213.12, "end": 220.36, "text": " alfabrer\u00eda y a la cer\u00e1mica que tienen toda esa cantidad de formas antropom\u00f3rgicas y de"}, {"start": 220.36, "end": 226.44, "text": " animales de todo tipo de animales de son\u00e1quaticos y son terrestres era lo que en una \u00e9poca"}, {"start": 226.44, "end": 232.8, "text": " orlando fars borda nuestro gran soci\u00f3logo analista que tanto nos con los dijo y nos estudi\u00f3"}, {"start": 232.8, "end": 239.52, "text": " llamada las comunidades anfibias y son culturas que ser\u00e1n reconocer a partir de la relaci\u00f3n con"}, {"start": 239.52, "end": 246.60000000000002, "text": " las aguas aqu\u00ed hay dos protagonistas que van a estar estrechamente ligados las aguas y la"}, {"start": 246.60000000000002, "end": 253.28, "text": " gaita la gaita es nuestra invitada fundamental y la gaita tambi\u00e9n se relaciona profundamente con"}, {"start": 253.28, "end": 259.56, "text": " las aguas estamos en una regi\u00f3n absolutamente m\u00e1gica y una regi\u00f3n completamente maravillosa donde hay"}, {"start": 259.56, "end": 268.2, "text": " r\u00edos y cienagas llenas de leyendas y de cosmogon\u00edas y los r\u00edos y las cienagas tambi\u00e9n nos van"}, {"start": 268.2, "end": 277.64, "text": " a conectar con el litoral caribe entonces estamos en una plenitud de aguas que es muy particular"}, {"start": 277.64, "end": 282.44, "text": " dentro del planeta tierra yo siempre les he insistido que el mundo no es verde colombia el verde"}, {"start": 282.44, "end": 289.28, "text": " o sea cuando uno much\u00edsimas regiones a las que no son inimaginables toda esta cantidad de aguas"}, {"start": 289.28, "end": 296.55999999999995, "text": " y m\u00e1s esta mezcla de aguas dulces de diferentes vertientes que se van a encontrar con el litoral"}, {"start": 296.55999999999995, "end": 303.0, "text": " de la Atl\u00e1ntico y van a crear tambi\u00e9n una parte de ese mundo Atl\u00e1ntico es decir aqu\u00ed estamos"}, {"start": 303.0, "end": 312.67999999999995, "text": " entre cruzados en muchas formas de geograf\u00edas de historias de mitos y de influencias y esto nos"}, {"start": 312.68, "end": 319.52, "text": " va a llevar a una riqueza impresionante y a unas formaciones de identidad hist\u00f3rica maravillosa entonces"}, {"start": 319.52, "end": 327.68, "text": " los en\u00faes como tal eran ingenieros eran ingenieros es decir pueblos que sab\u00edan manejar las aguas y pueblos"}, {"start": 327.68, "end": 334.16, "text": " que sab\u00edan construir caminos y entender la naturaleza de la regi\u00f3n que habitaron algo que hoy"}, {"start": 334.16, "end": 340.24, "text": " sigue siendo una cosa muy muy necesaria y est\u00e1n tambi\u00e9n en la regi\u00f3n de los montes de mar\u00eda es el"}, {"start": 340.24, "end": 347.12, "text": " lugar donde llaman el sitio donde las nubes se besan con la punta del ser entonces en todo este"}, {"start": 347.12, "end": 356.76, "text": " espectro cosmico geogr\u00e1fico acu\u00e1tico 
anfibio ancestral en toda la regi\u00f3n del sino hay una voz"}, {"start": 356.76, "end": 362.8, "text": " digamos hay un sonido que es el que nos va a dar todo esto que es la guaita y la guaita nos va a"}, {"start": 362.8, "end": 369.96000000000004, "text": " meter en los dos festivales porque ella es como la se\u00f1ora como la due\u00f1a de todo lo que va a pasar"}, {"start": 369.96, "end": 371.96, "text": " de aqu\u00ed en adelante"}, {"start": 371.96, "end": 388.96, "text": " aquaticas"}, {"start": 388.96, "end": 412.14, "text": " Yo siempre vivo en cantar,"}, {"start": 412.14, "end": 420.52, "text": " yo siempre vivo en cantar,"}, {"start": 420.52, "end": 425.84, "text": " yo les acabo al cuento que ovejas original,"}, {"start": 425.84, "end": 428.56, "text": " porque se baila el salosa,"}, {"start": 428.56, "end": 431.56, "text": " o la gaita la gaita."}, {"start": 431.56, "end": 440.44, "text": " Entonces la gaita como tal,"}, {"start": 440.44, "end": 443.78000000000003, "text": " que es, digamos, nuestra gran invitada,"}, {"start": 443.78000000000003, "end": 445.78000000000003, "text": " aquella la que estamos invocando,"}, {"start": 445.78000000000003, "end": 449.82, "text": " es un instrumento que tiene su origen ind\u00edgena,"}, {"start": 449.82, "end": 455.02, "text": " pero tambi\u00e9n tiene una gran significaci\u00f3n dentro del mundo afro,"}, {"start": 455.02, "end": 456.22, "text": " y estos dos pueblos,"}, {"start": 456.22, "end": 458.66, "text": " estos dos elementos se van a juntar,"}, {"start": 458.66, "end": 460.82, "text": " y m\u00e1s adelante va a ver en el caso"}, {"start": 460.82, "end": 462.09999999999997, "text": " del Festival de Porro,"}, {"start": 462.09999999999997, "end": 464.9, "text": " Instrumentos de Viento Provenientes de Europa,"}, {"start": 464.9, "end": 468.74, "text": " que le van a dar una particularidad, sonido del Festival,"}, {"start": 468.74, "end": 472.98, "text": " y que nos va a juntar una cantidad de ritmos ac\u00e1 de cotonos,"}, {"start": 472.98, "end": 476.58, "text": " que llamaban, es decir, de, de esta,"}, {"start": 476.58, "end": 478.98, "text": " los tonos de los diferentes ecosistemas,"}, {"start": 478.98, "end": 482.14, "text": " que est\u00e1n representados en la diversidad de instrumentos,"}, {"start": 482.14, "end": 484.62, "text": " que se entrelazan para llegar a formar,"}, {"start": 484.62, "end": 487.86, "text": " unas harmon\u00edas musicales ancestrales m\u00edticas,"}, {"start": 487.86, "end": 492.34000000000003, "text": " que nos llevan al contacto con todos estos pueblos y toda esta riqueza."}, {"start": 492.34000000000003, "end": 496.18, "text": " Entonces, nosotros sabemos de las gaitas que tocan las culturas"}, {"start": 496.18, "end": 498.90000000000003, "text": " de la Sierra Nevada, las culturas de los pueblos,"}, {"start": 498.90000000000003, "end": 501.06, "text": " Coguis y Arbacos,"}, {"start": 501.06, "end": 504.74, "text": " y tambi\u00e9n sabemos de las gaitas que se tocan en \u00c1frica,"}, {"start": 504.74, "end": 509.78000000000003, "text": " en el Museo de B\u00farquina Faso, seg\u00fan la regi\u00f3n donde est\u00e9n,"}, {"start": 509.78000000000003, "end": 512.5, "text": " se van dando las posibilidades de los instrumentos,"}, {"start": 512.5, "end": 515.02, "text": " y las regiones ganaderas son los tambores,"}, {"start": 515.02, "end": 516.14, "text": " son los cueros,"}, {"start": 516.14, "end": 519.74, "text": " si las regiones de juncos y es acu\u00e1tica son las flautas"}, {"start": 519.74, 
"end": 521.66, "text": " que ellos tienen y las gaitas,"}, {"start": 521.66, "end": 524.62, "text": " si las regiones en la zona \u00e1rabe van a ser las darbucas,"}, {"start": 524.62, "end": 527.98, "text": " entonces ellos, digamos, tienen su propia ancestralidad,"}, {"start": 527.98, "end": 530.74, "text": " su propia naturaleza y su propio sonido,"}, {"start": 530.74, "end": 535.86, "text": " que se va a encontrar ac\u00e1 con el sonido de los pueblos originarios,"}, {"start": 535.86, "end": 539.62, "text": " y que van a crear una sonoridad conjunta,"}, {"start": 539.62, "end": 543.8199999999999, "text": " que se hace de estos dos encuentros de elementos ac\u00e1."}, {"start": 543.82, "end": 547.5, "text": " Y vamos a tener esta gaita, la gaita est\u00e1 elaborada"}, {"start": 547.5, "end": 551.94, "text": " con el tallo del cardo de la pitalla,"}, {"start": 551.94, "end": 556.1, "text": " y la cabeza de la gaita es hecha de sera de avejas,"}, {"start": 556.1, "end": 557.9000000000001, "text": " y de carbon de palo,"}, {"start": 557.9000000000001, "end": 560.7, "text": " y la boquilla, por la cual se toca,"}, {"start": 560.7, "end": 564.4200000000001, "text": " es del ca\u00f1\u00f3n de la pluma del pato."}, {"start": 564.4200000000001, "end": 566.34, "text": " La gaita tiene dos tipos,"}, {"start": 566.34, "end": 569.46, "text": " es la gaita corta y la gaita larga."}, {"start": 569.46, "end": 572.5, "text": " La gaita corta tiene seis orificios,"}, {"start": 572.5, "end": 577.38, "text": " y por all\u00e1 se pueden sacar ritmos de cumbia, porro y p\u00falla,"}, {"start": 577.38, "end": 579.58, "text": " que son los del primer festival que vamos a narrar,"}, {"start": 579.58, "end": 581.62, "text": " y la gaita larga se divide en dos,"}, {"start": 581.62, "end": 583.78, "text": " la gaita hembra y la gaita macho."}, {"start": 583.78, "end": 586.66, "text": " La gaita hembra y la gaita macho tienen un di\u00e1logo permanente,"}, {"start": 586.66, "end": 589.38, "text": " porque la hembra lleva la armon\u00eda,"}, {"start": 589.38, "end": 591.5, "text": " y esa tiene cinco orificios,"}, {"start": 591.5, "end": 592.86, "text": " un que se van digitando,"}, {"start": 592.86, "end": 596.78, "text": " y la otra, la gaita macho va complementando con los compases"}, {"start": 596.78, "end": 599.5, "text": " y el contrapunto y las notas graves."}, {"start": 599.5, "end": 603.58, "text": " Las gaitas se acompa\u00f1an de los tambores, de las maracas,"}, {"start": 603.58, "end": 605.78, "text": " y ah\u00ed es donde nos vamos encontrando"}, {"start": 605.78, "end": 607.7, "text": " con todas las diferentes culturas,"}, {"start": 607.7, "end": 610.86, "text": " en la forma como se va acompa\u00f1ando la gaita."}, {"start": 610.86, "end": 613.5, "text": " Y la gaita est\u00e1 con nosotros desde siempre,"}, {"start": 613.5, "end": 616.5, "text": " y va a estar tambi\u00e9n en la \u00e9poca de la colonia,"}, {"start": 616.5, "end": 620.3, "text": " y va a estar en la \u00e9poca de las fusiones de todos estos pueblos,"}, {"start": 620.3, "end": 624.06, "text": " y va a formar una cantidad de bailes y de ritmos,"}, {"start": 624.06, "end": 626.9, "text": " y va a tener una cantidad de variaciones,"}, {"start": 626.9, "end": 630.8199999999999, "text": " y entre ellos es una historia ancestral que nosotros tenemos"}, {"start": 630.8199999999999, "end": 634.6999999999999, "text": " que atraviesa toda nuestro devenir hist\u00f3rico,"}, {"start": 634.6999999999999, "end": 640.62, "text": " y resulta que la 
gaita nos va a llevar a la primera parte que es el porro,"}, {"start": 640.62, "end": 645.86, "text": " y el porro va a ser una de las caracter\u00edsticas fundamentales de nuestros ritmos."}, {"start": 645.86, "end": 650.4599999999999, "text": " Candelario Bes\u00f3n, o desea que el porro tiene la media exacta de la pasi\u00f3n,"}, {"start": 650.4599999999999, "end": 654.22, "text": " tiene el ritmo de la vida y la t\u00e9 con el coraz\u00f3n."}, {"start": 654.22, "end": 658.14, "text": " Y los or\u00edgenes de estas fiestas siempre se disputan que hay donde fue primer,"}, {"start": 658.14, "end": 660.82, "text": " pero es que en toda la regi\u00f3n se est\u00e1 tocando."}, {"start": 660.82, "end": 663.86, "text": " Entonces igual que la m\u00fasica ballenata,"}, {"start": 663.86, "end": 665.58, "text": " que si fue primero aqu\u00ed, guayap,"}, {"start": 665.58, "end": 668.26, "text": " aqu\u00ed siempre tambi\u00e9n hay toda una discusi\u00f3n sobre los or\u00edgenes"}, {"start": 668.26, "end": 671.46, "text": " que los pasa much\u00edsimo en las festividades."}, {"start": 671.46, "end": 676.82, "text": " Pero antes del porro como tal que va a ser un ritmo tan caracter\u00edstico de todo esto,"}, {"start": 676.82, "end": 680.3000000000001, "text": " que es nuestro invitado, exicia el pandango."}, {"start": 680.3000000000001, "end": 684.02, "text": " Y ese baile aparece aqu\u00ed en la \u00e9poca colonial."}, {"start": 684.02, "end": 688.34, "text": " Existen referencias del pandango en Espa\u00f1a y en Am\u00e9rica."}, {"start": 688.34, "end": 693.86, "text": " O algunos dicen que el baile espa\u00f1ol se hac\u00eda muy similar a los bundes,"}, {"start": 693.86, "end": 695.34, "text": " que eran bailes africanos."}, {"start": 695.34, "end": 698.9399999999999, "text": " Aqu\u00ed entre las diferentes momentos,"}, {"start": 698.9399999999999, "end": 704.1, "text": " porque esto nos lle\u00f3 a ritmos de bailes que se han bailado en IT, en Cuba,"}, {"start": 704.1, "end": 706.6999999999999, "text": " en Cartagena, donde se han a los bundes,"}, {"start": 706.6999999999999, "end": 710.1, "text": " y donde ocurr\u00edan en las playas y en las plazas."}, {"start": 710.1, "end": 714.5, "text": " Entonces vamos a entrar aqu\u00ed una gran cantidad de influencias"}, {"start": 714.5, "end": 717.82, "text": " y adem\u00e1s de caminos, porque adem\u00e1s Cartagena,"}, {"start": 717.82, "end": 721.78, "text": " despu\u00e9s vamos a ver que por la bloqueo de los piratas tambi\u00e9n"}, {"start": 721.78, "end": 724.4200000000001, "text": " hubo que hacer caminos alternos rutas terrestres"}, {"start": 724.4200000000001, "end": 727.94, "text": " y esas rutas terrestres van a desarrollar una cantidad de poblaciones"}, {"start": 727.94, "end": 733.94, "text": " y por esas poblaciones va a poderse encontrar una cantidad de ritmos que van a confluir."}, {"start": 733.94, "end": 736.7, "text": " Porque estas historias son historias,"}, {"start": 736.7, "end": 741.4200000000001, "text": " digamos que se van entrelazando entre todo lo que va pasando ah\u00ed,"}, {"start": 741.4200000000001, "end": 743.7, "text": " entre las aguas, entre los tiempos coloniales,"}, {"start": 743.7, "end": 745.94, "text": " entre las llegadas de los pueblos de la \u00c1frica,"}, {"start": 745.94, "end": 750.7800000000001, "text": " entre la presencia ancestral de nuestros habitantes originales,"}, {"start": 750.7800000000001, "end": 752.38, "text": " entre el mundo anfibio."}, {"start": 752.38, "end": 768.3, "text": " Todo eso nos 
va dando un marco y nos va dando una entrada a este mundo impresionante."}, {"start": 842.38, "end": 857.78, "text": " Entonces, teniendo todos estos elementos ah\u00ed,"}, {"start": 857.78, "end": 860.78, "text": " vamos a convertir esto en festivales."}, {"start": 860.78, "end": 864.38, "text": " Y vamos a convertir esto en festivales para qu\u00e9?"}, {"start": 864.38, "end": 871.18, "text": " Para poder de alguna manera en marcar, institutionalizar, darle continuidad,"}, {"start": 871.18, "end": 878.8599999999999, "text": " darle espacios, darle poderes temporales a todos estos instrumentos."}, {"start": 878.8599999999999, "end": 882.3, "text": " Porque cuando se meten en un festival o se meten en un carnaval,"}, {"start": 882.3, "end": 889.5, "text": " se meten en una organizaci\u00f3n que hace que toda esta magia tenga un canal"}, {"start": 889.5, "end": 894.6999999999999, "text": " que permita aglutinar a la gente para lo m\u00e1s importante de todo,"}, {"start": 894.6999999999999, "end": 899.0999999999999, "text": " que es la raz\u00f3n por la que hace hace todo esto para goz\u00e1rsela."}, {"start": 899.1, "end": 902.34, "text": " Porque el objetivo de los carnavales y de los vestibales,"}, {"start": 902.34, "end": 906.62, "text": " que es lo que es m\u00e1s impresionante de todo, es que la gente de la goce."}, {"start": 906.62, "end": 909.66, "text": " Todo esto se hace en Colombia para eso."}, {"start": 909.66, "end": 917.78, "text": " Entonces resulta que empiezan a construir el festival de San Pelayo del porro,"}, {"start": 917.78, "end": 922.82, "text": " porque dicen que el porro ten\u00eda un periodo cl\u00e1sico que fue super productivo,"}, {"start": 922.82, "end": 926.26, "text": " super f\u00e9rtil que fue hacia los a\u00f1os 30."}, {"start": 926.26, "end": 929.98, "text": " Y alrededor del porro est\u00e1 la leyenda de Mar\u00eda Varilla,"}, {"start": 929.98, "end": 933.46, "text": " que se habla de la mujer que mejor bailaba el pandango,"}, {"start": 933.46, "end": 937.62, "text": " que era una mujer extraordinaria que era capaz de alborotar la naturaleza"}, {"start": 937.62, "end": 939.62, "text": " con sus ritmos y sus bailes."}, {"start": 939.62, "end": 944.26, "text": " Y alrededor del mito de ella se va construyendo tambi\u00e9n la leyenda de las guaitas."}, {"start": 944.26, "end": 949.02, "text": " Entonces, de la magia de la geograf\u00eda, de la leyenda, de las invocaciones"}, {"start": 949.02, "end": 952.9, "text": " y de la estralidad, nace en las materias primas que se han a volver,"}, {"start": 952.9, "end": 958.42, "text": " festivales y los festivales es lo que nos permite remitirnos a momentos"}, {"start": 958.42, "end": 963.14, "text": " en los cuales irrumpe la maravilla y la felicidad y el goce"}, {"start": 963.14, "end": 965.78, "text": " y la m\u00fasica en la vida de la gente,"}, {"start": 965.78, "end": 969.54, "text": " que es de lo que se trata el recorrido que nosotros estamos haciendo."}, {"start": 969.54, "end": 974.4599999999999, "text": " Entonces, pues el porro tiene su historia y tiene la historia de las bandas,"}, {"start": 974.4599999999999, "end": 979.9399999999999, "text": " porque resulta que tambi\u00e9n viene de much\u00edsimos instrumentos europeos de viento,"}, {"start": 979.94, "end": 985.3800000000001, "text": " que le van a dar una sonoridad tambi\u00e9n que se va entre las arcolas que ya existen."}, {"start": 985.3800000000001, "end": 988.3000000000001, "text": " Entonces, hablan de un tiempo en los a\u00f1os 30"}, {"start": 
988.3000000000001, "end": 993.4200000000001, "text": " en que despu\u00e9s del periodo cl\u00e1sico consideran que hay una especie de sequ\u00eda"}, {"start": 993.4200000000001, "end": 997.5, "text": " en la generaci\u00f3n del porro por las guerras en Europa,"}, {"start": 997.5, "end": 1002.3000000000001, "text": " que hace que los instrumentos nos lleguen con la regularidad con que llegaban antes"}, {"start": 1002.3000000000001, "end": 1008.9000000000001, "text": " y por el otro lado, por lo que en Colombia llamamos la violencia con Ben Mayuskula,"}, {"start": 1008.9, "end": 1015.02, "text": " que es un periodo de guerra civil y un periodo muy sangriento de nuestra historia,"}, {"start": 1015.02, "end": 1018.74, "text": " hemos tenido varios, pero ese es lo llamamos con Mayuskula."}, {"start": 1018.74, "end": 1023.38, "text": " Entonces, dicen que eso tambi\u00e9n interrumpi\u00f3 un poco la fluidez de esto,"}, {"start": 1023.38, "end": 1029.54, "text": " sin embargo, va a haber dos personajes que van a universalizar el porro"}, {"start": 1029.54, "end": 1035.94, "text": " y lo van a dar a conocer al planeta completo, que son Lucho Bermudes y Pacho Gal\u00e1n."}, {"start": 1035.94, "end": 1040.3400000000001, "text": " Ellos van a tener el formato de las grandes bandas de las Big Bands"}, {"start": 1040.3400000000001, "end": 1044.02, "text": " y van a tener un formato de orquestas grand\u00edsimo"}, {"start": 1044.02, "end": 1048.1000000000001, "text": " que nos van a dar a conocer hasta el extremo del continente,"}, {"start": 1048.1000000000001, "end": 1050.66, "text": " hasta la Argentina por todas partes."}, {"start": 1050.66, "end": 1054.74, "text": " Nos van a conocer y ellos son responsables de que la Cumbia sea extendido"}, {"start": 1054.74, "end": 1058.18, "text": " de una manera tan impresionante en Am\u00e9rica Latina"}, {"start": 1058.18, "end": 1062.3, "text": " hasta ser un fen\u00f3meno peruano mexicano, argentino."}, {"start": 1062.3, "end": 1065.5, "text": " Hoy por hoy, Am\u00e9rica Latina reivindica esta Cumbia"}, {"start": 1065.5, "end": 1069.74, "text": " como suya, pero el origen est\u00e1 aqu\u00ed entre nosotros."}, {"start": 1069.74, "end": 1076.46, "text": " Y son estas orquestas las que van a dar a leer al mundo el conocimiento de estos ritmos,"}, {"start": 1076.46, "end": 1079.66, "text": " digamos las que la internacionalizan y durante todo el periodo"}, {"start": 1079.66, "end": 1082.1, "text": " en que Colombia empieza a modernizarse"}, {"start": 1082.1, "end": 1086.7, "text": " y empiezan a inaugurarse las grandes obras y empiezan a crearse"}, {"start": 1086.7, "end": 1090.66, "text": " digamos como la construcci\u00f3n de la infraestructura de Colombia."}, {"start": 1090.66, "end": 1094.82, "text": " Todos se inauguraba con la orquesta de Pacho Gal\u00e1n."}, {"start": 1094.82, "end": 1095.82, "text": " O sea, todo."}, {"start": 1095.82, "end": 1098.46, "text": " Es como el con la orquesta de Lucho Vermudel."}, {"start": 1098.46, "end": 1104.1399999999999, "text": " Lucho Vermudel es como la banda sonora de la primera parte del podredezo en Colombia"}, {"start": 1104.1399999999999, "end": 1110.06, "text": " en el siglo XX y va a atravesar como toda nuestra mirada de ferrocarriles,"}, {"start": 1110.06, "end": 1113.6599999999999, "text": " de pa\u00eds, de todo lo que est\u00e1 pasando en ese momento."}, {"start": 1113.6599999999999, "end": 1115.7, "text": " Lucho Vermudel es la banda sonora."}, {"start": 1115.7, "end": 1119.9399999999998, "text": " 
Entonces esto tambi\u00e9n va a generar una identidad profunda"}, {"start": 1119.9399999999998, "end": 1122.62, "text": " en un pa\u00eds que se est\u00e1 transformando"}, {"start": 1122.62, "end": 1127.26, "text": " y que va a escuchar con Lucho Vermudel la musicalidad"}, {"start": 1127.26, "end": 1155.78, "text": " de la transformaci\u00f3n de un pa\u00eds que se est\u00e1 modernizando."}, {"start": 1155.78, "end": 1165.78, "text": " O sea, el con la orquesta de Lucho Vermudel es como la banda sonora de la primera parte del podredezo en Colombia."}, {"start": 1165.78, "end": 1173.78, "text": " O sea, el con la orquesta de Lucho Vermudel es como la banda sonora de la primera parte del podredezo en Colombia."}, {"start": 1173.78, "end": 1202.78, "text": " Esto es estas historias."}, {"start": 1202.78, "end": 1207.78, "text": " Y es el festival de porro de San Pelayo."}, {"start": 1207.78, "end": 1214.78, "text": " Y as\u00ed es nuestra primera historia porque el festival de porro es el que va a glutinar todas las bandas"}, {"start": 1214.78, "end": 1219.78, "text": " y es el que nos va a permitir que nos encontremos en todas estas presentaciones."}, {"start": 1219.78, "end": 1225.78, "text": " Y es el que va a llevar todos los ritmos cl\u00e1sicos de soy pelallero, el pil\u00f3n,"}, {"start": 1225.78, "end": 1233.78, "text": " el sabado de la gloria, el tortugo, el siete de agosto, el sapo viejo y todo el personaje Mar\u00eda Varilla."}, {"start": 1233.78, "end": 1238.78, "text": " Todo esta leyenda, la idea es que eso se pueda de alguna manera canalizar."}, {"start": 1238.78, "end": 1246.78, "text": " Entonces en 1977 en San Pelayo, que es un peque\u00f1o pueblo del departamento de C\u00f3rdoba,"}, {"start": 1246.78, "end": 1253.78, "text": " que conoci\u00f3 toda la bonanza del algod\u00f3n porque suena una \u00e9poca era un tiempo"}, {"start": 1253.78, "end": 1261.78, "text": " en que nosotros ten\u00edamos una secuencia de cosechas, inclusive viajente que viv\u00eda del oficio de cosechero,"}, {"start": 1261.78, "end": 1267.78, "text": " que iban a recoger las cosechas que ven\u00edan en todo el a\u00f1o que eran las de algod\u00f3n, las del caf\u00e9."}, {"start": 1267.78, "end": 1271.78, "text": " Eso es todo un mundo en el cual nosotros est\u00e1bamos en esa \u00e9poca."}, {"start": 1271.78, "end": 1279.78, "text": " Y ah\u00ed era cuando ven\u00edan toda la gente a recoger algod\u00f3n porque esto era un paisa\u00ed absolutamente impresionante"}, {"start": 1279.78, "end": 1286.78, "text": " porque era copos blancos de algodones como dec\u00edan juguuduce la ca\u00f1a y era algod\u00f3n en todas partes."}, {"start": 1286.78, "end": 1294.78, "text": " Entonces era la \u00e9poca en que los cultivos del algod\u00f3n y el ganado y toda la \u00e9poca de la ganadir\u00eda"}, {"start": 1294.78, "end": 1300.78, "text": " le van a dar al porro palitiado una variante propia."}, {"start": 1300.78, "end": 1305.78, "text": " Y esa variante propia es una variante que se hace en San Pelayo."}, {"start": 1305.78, "end": 1310.78, "text": " Entonces existen San Pelayo, un recorrido milagroso del tiempo, dicen."}, {"start": 1310.78, "end": 1317.78, "text": " Una isla de m\u00fasica en el letargo del valle, glorioso San Pelayo de trompetas y tambores."}, {"start": 1317.78, "end": 1323.78, "text": " Eso es parte de los fragmentos de mi valle del gran Ra\u00fal Comesjatin."}, {"start": 1323.78, "end": 1333.78, "text": " Entonces en 1977 se empiezan a crear los grandes parrantones que duran cinco 
d\u00edas y el pueblo se transforma."}, {"start": 1333.78, "end": 1343.78, "text": " Y empiezan a llegar las cabalgatas que van a llegar a la tarima y llegan al centro del pueblo y al d\u00eda siguiente van a llegar todas las bandas participantes de la regi\u00f3n."}, {"start": 1343.78, "end": 1358.78, "text": " Y llegan bandas adultas y juveniles porque en todas las fiestas hemos visto que siempre hay una cantera de todos los j\u00f3venes que van a crecer en estos ritmos y van a calantizar su continuidad de la historia."}, {"start": 1358.78, "end": 1366.78, "text": " Entonces empieza una parranda, pero una parranda, una cosa impresionante y en esas parr\u00e1ndoles va a haber galer\u00edas de arte,"}, {"start": 1366.78, "end": 1374.78, "text": " va a haber muestras de atal, estalleres pedag\u00f3gicos, conferencias, m\u00fasicos di\u00e1caemicos sobre el porro y la m\u00fasica tradicional."}, {"start": 1374.78, "end": 1381.78, "text": " El bien es la alborada donde se presentan las bandas para el concurso donde est\u00e1n los m\u00fasicos invitados."}, {"start": 1381.78, "end": 1388.78, "text": " Y ese es un festival de m\u00fasica popular donde hay de todo porque tambi\u00e9n hay ranchera, reggaeton, ballenato, todo."}, {"start": 1388.78, "end": 1399.78, "text": " El domingo, el domingo de una celebraci\u00f3n bell\u00edsima, bell\u00edsima, bell\u00edsima, porque ese d\u00eda es cuando se brinden de lo menaje a todos los m\u00fasicos que ya no est\u00e1n a todos los que han muerto."}, {"start": 1399.78, "end": 1408.78, "text": " Entonces como una forma de recordarlos, idear desde el tributo, hay un homenaje musical donde las bandas caminan hacia el cementerio"}, {"start": 1408.78, "end": 1417.78, "text": " y onran con la m\u00fasica, el talento y la labor y la contribuci\u00f3n de las que yo no est\u00e1n y rinden o frentas florales."}, {"start": 1417.78, "end": 1432.78, "text": " Eso es una cosa muy bonita y esto es lo que les va a dar como esa continuidad entre los mundos, entre el mundo que estamos ac\u00e1 y los del masa ya que nos dieron todas esas m\u00fasicas maravillosas."}, {"start": 1432.78, "end": 1457.78, "text": " Y que empieza pues todo este festival y esto va a dinamizar la cultura y esto va a hacer que generaciones de j\u00f3venes, de ni\u00f1os, de adultos, de ancianos, se emocionen de tocarlo, de bailarlo, hagan ruedas, de fandomo, que se va envolviendo bailes y palabras y van caminando por todas partes con los pasos del fandomo, que es como van bailando los ritmos del porro y como van"}, {"start": 1457.78, "end": 1466.78, "text": " desliz\u00e1ndose por un universo musical absolutamente maravilloso, porque es la fiesta de festival de porro de San Pelayo."}, {"start": 1466.78, "end": 1469.78, "text": " Entonces, ese es nuestro primer aparte, ese es nuestro primer relato."}, {"start": 1475.78, "end": 1477.78, "text": " Aqu\u00ed empieza el espacio comercial."}, {"start": 1477.78, "end": 1498.78, "text": " Los sonidos del mar caribe retumban en la regi\u00f3n del silu y generan m\u00fasicas tradicionales \u00fanicas de nuestro pa\u00eds, donde propios y extra\u00f1os se citan en torno a los tambores y a las guaitas."}, {"start": 1498.78, "end": 1510.78, "text": " Parte de esa riqueza cultural y musical de nuestro pa\u00eds se puede escuchar en el atardecer. 
Lunes a viernes desde las cuatro en la tarde en Radio Nacional de Colombia, estamos donde t\u00fa est\u00e1s."}, {"start": 1510.78, "end": 1539.78, "text": " Paralelamente, a esta gran celebraci\u00f3n del porro que es una forma tan grande y tan importante de nuestra identidad hist\u00f3rica y musical, viene otro festival que tiene todos los esp\u00edritus de brujo de rebeld\u00eda, de magia, de resiliencia, de la vida."}, {"start": 1539.78, "end": 1559.78, "text": " La historia de resiliencia, de di\u00e1logo, de resistencia, porque est\u00e1 en las zonas, una de las zonas que ha tenido mayor sufrimiento durante las diferentes \u00e9pocas del conflicto armado, que son los montes de var\u00eda, que es el festival de oejas, de su crefestival de guaitas."}, {"start": 1559.78, "end": 1580.78, "text": " Y se conjugan todos los elementos de altives resiliencia, de resistencia de la guaita, porque la guaita es resistencia tambi\u00e9n, porque la guaita di\u00e1loga, cuando los di\u00e1logos se han roto, porque la guaita suena, cuando han zonado los trinos de la tradog\u00eda y la tristeza."}, {"start": 1580.78, "end": 1597.78, "text": " Y resulta que esta gente conjura, como una resistencia est\u00e9ndrica maravillosa, desde tiempos ancestrales, la guaita, como elemento que es capaz de armonizarlo todo y de llevarlos a una gran epopella del alma."}, {"start": 1597.78, "end": 1613.78, "text": " Entonces resulta que ovejas es un pueblo peque\u00f1ito, y tambi\u00e9n siempre que se hacen los festivales, que lo hemos visto mucho, es porque queremos rescatar y asegurar la continuidad de una tradici\u00f3n."}, {"start": 1613.78, "end": 1636.78, "text": " Entonces en alg\u00fan momento pensaban que las guaitas se hab\u00edan perdido, que ese conocimiento ya no se estaba transmitiendo, y que era importante acu\u00f1arlo y hacerlo posible y darle una continuidad, que es la raz\u00f3n por la cual se organizan los festivales y se organizan los carnavales, la raz\u00f3n por la cual se hacen espagosar."}, {"start": 1636.78, "end": 1654.78, "text": " Pero la manera como esto se garantiza la continuidad en el tiempo es a trav\u00e9s de los festivales. Entonces all\u00e1 en ovejas su cre, confluyen una gran cantidad de gaiteros, a tal punto que se le conoce como la universidad de la guaita,"}, {"start": 1654.78, "end": 1666.78, "text": " porque es el punto donde se unen todos los guaiteros de todos los pueblos, de todos los montes y de todos los planos, de todo esto que les he contado que son esas geograf\u00edas."}, {"start": 1685.78, "end": 1687.78, "text": " \u00a1Oh! \u00a1Ay no m\u00e1s!"}, {"start": 1691.78, "end": 1702.78, "text": " \u00a1No se invito festival! \u00a1Te gaisas que ovejas su crees! \u00a1No se invito festival! \u00a1Te gaisas que ovejas su crees!"}, {"start": 1702.78, "end": 1712.78, "text": " Para que oye a hablarlo del maestro macho crees, para que oye a hablarlo del maestro macho crees."}, {"start": 1712.78, "end": 1722.78, "text": " \u00a1No se es nadie leje que yo hay los tiktorias! \u00a1No se es nadie leje que yo hay los tiktorias!"}, {"start": 1722.78, "end": 1733.78, "text": " \u00a1Oh! \u00a1Era la tienda viviendo en la escoria! \u00a1Oh! \u00a1Era la tienda y compa en la broma de historia!"}, {"start": 1733.78, "end": 1735.78, "text": " \u00a1Ajajajaj!"}, {"start": 1735.78, "end": 1739.78, "text": " \u00a1Oh! 
\u00a1Era la tienda y compa en la broma de historia!"}, {"start": 1739.78, "end": 1742.78, "text": " \u00a1Vas a vivir!"}, {"start": 1747.78, "end": 1755.78, "text": " Entonces ellos consideran que los a\u00f1os 80 ya la m\u00fasica estaba muy amenazada y que estaba el borde de la desaparici\u00f3n."}, {"start": 1755.78, "end": 1762.78, "text": " Entonces ya no quedaban sino los caideros de San Jacinto y ya estaban a morir ya los grandes maestros"}, {"start": 1762.78, "end": 1777.78, "text": " y consideran que estaba muy disperso por la geograf\u00eda esta m\u00fasica maravillosa y entonces para salvaguardar la tradici\u00f3n, para cunyarla, para celebrarla, para darle su merecido homenaje"}, {"start": 1777.78, "end": 1783.78, "text": " para entronizarla en el lugar en que debe estar en 1985."}, {"start": 1783.78, "end": 1792.78, "text": " Se cre\u00f3 el Festival Nacional de Gaithas Francisco Girene en Ovejas, Sucre, en la regi\u00f3n de los montes de Mar\u00eda."}, {"start": 1792.78, "end": 1802.78, "text": " Entonces este festival como tal se remonta al Natalicio del Patr\u00f3n de Ovejas en Francisco de Asis, que no lo volvemos a encontrar"}, {"start": 1802.78, "end": 1817.78, "text": " porque en San Pacho era por \u00e9l que se hac\u00eda, que se hac\u00eda en todos esos grandes eleoraciones en Quipto y en todo el Chocobo, pues desde el mismo San Francisco de Asis, que viene que tambi\u00e9n es Patr\u00f3n de Ac\u00e1."}, {"start": 1817.78, "end": 1835.78, "text": " Entonces los de Aitero vivan en una zona donde se reun\u00edan el 4 de octubre en la plaza para acompa\u00f1ar a los instrumentos de los ritnos t\u00edpicos de la regi\u00f3n donde llegaban todos los tinterpetes de Monte de Mar\u00eda y del Caribe Colombiano."}, {"start": 1835.78, "end": 1847.78, "text": " Esto estaba asociado en un principio a las festividades religiosas, pero luego va a tener, digamos, como otro tiempo, en el cual va a ser el Festival por s\u00ed mismo."}, {"start": 1847.78, "end": 1858.78, "text": " Entonces ya digamos como en otra fecha en donde no va a ser, no se va a ser al mismo tiempo con la celebraci\u00f3n religiosa, aunque eso es si es origen, pero no va a ser la manera como se va a desarrollar en el futuro."}, {"start": 1858.78, "end": 1876.78, "text": " Entonces el nombre del Festival, en la celebraci\u00f3n del m\u00e1s duro de los duros, de un tipo de una destreza tan impresionante con la tambora, o sea ser particularmente diestro en esta zona, pues es un nivel de inmortalidad imposible imaginar porque todos los hombres,"}, {"start": 1876.78, "end": 1888.78, "text": " est\u00e1n hablando de gente con un nivel absolutamente alto, para que alguien se distingue y se destaque tanto, tiene que ser que ten\u00eda a todos los duros de su lado y eso fue lo que le pas\u00f3 a Francisco Llirene."}, {"start": 1888.78, "end": 1901.78, "text": " Y por eso es que el Festival lo nombran en su honor, que tambi\u00e9n \u00e9l era el de Amador y tambi\u00e9n tocaba las baracas en el m\u00fasico que ten\u00eda much\u00edsimas destres asjuntas."}, {"start": 1901.78, "end": 1911.78, "text": " Entonces empiezan a hacer una reuni\u00f3n de artistas y el 5 de septiembre se lleva a cabo la primera edici\u00f3n del Festival de las Gaithas de Francisco Llirene."}, {"start": 1911.78, "end": 1921.78, "text": " Y las primeras ediciones del Festival se interpretan con la Gaitha larga, con la Gaitha corta, e hice subdividen en categor\u00edas para fisionados,"}, {"start": 1921.78, "end": 1935.78, "text": " en el 89 ya 
empiezan a incluir las modalidades infantiles, juveniles, la de la canci\u00f3n in\u00e9dita y empiezan a formar lo que es m\u00e1s importante en todos los festivales, la formaci\u00f3n de las escuelas infantiles,"}, {"start": 1935.78, "end": 1948.78, "text": " es lo que le va a permitir la continuidad. Entonces empiezan a llegar hombres campesinos que ven\u00edan de todas las tierras, muchos ya eran mayores y que se consideran los grandes de la Gaitha,"}, {"start": 1948.78, "end": 1957.78, "text": " pero en 1987 aparecen orelaprada y se convierte y esto siempre es un relato de picolo mismo cuando estuvimos narrando el Festival de Valle Nato."}, {"start": 1957.78, "end": 1973.78, "text": " La primera mujer que toca las Gaithas en el Festival y eso es unito, eso es en 1987 y eso le va a permitir la participaci\u00f3n femenina en el evento y esto pues es una cosa de la mayor importancia."}, {"start": 1973.78, "end": 1989.78, "text": " Entonces empiezan a interpretar las canciones a tocar al llamador, la Gaitha, las maracas y eso para nosotros es muy importante, o sea, y finalmente esto nos va a llevar a una agrupaci\u00f3n conformada por sobre las mujeres que se van a llamar las diosas de la Gaitha, game el favor,"}, {"start": 1989.78, "end": 2006.78, "text": " se aponga si usted en la situaci\u00f3n y yo en el relato de que nosotros vamos a tener todas estas voces tambi\u00e9n en el Festival."}, {"start": 2049.78, "end": 2077.78, "text": " En el 2015 el Festival Nacional de las Gaithas es declarado patrimonio cultural e inmaterial de la naci\u00f3n y empieza a celebrarse pues lea todo el inter\u00e9s cultural y empiezan a llegar y entonces empiezan las competencias, los talleres de formaci\u00f3n, la alborrada."}, {"start": 2077.78, "end": 2106.78, "text": " Todo la digamos el ritual, pues que nosotros en los Festivales tenemos el origen geogr\u00e1fico, el mito, la leyenda, el ritmo y esto todo esto se va a conformar en un Festival que son los que tienen las estructuras, que hacen que el ritmo y el mito se organicen para convocar a la gente y se vuelvan el espacio, digamos, el espacio musical y c\u00f3smico de la gosadera."}, {"start": 2106.78, "end": 2122.78, "text": " En eso consiste en los Festivales, entonces el primer d\u00eda el pueblo recibe todas las bandas participantes y empiezan a llegar todos los expertos en la m\u00fasica de Gaitha que se han venido preparando todo el a\u00f1o para llegar all\u00e1 porque aqu\u00ed llegan los mejores."}, {"start": 2122.78, "end": 2148.78, "text": " El segundo d\u00eda ocurre la alborrada que es un paseo musical que atraviesa todo el pueblo, un pueblo que normalmente es un pueblo muy tranquilo, que normalmente es un pueblo fresco por ah\u00ed pero esto llega y se arma un alborrota impresionante y se va llenando de gente, se va llenando de gente y empieza la parranda y empieza la parranda en las calles en las plazas en los patios de la casa, la parranda dura cuatro d\u00edas."}, {"start": 2148.78, "end": 2164.78, "text": " Y entonces no se aduerme, que empieza la una de la tarde y a las seis de la ma\u00f1ana hasta las seis de la ma\u00f1ana y hay que aguantar todo lo que hay que aguantar para parrandiar pero para parrandiar hay que tener todo el esp\u00edritu que he venuti en \u00faltimas es el que aguanta el esp\u00edritu."}, {"start": 2208.78, "end": 2232.78, "text": " Y un bejo de juzillo de flora amarilla en la emalada de guamas de pepa carg\u00e1, sombra para mi pensamiento, mientras reposaba cuando en el cielo estaba un sol canicular."}, 
{"start": 2232.78, "end": 2248.78, "text": " \u00a1Ah, mi mente le llegaban, mi elbriozo recuerdo m\u00e1s caliente que lo rayo de la luz solar, yo que rollo, cristalino de agua limpia y pura rebreca mi pensamiento, es un poquito m\u00e1s."}, {"start": 2248.78, "end": 2264.78, "text": " Entonces el lunes que es el cuarto d\u00eda vienen los conciertos y el concurso y las eliminatorias de las mejores bandas. Ah\u00ed es cuando se establecen ya los premios finales las categor\u00edas y la premiaci\u00f3n."}, {"start": 2264.78, "end": 2280.78, "text": " Y esa premiaci\u00f3n digamos es realmente un reconocimiento, al talento, a la disciplina, al trabajo, a la manera como la gente se ha preparado para esto. Entonces alrededor de esto es todo lo que va a ser el festival."}, {"start": 2280.78, "end": 2297.78, "text": " O el festival, como tal nos garantiza la continuidad de la tradici\u00f3n, los gaiteros bajan del bonte, de todos los montes van bajando los gaiteros para contrarse en el festival y eso es lo que nos va a traer toda la armon\u00eda y la maravilla."}, {"start": 2297.78, "end": 2313.78, "text": " Pero resulta que esto est\u00e1 entretejido con una historia dur\u00edsima que nosotros hemos vivido que eso no lo va a matar, a trav\u00e9s de los festivales, varias veces nos hemos atravesado trasfondo extremadamente dolorosos porque bueno,"}, {"start": 2313.78, "end": 2335.78, "text": " es la miena, es una condici\u00f3n hist\u00f3rica, nosotros hemos tenido que vivir y esperamos no tener que vivir llamas resulta que este pueblo de la regi\u00f3n de Asin\u00fa es un pueblo que ha tenido una resistencia a much\u00edsimas formas de conflicto y ellos han hecho la resistencia a trav\u00e9s del di\u00e1logo y han superado con la sabrosura."}, {"start": 2335.78, "end": 2361.78, "text": " Todo lo que ha significado la dureza es lo que ellos han vivido es una redecci\u00f3n que ha sido estigmatizada por todo lo la barbaridad de los bej\u00e1menes de la guerra donde la gente se se para sobre el dolor con la m\u00fasica en Colombia y eso lo vimos mucho en el choc\u00f3, la alegria resistencia, la m\u00fasica resistencia, el baile resistencia."}, {"start": 2361.78, "end": 2372.78, "text": " Es as\u00ed como nosotros existimos, gozamos, vivimos, amamos, so\u00f1amos y seguimos existiendo en los d\u00edas de lluvia como en los d\u00edas de sol."}, {"start": 2372.78, "end": 2390.78, "text": " Entonces este pa\u00eds que se crea nos montes de Mar\u00eda, que es una historia entre la alegria y la resiliencia, aprendido a dirir sus conflictos a trav\u00e9s del ritmo y del verbo, la misma caracter\u00edstica dialogante de la gaita."}, {"start": 2390.78, "end": 2409.78, "text": " El mismo hecho de que la gaita est\u00e9 permanentemente en un contacto de di\u00e1logo, hace que la gente dialogue para poder vivir y dirir sus conflictos aprendiendo de la gaita y que sea la gaita la que musicalices estos di\u00e1logos."}, {"start": 2409.78, "end": 2420.78, "text": " Entonces ellos han tenido dos procesos de paz al interior de ellos y en los procesos de paz la gaita ha sido siempre la que marca la que suena."}, {"start": 2420.78, "end": 2434.78, "text": " Ese di\u00e1logo, estos son gente pensantes, son gente muy altiva, son gente con una ra\u00edz muy profunda de la que son tremendamente conscientes, son conscientes de su ra\u00edz y su ancestralidad."}, {"start": 2434.78, "end": 2456.78, "text": " Y ellos, la gente de Oejas dice que ellos no han tenido que ese pan ninguna parte para atraer la m\u00fasica, la 
m\u00fasica la tienen ellos en su propio coraz\u00f3n, se produce en los montes de Mar\u00eda, es parte de su manera de evitar el universo y por lo tanto ellos simplemente tocan lo que tienen en el alma."}, {"start": 2456.78, "end": 2484.78, "text": " Y es un rasgo de identidad y de pertenencia que les da toda la fuerza para resistir de al hogar, aguantar, plantarse frente a cosas inimaginables que se han vivido en esa regi\u00f3n y que sobrevive a pesar de todo lo que pase porque la m\u00fasica sobreviva todo, el arte sobreviva todo, los conflictos pasaran y estamos en mejores tiempos ahora."}, {"start": 2484.78, "end": 2507.78, "text": " Pero la gaita sigue ah\u00ed, la gaita los ha acompa\u00f1ado en los d\u00edas m\u00e1s duros, la gaita los acompa\u00f1a en los d\u00edas de esperanza, la gaita los acompa\u00f1a con su di\u00e1logo, la gaita los acompa\u00f1a con su fe, la gaita los acompa\u00f1a en la existencia misma de su manera de evitar una regi\u00f3n tan m\u00e1gica y tan hermosa."}, {"start": 2507.78, "end": 2529.78, "text": " Y la celebraci\u00f3n a esa gaita es el festival de ovejas, incluso la \u00fanica vez que no s\u00e9 que no se celebr\u00f3 a tiempo, digamos es el euro, pero no a tiempo, fue el d\u00eda que mataron a un candidato en la alcald\u00eda y en ese momento no s\u00e9 eso el festival en octubre, porque era un duelo muy grande, pero se termin\u00f3 haciendo en diciembre para que el festival no fuera derrotado por la muerte."}, {"start": 2529.78, "end": 2552.78, "text": " Entonces, ellos es como la \u00fanica vez que no lo han podido hacer, porque aqu\u00ed los festivales atraviesan cualquier cantidad de visitudes, entonces no lo hicieron en su fecha original, por lo dem\u00e1s siempre se hace celebrado y siempre se celebra y esto cada vez tiene m\u00e1s, m\u00e1s honoridad, m\u00e1s gente sabe ese festival, en un principio recibi\u00e9 a la gente gratis,"}, {"start": 2552.78, "end": 2580.78, "text": " porque no es que haya hoteles all\u00e1 y como dos, no es en la casa en las habitaciones de la gente despu\u00e9s esto se volvi\u00f3 tambi\u00e9n una manera de ingreso para el pueblo porque la gente va llegando y se queda donde los reciban y es tambi\u00e9n es parte de entrar a formar parte de estas historias que se van tejiendo alrededor de la gaita y es un festival que tiene toda la importancia y estos dos festivales,"}, {"start": 2580.78, "end": 2597.78, "text": " el festival del porro y el festival de las gaitas y ovejas son los que los que nos sonorizan los que nos llevan a las aguas de la sabana a todo el sin\u00fa porque como le digo que hay montes pero tambi\u00e9n hay sabana,"}, {"start": 2597.78, "end": 2612.78, "text": " y hay es una regi\u00f3n absolutamente rica y absolutamente m\u00e1gica y adem\u00e1s de ser tan bella, tan m\u00e1gica y tan rica es absolutamente sonora, entonces los encuentros de la ancestralidad,"}, {"start": 2612.78, "end": 2626.78, "text": " el sonido de nuestra identidad es sonido del porro que nos hace conocer por much\u00edsimas regiones del planeta y el sonido de la gaita que nos trae la representaci\u00f3n del mito, la leyenda, la magia y el esp\u00edritu"}, {"start": 2626.78, "end": 2652.78, "text": " de la altiv\u00e9s, la identidad, la pertenencia y la y la resiliencia de poder vivir todo lo vivido y seguir bailando y seguir gozando es lo que significa el festival de ovejas en montes de Mar\u00eda y toda la alegr\u00eda del porro que nos ha dado tanto brillo y tanta luz es lo que significa el festival del porro de San 
Pelayo,"}, {"start": 2652.78, "end": 2676.78, "text": " entonces con estos dos ritmos maravillosos de estas regiones encantadas, incre\u00edbles, que nos dan estas lecciones de val\u00eda, de unas estr\u00e1lidas de los gaiteros bajando, de la gente encontr\u00e1ndose y de la gozadera que es de lo que se tratan todas estas fiestas y de los puntos de encuentro que donde se resignifica,"}, {"start": 2676.78, "end": 2703.78, "text": " la dureza de la vida por la magia del bail de la m\u00fasica de los pasos de las caderas del frenes y del movimiento, es lo que nos lleva a estas regiones tan maravillosas y lo que nos hace entrar con un profundo respeto y con un profundo permiso por los ancestros, por las ancestras, por las gaitas, por la gente que estaba, por la gente que estuvo y por el esp\u00edritu tan poderoso y tan indomable,"}, {"start": 2703.78, "end": 2732.78, "text": " que en estas tierras se baila y se canta con la gaita, y por lo cual hemos estado recorriendo con un honor impresionante para tratar de poder representar siquiera minimamente, la riqueza de la que estamos hablando ac\u00e1, siquiera a un nivel medianamente imaginable, lo que es una magia de estas y lo que es una sonoridad de estas y lo que es la altiv\u00e9s y la manera tan sabia y tan maravillosa con estas personas,"}, {"start": 2732.78, "end": 2753.78, "text": " alegran la vida y el esp\u00edritu del cosmos, con la magia de la gaita y con la gratitud por ellos y por todo lo que ellos hacen, por el esp\u00edritu y la posibilidad de la recinificaci\u00f3n de todo, que lo hace la m\u00fasica de los festivales, para el objetivo com\u00fan de todas estas fiestas, la gozadera, que es de lo que se trata,"}, {"start": 2753.78, "end": 2782.78, "text": " o sea todos los festivales y ferias y fiestas que estamos haciendo, nos hacemos para gozar y los hacemos desde el punto de vista del encuentro de la fuerza vivificante del esp\u00edritu y el encuentro de la magia y el encuentro de la danza, donde el cuerpo, el ritmo, el esp\u00edritu, el agua y la gaita se encuentran para conjugar una poes\u00eda cosmica del universo, que es como se expresa en estos festivales,"}, {"start": 2782.78, "end": 2801.78, "text": " y que es el recorrido que hemos hecho en el d\u00eda de hoy, con toda la felicidad y la alegr\u00eda de narrarles estas historias maravillosas, de un punto tan m\u00e1gico de nuestra propia diversidad, como pueblos, como pa\u00eds, como sonido, como m\u00fasica y como esp\u00edritu."}, {"start": 2801.78, "end": 2821.78, "text": " Entonces, desde la magia del sinudo, desde la destreza de los en\u00faes, desde las gaitas, desde las aguas, desde las cienagas, desde los r\u00edos, desde Mar\u00eda Varilla, desde las historias de montes de Mar\u00eda, desde las resiliencias, desde el di\u00e1logo de las gaitas, desde la magia, desde la sonoridad,"}, {"start": 2821.78, "end": 2839.78, "text": " desde las orquestas, desde los gaiteros que bajan del monte, desde los m\u00fasicos muertos, que son visitados en cada ocasi\u00f3n, en los porros y desde toda la sonoridad, que esto le ha dado al esp\u00edritu y al alma, de nuestros pueblos, en la narraci\u00f3n Diana Uribe."}, {"start": 2852.78, "end": 2880.78, "text": " Este podcast fue posible gracias al equipo de la Casa de Historia, de Ana Su\u00e1rez, Elena Beltr\u00e1n, Arturo Jimenez-Fina, Daniel Moreno Franco, grabado en los gatos estudio, la visi\u00f3n y la musicalizaci\u00f3n de Eduardo Corredor Ponseca, de Rueda Sonido, y contamos con Daniel Shruts, que est\u00e1 con nosotros 
acompa\u00f1\u00e1ndonos y que lo introducimos en nuestro relato con mucha alegr\u00eda."}, {"start": 2880.78, "end": 2909.78, "text": " En este programa contamos con la narraci\u00f3n impresionante, con una fuerza incre\u00edble, diarmando ribero desde ovejas su cree, que nos dio como el tono espiritual para este relato, contamos con la narraci\u00f3n de Mar\u00eda Alejandra Garces, jefe de prensa de la alcald\u00eda de San Pelayo, y siempre con la ayuda fuerte y poderosa, de Santiago Espinoza Uribe y Laura Rojas Aponte,"}, {"start": 2909.78, "end": 2912.78, "text": " del podcast Cosas de Internet."}]
Diana Uribe
https://www.youtube.com/watch?v=EF021KcdlFo
Carnaval del Oriente Colombiano y Festival de la Tigra
#podcastdianauribe #dianauribefm #Santander The Ferias y Fiestas de Colombia series takes us to Santander. In this episode we go enjoy ourselves at two fiestas: the Carnaval de Oriente in Málaga and the Festival de la Tigra in Piedecuesta. We take the opportunity to talk about the Santanderes, a region that brings together the diversity, history and culture of our country. We will also talk about two different fiestas, both great for partying, in which tradition, culture and new ways of celebrating are kept alive. Episode notes: «Un año en el Gran Santander», a video for understanding the diversity and history of the two Santanderes →https://repository.usta.edu.co/handle/11634/957 Here is a piece on one of the events that underpin the identity of the santandereanos, «La Revolución de los Comuneros» →https://www.banrepcultural.org/biblioteca-virtual/credencial-historia/numero-240/revolucion-de-los-comuneros Thinking of visiting Santander? Here are some facts about Málaga, the host municipality of the Carnaval del Oriente Colombiano →http://turisco.com.co/index.php/que-comer/comida-tradicional/carnes-y-pollo/53-municipios/provincia-de-garcia-rovira/520-municipio-de-malaga#municipio-de-malaga From the little we found on the Internet about the Carnaval del Oriente Colombiano, we share this link. Still, don't be fooled by its scant presence in the digital world: it is a tremendous Carnaval! →https://guadalupestereo.com/carnaval-del-oriente-colombiano/ Sightseeing in Piedecuesta, another way to get to know southern Santander →https://encolombia.com/turismo/destinos-turisticos/destinos-colombianos/santander/piedecuesta/ «Ruge la tigra en su Festival», the official page of the Festival de la Tigra →https://www.festivaldelatigra.org/ Follow us on our social media! Facebook: https://www.facebook.com/DianaUribe.fm/ Instagram: https://www.instagram.com/dianauribefm/?hl=es-la Twitter: https://twitter.com/dianauribefm?lang=es Website: https://www.dianauribe.fm
Buenas, hoy vamos a hablar del Carnaval del Oriente Colombiano y del Festival de la Tigra, que son dos historias de una de las regiones más diversas, más variadas y de mayores contrastes que existen en nuestro ya diverso, variado y contrastado país. Estamos hablando de los santanderes o del Gran Santander. Esta región es una región tanto geográfica como históricamente nodal, es súper importante, son tierras de contrastes en la naturaleza, en la cultura, son contrastes humanos, son un punto fundamental de nuestra formación como Estado Nacional, como historia de nuestras independencias; casi uno puede coger Santander como una especie de fractal, como un corte de todo lo que es Colombia, o sea, tomando esta región uno puede ver todo lo que es este país solamente en la diversidad de historias y geografías de esta región. Es una región, digamos, como si uno la tomara con una lupa: sería una reproducción en pequeño de todo lo que es el país y de todo lo que ha ocurrido en nuestra historia, y por eso es difícil categorizar, generar estereotipos o hacer generalizaciones sobre los santandereanos, porque hay una diversidad grande e importante. Esta región está determinada por la presencia de la Cordillera Oriental y el Valle del Río Magdalena. En la parte norte limita por el oriente con Venezuela, toda la frontera con Venezuela, lo cual además genera una simbiosis histórica, cultural, poderosa y antigua, porque nosotros nacimos como un solo país, eso no se nos podía olvidar nunca en la vida, que nosotros éramos la Gran Colombia como proyecto fundacional. Entonces esta geografía es montañosa, está la Cordillera Oriental que atraviesa de sur a norte, que hace ramificaciones en el nudo de Santurbán donde está el páramo, que es uno de los últimos páramos; les recuerdo que los páramos son formaciones muy raras en la tierra y la mayoría de estos están aquí en Colombia. Este páramo está altamente amenazado por toda la acción humana y de ahí nacen ríos como el Suratá, que es uno de los más importantes de la región porque son los ríos de donde sale el agua para Bucaramanga y para las demás poblaciones aledañas. Está el enorme, tremendo, fantástico y descomunal cañón del Chicamocha, que es un prodigio de la naturaleza. Digamos, yo con esta región tengo muchísima cercanía, casi filialidad, casi familiaridad, en un sentido cercano del alma: tengo que ir con muchísima frecuencia a Barichara y, cuando voy por Bucaramanga y atravieso el cañón del Chicamocha una y otra vez, no dejo de maravillarme, nunca se logra pasmar en mí el asombro gigantesco de verlo. Está la serranía de los Yariguíes, una formación montañosa grandísima, al occidente, que está separada de la Cordillera Oriental por la hoya del río Suárez, y están los territorios donde se encuentra el parque de los Yariguíes, una formación para preservar las últimas formaciones selváticas que sobreviven en Santander, porque aquí también hubo muchas selvas y también hubo mucha hacha; eso también forma parte de los imaginarios de cómo se crearon muchas de las construcciones y de las poblaciones de Santander. Digamos, ahí hay especies que están al borde de la extinción, todo eso; es una región que tiene toda una cantidad de diversidad natural y también una necesidad de protección de esa diversidad de la naturaleza, porque todas sus bondades también hay que cuidarlas mucho allá. 
En esta región la historia es particularmente fuerte porque en esta región la conquista fue brutal, fue impresionante; muchos de los guanes se suicidaron en masa cuando percibieron la llegada de los españoles y todavía en las cuevas hay venenos que están activos, de familias enteras que se suicidaron, en todas estas formaciones guanes que todavía nos guardan muchísimos secretos y de las que hay que saber aún mucho más. Estaban los Yariguíes, que le dan el nombre a la serranía, a la cual algunos en Socorro llaman la serranía de los cobardes porque dicen que en una batalla contra los Yariguíes los españoles salieron huyendo, entonces esa es otra forma de decirle a la serranía; están los agataes, están los chitareros, los chipatás, los laches, los barís, los motilones y había muchísimos más pueblos indígenas en el Departamento de Norte de Santander. Entonces aquí hubo como una confrontación muy brava y luego pueblos que llegaron a la extinción, y dicen que cuando se creó Barrancabermeja, que es uno de los centros neurálgicos más bravos en la historia de este país, fue cuando ya se hizo la última extinción de los pueblos indígenas, lo que marcaría una historia también terriblemente dramática en la ciudad. En el siglo XIX llegaron los alemanes, que son claves aquí porque son toda una franja, digamos, de historia de población que se refleja también en la misma fisionomía de la gente, de un poco de gente que viene del norte de Santander y también del norte de Boyacá, de una franja alemana que hizo una presencia muy importante a nivel histórico, a nivel comercial, a nivel científico, a nivel artístico, y está pues una de las figuras importantísimas que es Lengerke, que va a hacer una cantidad de caminos. Y en esta región han ocurrido muchas cosas, esta es tierra comunera, lo que les va a dar a ellos y a todos nosotros un sentido de orgullo, porque esta rebelión es importantísima, es una rebelión a la que le hacen falta miniseries, grandes producciones y miles de extras, porque esto merece muchísima más atención de la que tradicionalmente se le tiene a la rebelión comunera; y estas tierras comuneras desde Socorro y todo se van recorriendo, y está la figura de José Antonio Galán y de Charalá y toda una tradición que es bien importante para entendernos también nosotros en un proyecto posible de nación que fue la revolución comunera. También están las poblaciones de Pienta; en Pienta hay una batalla, esa batalla la pierden, pero esa batalla es importantísima porque hace que las tropas de los españoles no alcancen a concentrarse en el puente de Boyacá. Entonces, aunque la batalla en sí misma no se ganó, fue lo suficientemente estratégica para que los que estuvieran en el puente de Boyacá fueran los que estaban y no los refuerzos que venían de Santander; esa es una de las condiciones de éxito de la batalla de Boyacá y por lo tanto de la independencia, en esta secuencia de batallas que nos llevaría al nacimiento de un continente. También allá está Ocaña, la famosísima Convención, y está la Villa del Rosario, que son fundamentales no solamente para el proceso de independencia sino para el proceso de formación de nosotros como Estado Nacional, o sea, nosotros en la Villa del Rosario empezamos a adquirir, digamos, como una personería única; la historia como país más o menos es la Constitución de Villa del Rosario. 
Entonces, aquí hay una cantidad de puntos neurálgicos de nuestra independencia, de nuestra formación como Estado Nacional, de nuestra formación jurídica y también legal; es una tierra de pensadores, poetas, escritores, muy prolífica en ese sentido. También fue una región muy importante en el tiempo del tabaco y en el cultivo del café, y Málaga cuenta con una tradición de mujeres que han trabajado en la industria tabacalera desde hace muchísimas, muchísimas décadas, y aquí hay una serie de poblaciones que son absolutamente fantásticas y hay una serie de ferias y fiestas de las que ahorita vamos a hablar de dos, pero es que hay muchas, porque está el Festival de la Guabina y del Tiple en Vélez, Santander, que es la capital mundial del bocadillo, o sea eso es importante, y sobre todo con quesito. El traje típico de Vélez fue uno de los detalles que más preciosamente se cuidó y se llevó a la representación de Encanto: el personaje de Mirabel tiene un traje con la falda típica de Vélez. Están también las ferias del Socorro, que son muy antiguas, en Santander la tierra comunera, la tierra de Manuela Beltrán; está el carnaval de Ocaña, el carnaval del norte de Santander, que fue creado por un barranquillero y un santandereano y que es típico de los ocañeros, que también son gente muy, muy orgullosa de su región y de su historia. Está, digamos, aquí hay una cantidad de poblaciones de muchísima importancia en esta y en otras épocas. Pamplona también fue absolutamente fundamental en tiempos coloniales: ellos fueron muy importantes en el tiempo prehispánico, fueron muy importantes en el tiempo colonial, fueron muy importantes en el tiempo de formación de nosotros como Estado Nacional, en el tiempo de la independencia. Entonces toda nuestra historia está atravesada por los santanderes en las múltiples etapas en que nosotros lo hemos vivido. Así que cuando estamos hablando de tradición estamos hablando de una tradición muy importante para todo el país y que genera una cantidad de identidades propias y sentido de pertenencia de lo que significa ser santandereano, empezando por el acento, mano. ¡No vamos a mezclar acento, personal! 
La rolla y la Antonia Santos y José Antonio Galán Nacer es haber visado la Tierra Santanderiana Bendita por via creado una libertad colombiana Nacer es haber visado la Tierra Santanderiana Bendita por via creado una libertad colombiana Cura de casta valiente que con la patria se inspira Como el juicio hasta la muerte Nuestro diogarcía romida Otorre la patria mía que el Magdalena refleja Con su gran refilería de Nevarranca vermeja Son porros aquí en Florida, ven es mala la dinero Contra la sola vida De esta presencia se arreglió Nacer es haber visado la Tierra Santanderiana Bendita por via creado la libertad colombiana Nacer es haber visado la Tierra Santanderiana Bendita por via creado la libertad colombiana De toda esta diversidad nos vamos a meter en dos espacios particulares Nos vamos a meter en las ferias y fiestas del carnaval del Oriente colombiano en Malaga Aquí podíamos hablar infinitamente Porque aquí queda barichara tierra de mis más profundos amores Que es uno de los pueblos patrimonios de la historia de la humanidad Una joya colonial Absolutamente maravillosa en potrar en las montañas desde el siglo XVI Donde pasan todas las magias, todos los embrujos y todas las fascinaciones Que pueden pasar en estas tierras ya que en ese desdicas siempre con muchísimo amor Cualquier historia en donde ellos aparezcan Pero todo lo que nos vamos para Malaga es decir La capital de la región oriental que se conoce con el nombre de García Robira La región se llama García Robira, la capital es Malaga Dentro de toda la región que son los santanderes que en otra época fueron el antiguo gran santander Que después se divienen Santander del Norte de Santander del Sur Nosotros hicimos un recorrido por las regiones En la temporada anterior a nuestras ferias y fiestas con el RTVC Revisiten nuestra historia de la región Santanderiana para ver la formación de estos dos departamentos Como se crearon y todo pero digamos geográficamente tienen una profunda conexión Entonces dentro del gran santander está el norte y el sur Y dentro del norte y el sur hay también otro poco de regiones Por eso decimos que esto es un fractal porque tiene una cantidad de diversidades adentro Que son increíbles Una de estas regiones que es muy importante en la de García Robira Y allá en su capital hay una fiesta la más impresionante No tiene compadre yo por eso digo que vivas sin entre y es como sale del corazón Malaga es una señora una de nuestra que como en la herencia De sigo te cantan diciendo, te quiero malaga Y seguimos con la magia de malaga Esta fiesta no está en internet para que ustedes vean que no toda la representación del universo está en la red Y lo sé porque en para nuestra investigación no la encontramos en la red La encontramos en el amor, en la pasión, en el arte y en la maravilla De la aura y del maestro Luis Enrique Suárez y de Fernanda Suárez que nos dieron tanto amor en el relato de su fiesta Que nos transmitieron su alegría, su arte O sea para esto está la gente, la representación del arreto no es capaz de representar el universo entero en el que vivimos Eso es el nivel de chisme Esto es el resultado que van y nos cuentan de estas fiestas Siempre que nosotros vamos a contar estas historias Tenemos un corazón, un latido, una vida, alguien para quienes estas fiestas son la vida entera Y nos empiezan a contar malaga fue fundada por Jerónimo de Aguayo en 1542 Y las fiestas patronales, acuerdes de que las fiestas patronales siempre hacen alusión al Santo Patrón Aquí el Santo Patrón es San Jerónimo que es como el que 
marca las fiestas Se hacen durante el puente de reyes La mayoría de las fiestas que hemos contado se hacen durante los puentes de reyes, pues están el carnaval de negro, si blanco, si están en el de río sucio, está a este Que es así muy importante y aquí vienen las familias y vienen las días por aso, o sea mucha gente que en este momento no viven malaga, viene exclusivamente para estas fiestas como una cita con la alegría y con la tradición, entonces ellos escogen a San Jerónimo como patrono del pueblo Y a la vez esto cayó del fundador del pueblo y empiezan a montar una tradición que ya casi tiene 100 años Y esto viene con un desfiles central de carrosas, pero las carrosas son hechas en materiales, ecológico, las carrosas son hechas también en papel maché, o sea, alrededor de las carrosas Hay todo un trabajo artístico, artesanal, como sucede, digamos con nuestras otras historias Aquí hay un trabajo meticuloso detallado, entonces vienen las carrosas y las carrosas son típicos de su región, como en el caso de concepción Uno de los municipios cercanos a Malaga donde la carrosa típica representa las ruanas, esta rey en eso montañosa, entonces dentro de nuestra historia de los pisos térmicos Aquí hay climas fríos y climas templados, digamos es importante porque el imaginario de nuestro país caliente es cierto en muchos lados pero no en otros Entonces aquí hay diferencias térmicas importantes por la cordillera que les cuento, por lo tanto los de concepción tienen como atuendo típico la ruana Entonces aquí vienen las carrosas y las carrosas también vienen no solamente de todos los municipios aledaños porque esta fiesta llama a todos más de 25 municipios que están alrededor de Malaga vienen para la fiesta sino que Malaga misma tiene un montón de barrios y cada uno de esos barrios tiene sus carrosas, entonces todo el mundo va a llegar con las bandas musicales, con los trajes típicos, con las carrosas y cómo se acomodan, cómo puedan, eso pueden llegar hasta hasta 15 personas en una sala de bajo como de orden esteras, todo el mundo viene y todo el mundo se acomodan, los reciben en las casas, en multitudes y esto es un gentil preparándose para festejar en serio y como hemos visto que las ferias y fiestas tienen estos personajes como Joséito, Canavale, en Barranquilla, en Malaga está pericles y pericles es la encarrenación y la máxima autoridad de las fiestas y es el que guía las carrosas y las comparsas como el director de la fiesta representa con un sombrero, con un cubilete y aquí vienen todas clases de ventos, viene la fiesta de la gru, viene la feria gastronómica, viene la feria ganadera, que es muy importante en la región y es muy importante también en las festividades de las ferias del socorro, tiene más de 100 años de tradición y que venía por el río, viene también la gente de la capital caprina de Colombia de Capitanéjo, que tiene historias mágicas increíbles de donde vienen las ovejas y las cabras, vienen las verbenas, llevan los grupos musicales, vienen la música popular, el ballenato, el meren, el meren, que es particularmente poderosa, porque ellos tienen lazos fuertes con Boyacá, de donde viene la carranja y también tienen lazos con manizales, de donde vienen los pasodobles, porque hay una identidad con la región andalusa, que los emparenta con la beta española por el lado de manizales, esto para que ustedes vean el nivel de diversidad del que estamos hablando. 
Entonces, aquí vienen desde la carranja hasta los pasodobles y vienen todos los pueblos con sus verbenas y vienen los dulces, esta región de dulces, entonces vienen las panuchas, los dulces de arroz y de coco y todos los dulces específicos de malaga, lo que le da a la feria, un sabor delicioso en el dulce y en la gastronomía, en otro época llegada también un circo que recorre todo el país para las ferias, hay basares, hay eventos culturales, hay procesiones, se asanjeron de todo, de todo, o sea, literalmente, todo como embotica, porque aquí hay un carnaval campesino que es súper importante, porque representa a los tejidos las mujeres tejeduras, el arte del tabaco, tiene carrosas, ese carnaval campesino es un punto de identidad muy importante ahí, es previo al carnaval del oriente y es un punto de identidad muy importante y de pertenencia, porque eso representa la tradición los tejidos, las mujeres tejedoras, el arte del tabaco, también las carrosas, o sea, es fundamental como en la reigambre de todo lo que se está representando acá, también hay otro elemento que hemos visto en muchas otras fiestas y que aquí tiene una importancia capital que son los matachines, se acuerda que también los vimos en Río Susio, también los hemos visto en muchas otras fiestas, del 16 de diciembre al 24, o sea, durante el tiempo de la novena, en toda la región están los matachines, están hasta en Bucaramanga, que son las cirosas y todos los matachines salen al carnaval, pasa como con los cachaceros, se acuerda que en escadrillas de San Martín, si usted le tiene miedo al cachacero, el cachacero, lo persigue toda la fiesta, bueno, lo mismo pasa con los matachines, con las funciones generan un poco de caos y pánico festivo, porque van con una vejiga curada, que también no hemos visto en el carnaval de Río Susio, y le dan vejigasos a la gente, y la gente sale corriendo y hay un poco de recochas, generalmente ellos están cubiertos, entonces el personaje, el artista, el inspirador, el maestro, Luis Enrique Suárez, que nos cuenta estas historias, es una persona que tiene toda su vida comprometida en estos carnavales y en estas ferias, y resulta que él es el matacín conocido, porque él se pinta, todo el mundo sabe quién es, la gracia de los matachines, lo mismo que con las marimondas en la carnaval de Barracía, es que tú no sabes quiénes son, entonces pueden ser tus grandes amigos o tus padres o tus tíos los están basilando en la fiesta, él no se pone las máscaras que son propias de los matachines y lo que se pinta, entonces es uno de los personajes como el matachín conocido, que empezó a pintarse años atrás, y al principio cuando se pintaba eso no era bien visto, hasta que le introdujo esta forma de pintura y de arte dentro del carnaval, dentro de toda la manera como los personajes modifican, crean y construyen las fiestas, entonces esta fiesta ha logrado mantenerse por muchísimo tiempo y ha logrado atravesar todo esto, este retorio sufrir una violencia muy brava durante los años 50, ha sufrido muchas formas de conflicto porque por año han llegado todas, todas las formas de conflicto que hemos tenido que vivir, se han presentado en esta región y se han presentado de manera muy dura, y la fiesta sigue, porque resulta que entre nosotros la fiesta sigue, es una de las formas más grandes de resileín, si a que nosotros tenemos, hemos vivido nuestra historia entre las fiestas y las parrandas, y eso no es defini, nos defini también en una gran medida. 
Yo traigo la gozadera, que les saca chispa a tus caderas, yo traigo la gozadera, que ya el moro tal la pexa de ella, yo vengo de feinte, ganando mi parranda, yo vengo a divertir de cinte, como yo mata, saca el espazon, que soy ya no te falta, y le meto hasta los yo acá con tu amiguita, y me baire de mi fuego, yo meto la cantena, y le meto la energía, hasta la javuela, yo rompo la piñata con tantas gozaderas, le saca el dolor, tan negra la pensadera. Yo traigo la gozadera, que les saca chispa a tus caderas, yo traigo la gozadera, que ya el moro tal la pensadera. Yo traigo la gozadera, que les saca chispa a tus caderas, yo vengo de feinte, ganando mi parranda, yo vengo a divertir de cinte, como yo mata, saca el espazon, que soy ya no te hace falta, y le meto hasta los yo acá con tu amiguita. Esta provincia ha traído migraciones de muchos partes y muchas dias curas, y hace que este carnaval sea la alegría, el orgullo, la felicidad, para la gente de Málaga, que tiene una identidad muy particular, que tiene un asento diferente, al asento característico Santanderiano, que es de una manera muy particular en el sur y un poquito más golpea y tu en el norte. Entonces es un asento que no existe en el resto de Colombia, porque también a nosotros nos definen los asentos, en todas estas ferias y fiestas las ferias también tienen asentos, segundo donde se hagan, tienen al asento pastuzo, los carnavales de negro y blanco, o el asento país en la feria de las flores, o el valluno en la feria de Cali, o el barranquilleron, el carnaval de barranquilla, aquí están los diferentes asentos Santanderianos, y la gente de Málaga tiene un asento ligeramente distinto, y tiene particularidades en su cultura y hace parte del mosaico impresionante, que es Santandero. Aquí empieza el espacio comercial. Todas las regiones de nuestro país han pasado por la serie de ferias y fiestas de Colombia. 
En este capítulo, el turno es para Santander, y el oriente colombiano, señal memoria también guarda imágenes, audios y voces de Santander, y en todas las regiones del país nos invitamos a conocer estos archivos en la página www.señalmemoria.com Y después de esto nos vamos con una feria que encontraste con toda la tradición, la antigüedad, que tiene el carnaval del oriente colombiano en Málaga, es una fiesta que tiene solamente 6 años, de construye y de creada, pero es muy importante, porque es el presente, es el futuro, es la voz de las nuevas generaciones, nosotros normalmente en las fiestas hemos hablado de tradiciones, que se han elaborado a lo largo de muchas décadas a veces siglos, y hemos creado el imaginario de un país a través de las fiestas, que se reconoce su ascendencia en su pasado, y en la conservación, de toda esa ancestralidad, hasta llegar al presente como un testimonio de su paso por la cultura y por la tierra donde habitan, eso ha sido, digamos, como una constante en todas las recorridos de las fiestas que hemos hecho, este no, este nuevo, se lo acaban de inventar, pero está fantástico, es el festival de la tigra, y aquí nos metemos con dos grandes, Edson Belandia y Adrián Aliscano, estos son palabras mayores, son palabras mayores porque esta gente está construyendo una narrativa de país, a partir de unos ejes, nuevos y distintos, y esta gente está comprendiendo el lenguaje del futuro, y esta entendiendo que los tiempos cambian, que las historias generan la necesidad de nuevas lecturas, de nuevas miradas, ellos son esta nueva mirada con un talento artístico absolutamente desbordante, con una capacidad de canto y de, ahí sí como dicen las Santanderianos, de cantar las verdades mano, yo le canto una verdad, ellos cantan las verdades, si, de los creadores de los corredos mexicanos que le cantaban, a usted una verdad, bueno esta gente cantaba verdad eso, el festival de nación en el 2017, fíjate que no tienen ni 100 años, de 50, 80, nación en el 2017, los festivales también pueden hacer, y nosotros también podemos ser testigos de nuevos festivales que están haciendo, y que llegaran a construir imaginarios que no podemos siquiera pensar ahorita cuando arrancan, pero esto es un proyecto colaborativo y es un proyecto colectivo, y se creó para reactivar espacios de cultura, y intercambio en pie de cuesta, Santander, esto es en pie de cuesta, el otro es en malaga y esto es en pie de cuesta y eso parece, parece regiones muy diferentes, pero seguimos hablando de Santander, entonces el festival empieza y se origina en un gigantesco fracaso, como pasan muchas cosas en la vida, resulta que durante una protesta de la minga indígena en el 2016, hecho en Belandia y el combo que de músicos que lo acompañaban decidieron hacer una fiesta para apoyar la protesta, y se llama música pinga para la minga, y se fueron para allá pero eso salió pésimo pero mal, porque llegó un parche que era distinto al de la minga, que también estaba en otra protesta, o sea la minga estaba protestando, llegó otro combo que también estaba protestando, Belandia iba allá a cantar de a los que estaban protestando, y resulta que el otro combo, que también estaba en la protesta, pero no era parte de la minga, bloqueó con una extractó mulas la entrada al concierto, nadie llegó al concierto, se armó una pelea con la policía, los músicos salieron corriendo de ahí con sus instrumentos con su música, y aquello salió pésimo espantosamente mal, así que Belandia quedó muy aburrido, yo no, pero yo si quería hacer un festival o un 
pleno, la idea era buena, no puedo salir peor, pero la idea era buena, entonces dije no, yo me va a inventar un festival bien bacano, con otros parámetros, autogestionado, auto surgido por nosotros, creado por ellos mismos, amano, y aquí es como una parte fundamental, un festival que unre la naturaleza, los paramos, la fauna, y la gente de piedra de cuesta, eso son como los puntos nodales del festival, y le pone la tigra, la tigra en honor al jaguar, a la jaguara, a la tigra, no solamente por su en nuestro totem, no solamente por ser el felino más importante, que todavía está en los montes, que ruje, sino porque también es una forma de reivindicar el monte, el monte como una realidad de nuestra geografía, cuando yo les digo que Santander es muy variado, es porque también hay toda una tradición de lacha, de desmontar para crear ciudades que ha generado, todo un imaginario, y eso también existe, y para ellos ese imaginario es muy importante como lo fue en la colonización anti-oqueña, eso es una vertiente de país que se ha enfrentado a una naturaleza salvaje y la ha dominado, y eso es una lectura, esta es otra lectura totalmente diferente en donde el monte no es un obstáculo para la creación de ciudades, sino una riqueza para la reivindicación de cultura, todo esto está pasando en el mismo Santander, porque los imaginarios aquí se entran cruzan de maneras increíbles, entonces resulta que el combo de ellos reivindica el monte, y también la vereda le dañapia de cuesta que se llama la tigra, por eso se llama el festival de la tigra, y lo montaron, y llegaron los músicos, y los músicos cada uno se autogestiona su llegada allá, o sea no tiene patrocínios, y en ese sentido tiene como ese espíritu del festival de Wuchtock, que era una autopía, y que no era patrocinado por nadie, sino por el intento de crear un imaginario a través de la música, es un poco de esa manera, entonces acá, acá cual llega y se hace cargo de su propio parche, lo cual ya le da una característica muy particular, y hace que el hecho de que siga sobreviviendo, se amila groso, porque implica mucha fe y mucha credibilidad de la gente, entonces resulta que este no ha parado, y no logró parar ni siquiera con la pandemia, porque tiene una característica distinta a todos los demás festivales, y es que ocurre cuando puede, o sea no es el seis de enero, no es antes de la cuaresma, no es el día del santo o del patrono, no tiene esa filiación a una fecha, a un momento del año, en donde se convoca a toda la comunidad para entrar en la celebración de la fiesta, sino que se hace en el año, pero cuando se puede, entonces esto lo hace móvil, y lo hace también tremendamente flexible, resulta que lo hicieron en febrero en el 2020 y después en marzo vino la pandemia, y el confinamiento, entonces el festival por un pelito, por un pelito alcanzó a ocurrir, y bueno, y alcanzó a ver toda la gozadera antes de que veniera, todo esa penuridad de la pandemia, el festival por su propia naturaleza es eclecrico, es decir, no tiene una marca de identidad, como todos los demás festivales que hemos visto, que tienen una marca de identidad y de pertenencia, y a pesar de las diversidades, tienen digamos como un patro en una medida, un enfoque dentro de la música en general, este es distinto en eso, entonces aquí cada todo el mundo, entonces aquí vienen la carranca, viene el punk, la música de cámara, los coros, el metal más bravo, eso permite la convergencia de todos los ritmos, como digamos como con el tiempo llegó a ser local parque, si que al principio tocaba 
todo el mundo en un sitio distinto y ahora todo el mundo puede tocar, entonces esto llega a ser con todos los ritmos de toda la latitud, la única característica, el único recorre que sí, que sí, para montarse en esa tarima es una calidad musical enorme, o sea, hay una curaduría musical muy poderosa, y tiene otra característica que la vimos en el Green Moon Festival, y es que esto está acompañado de talleres de cine, de lectura, de medio ambiente, el Green Moon Festival cuando nosotros lo vimos en la temporada pasada, también tiene un acompañamiento de conocimiento y de cultura de la isla, es una manera en que los raizares muestran toda su tradición, su cultura, y lo hacen de una manera a través lo mismo de talleres, de una cantidad de explicaciones, o el sentido en que el jamming tenía también un jamming académico, para explicar todo el sentido, digamos, histórico anticolonial de muchas cosas que el Reggae significa como música, pues eso también tiene una serie de talleres, y tienen una serie de actividades cinematográficas, poéticas, literarias, de todo, y es una manera de ayudar a la gente a enseñar, aquí hay una pedagogía del medio ambiente, porque la clave del festival es el medio ambiente, entonces hay una pedagogía para ayudar a salvar el agua, que siempre es un punto grave en la región, es una región donde las aguas hay que cuidar la de mucha gente, hay mucha sequía, hay estorraques, hay unas formaciones increíbles, hay que salvar el agua, hay cineforos, hay taller de música, que va acompañando, digamos, como con una lectura paralela, para entender el sentido del festival, que es un festival musical, pero que tiene esta acompañamiento, digamos, artístico en el sentido del cine de los talleres y todo como el Greenwood, que es cuando un mensaje se da a través de muchas barcientes, eso también es parte del festival de la tigra, que es musical, pero que tiene todos estos elementos. Carmen, por eso no voy a olvidarte, si ahora te llevo dentro, voy dentro de mi fecho, Carmen, a ella la saredeno, a ella la saredeno, a ella la saredeno. Carmen, pero me queda tu retrato, y el lindo pañuelito blanco, y el riso de tu cabello, Carmen, y el riso de tu cabello, Carmen, y el riso de tu cabello, Carmen, y el riso de tu cabello. Carmen, y moronita con sentía, tu ya eres para ver mi vida, tu ya en la saredeno, Carmen, tu ya en la saredeno, Carmen, tu ya en la saredeno, a ella la saredeno. Este festival tiene un compromiso con el futuro, porque bueno, yo les conté que el siglo XX se acabó, ¿no? Es que en el siglo XX una de las cosas son distintas, hay nuevas preocupaciones, el cambio climático, la construcción de espacios de paz, hay nuevas lecturas, una nueva generación ha llegado al planeta, y la nueva generación que llega al planeta, viene a evitar un planeta en peligro, y viene a evitar un planeta del que se tiene que empuera rapidito, para poder evitar en él, porque la cosa es grave. 
Entonces, esto es la nueva generación, la importancia de traer para nosotros el festival de la Tigra, y la otra parte del reconocimiento artístico, con toda admiración del respeto al trabajo hecho en la Tigra y Adriana Liscano, es porque hay una nueva generación con una nueva historia, y esa nueva generación con esa nueva historia, con esa nueva idea de hacer política, de hacer paz, de hacer ecología, de construir imaginarios, de crear otras formas de evitar el planeta, y viene es una tradición que se construye dentro de nuestras ferias y fiestas, entonces así como hablamos de las grandes tradiciones, hablamos de las que se están formando, así como hablamos de los ancestros que nos han traído la fiesta, hablamos de los presentes que nos están construyendo las tradiciones, y ellos también serán ancestros, que a partir de aquí tendrán una mirada de planeta y de medio ambiente muy distinta, porque esto genera forma de conciencia nuevas. Entonces, en ese sentido es distinto a los demás festivales, por lo que les digo, porque está gestionado, porque no tiene una fecha física, porque se hace cuando se puede, porque canta otro tipo de cosas, porque tiene otro tipo de músicas, pero las ferias y fiestas tienen eso, la posibilidad de ser diversas, de recoger tradiciones o de construirlas, de contarnos nuevas historias o de revelarnos historias de origenes, de traer a todo el mundo y convocarlo, para una gozadera y una catarsis, siempre cuando empezábamos con nuestras ferias y fiestas en la temporada pasada hablábamos de lo apolíneo y de lo Dionysiaco como espacios de la psiquizumana, donde la gente se pone por un lado a la construcción que hacían los griegos de la geometría, de la matemática, de la red de donde es de la tierra, y por el otro lado, la fiesta total de la Dionysiaca, que eran los tres meses dedicados a Dionysio, o que los romanos llamaban Baco, el Dios del Vino. Las fiestas todas son para lo mismo, para gozar, para Dionysio, para la Dionysiaca, para la fuerza vivificante, para el alma, para la alegría, el carnaval desean los de malaga, alimentan nuestra vida y nos da sentido a la vida, y lo desean con una alegría que se proyectaba en los ojos que les hace brillar los ojos cuando hablan de su carnaval, el festival de la tigra nos trae nuevas miradas y nos trae nuestros nuevos ojos de lo mismo, la gozadera, porque en unos y en otros vamos a pasar la bueno, con diferentes símbolos, con diferentes alegrías, pero con la idea de que el festival es la manera como él le vamos a la naturaleza nuestro canto por estar vivos. Entonces, dentro de toda la diversidad, que significa esta región de Santander, donde salen tantas miradas de nuestra historia y nuestra geografía, nosotros recorremos, estas dos fiestas, como formas de contraste entre todas las diversidades, de esta región que recogé, casi que un fresco como un mosaico, de todo lo que somos, y a la cual rendimos un enorme tributo desde la música desde el alma y desde la geografía, con esto relato de dos de sus muchísimas fiestas, de sus muchísimas alegrías, de esta gente escarpada en las montañas, en los paramos, en los ríos y en los cañones. Cuando es orítica en agosto, sí, pero, pero orítica por ejemplo, este año es el 8 de agosto, ¿quién es bien? 
Pues imagínate que va a haber una homenaje a Pablo Gajenasus, que fue como uno de los arquitectivos de la canción protesta y atita pulido y viene gente, vería la ojeda, hacoóveles y la mamá negra, no amorales, los rolling ruinas que son, pues, absolutamente memorables, se va a estiampéña, en que le basura y sea natural esa suprema, en E. Hardham, en Sample Paramu, E. Fox, alto bajo, latinsón, Adriana Liscano, evidentemente, y Edson Belandia, papacitos ambos, viene, Sagraf, lo convia, de Argentina también viene ese que el Susterman, con su ejemplo de abatucadas, viene Lucio Feuillet, la panela, la alvariza, el quinteto coral universitario de la Wyss, y E. Karenjana, o sea, todos son combos de gente que viene a traer nuevas historias, nuevas música, nuevos sonidos, nuevas formas de entender el mundo, que es también parte de la riqueza de todas las ferias y las fiestas, que nos asoman, a los imaginarios y a los entendimientos de regiones, de mundos, de culturas, de alegría, y nos enriquecen a todos, dentro de esta relato de país que es tan maravilloso de ferias y fiestas, que siempre es un honor contar, porque nos está metiendo con la parte más amorosa, dulce y fantástica y crítica y maravillosa y satírica de país, que son las ferias y fiestas, y que siempre es asomarse a los universos más vacanos que tenemos nosotros. Entonces, desde los espacios de los paramos del gorribo, del cañón del chica mocha, de la ecología, de malaga de su alegría, de las compasas, de las teciduras, de los tiempos del tabaco, de la cantidad de gente metida en las casas de malaga para gozar, de los matachines, de la protección al paramo de los mensajes ecológicos, de Belanda y la tigre a Adriana, de todas las historias que ellos cuentan de todos los músicos que vienen a acompañarlos, y desde todo esta musicalidad alucinante, que les estamos sugiriendo contando y imaginando en este espacio tan fantástico de asomarse a las tierras maravillosas de Santander, en la narración de Ana Uribe, y para ustedes, feliz domingo. Este podcast fue posible gracias al equipo de la Casa de la Historia. Y a Nathuárez, Milena Beltrán, Arturo Jiménez Piña, grabado en los gatos estudio, la adición y la musicalización de Eduardo Corredor Fonseca, de Rueda Sonido, y contamos con Daniel Shruts, que está con nosotros acompañándonos de Aquena del Ante, y que lo introducimos en nuestro relato con mucha alegría. Agradecimientos muy especiales para este relato a Laura Kasman, a los maestros, Luis Enrique Suárez, Fernanda Suárez, Artistas de Málaga y de la provincia de García Robira, y a Echom Belandia y Adriana Liscano, en pie de cuesta, con mucho cariño se les agradece a todos ellos su amor y su relato. Y siempre con la ayuda fuerte y poderosa, de Santiago Espinoza Uribe y Laura Rojasaponte, del podcast de Internet. Y que yo no puedo ver con como vos desquietos, hay yo quiero que te quede, que me lleve de todos, donde no me encuentre, un labo sin ego, donde rompe el tiempo en ser verdad de ser silenzo, donde las acciones se vinieron y van a ser. ¡Esto es tu mel問題!
[{"start": 0.0, "end": 22.04, "text": " Buenas, hoy vamos a hablar del carnaval del Oriente Colombiano y del Festival de la Tigra,"}, {"start": 22.04, "end": 30.799999999999997, "text": " que son dos historias de una de las regiones m\u00e1s diversas, m\u00e1s variadas y de mayores contrastes,"}, {"start": 30.799999999999997, "end": 37.48, "text": " que existe en nuestro ya diverso, variado y contrastado pa\u00eds."}, {"start": 37.48, "end": 67.44, "text": " Estamos hablando de los santanderes o del Gran Santander."}, {"start": 67.48, "end": 97.44, "text": " Esta regi\u00f3n es una regi\u00f3n tanto geogr\u00e1fica como hist\u00f3ricamente"}, {"start": 97.44, "end": 106.4, "text": " noval, es s\u00faper importante, son tierras de contrastes en la naturaleza, en la cultura, son contrastes"}, {"start": 106.4, "end": 113.4, "text": " humanos, son un punto fundamental de nuestra formaci\u00f3n como Estado Nacional, como historia de nuestras"}, {"start": 113.4, "end": 121.92, "text": " independencias, casi uno puede coger Santander como una especie de fractal, como un corte de todo lo"}, {"start": 121.92, "end": 129.48, "text": " que es Colombia, o sea tomando esta regi\u00f3n uno puede ver todo lo que es este pa\u00eds solamente en la"}, {"start": 129.48, "end": 136.0, "text": " diversidad de historias y geograf\u00edas de esta regi\u00f3n. Es una regi\u00f3n, digamos como si uno lo"}, {"start": 136.0, "end": 141.88, "text": " tomaran en una lupa, ser\u00eda una reproducci\u00f3n en peque\u00f1o de todo lo que es el pa\u00eds y de todo lo"}, {"start": 141.88, "end": 148.52, "text": " que ha ocurrido en nuestra historia y por eso es dif\u00edcil categorizar, generar estereotipos"}, {"start": 148.52, "end": 155.20000000000002, "text": " a ser generalizaciones sobre los santanderianos porque hay una diversidad grande e importante."}, {"start": 179.24, "end": 182.28, "text": " S\u00e1 Tudo Ser\u00eda"}, {"start": 200.12, "end": 203.12, "text": " un"}, {"start": 203.12, "end": 232.98000000000002, "text": " Esta regi\u00f3n est\u00e1 determinada por la presencia de la Cordillera Oriental y el Valle del"}, {"start": 232.98, "end": 240.89999999999998, "text": " R\u00edo Magdalena. En la parte norte limita con el Oriente con Venezuela, toda la frontera con"}, {"start": 240.89999999999998, "end": 248.78, "text": " Venezuela lo cual adem\u00e1s genera una simbiosis hist\u00f3rica, cultural, poderosa y antigua,"}, {"start": 248.78, "end": 254.98, "text": " porque nosotros nacimos como un solo pa\u00eds, eso nos nos pod\u00eda olvidar nunca en la vida,"}, {"start": 254.98, "end": 259.86, "text": " que nosotros era muy la gran Colombia como proyecto fundacional. 
Entonces en la esta"}, {"start": 259.86, "end": 268.14, "text": " deograf\u00eda es monta\u00f1osa, est\u00e1 la Cordillera Oriental que atraviesa a Esura Norte, que hace ramificaciones"}, {"start": 268.14, "end": 273.42, "text": " en el nuevo de Santurb\u00e1n donde est\u00e1 el paramo, que es uno de los \u00faltimos paramos, les recuerdo que"}, {"start": 273.42, "end": 280.02000000000004, "text": " los paramos son formaciones muy raras en la tierra y la mayor\u00eda de estos est\u00e1n aqu\u00ed en Colombia."}, {"start": 280.02000000000004, "end": 286.62, "text": " Este paramo est\u00e1 altamente amenazado por toda la acci\u00f3n humana y de ah\u00ed nacen los r\u00edos de"}, {"start": 286.62, "end": 290.54, "text": " Surat\u00e1, que es uno de los m\u00e1s importantes de la regi\u00f3n porque son los r\u00edos de donde sale el"}, {"start": 290.54, "end": 297.34000000000003, "text": " agua para Bucaramanga y para las dem\u00e1s poblaciones aleda\u00f1as. Est\u00e1 el enorme tremendo fant\u00e1stico"}, {"start": 297.34000000000003, "end": 303.98, "text": " de escomunal ca\u00f1\u00f3n del Chiquamocha, que es un prodigio de la naturaleza. Digamos yo con"}, {"start": 303.98, "end": 311.46, "text": " esta regi\u00f3n tengo much\u00edsima cercan\u00eda, casi filialidad, casi familiaridad, en un sentido cercano"}, {"start": 311.46, "end": 318.78, "text": " del alma, tengo que ir con much\u00edsima frecuencia a barichara y cuando voy por Bucaramanga y atraviesa"}, {"start": 318.78, "end": 325.62, "text": " al ca\u00f1\u00f3n del Chiquamocha una y otra vez no dejo de maravillarme, nunca logra pasmar en mi"}, {"start": 325.62, "end": 330.85999999999996, "text": " el asombre o gigantesco de verlo. Esta la cerran\u00eda de los Jariquilles, una formaci\u00f3n"}, {"start": 330.85999999999996, "end": 336.58, "text": " monta\u00f1osa grand\u00edsima, al oxidente que est\u00e1 separando la Cordillera Oriental por la"}, {"start": 336.58, "end": 342.58, "text": " olla del r\u00edo Soares y est\u00e1 en los territorios donde se encuentran en parque de los Jariquilles"}, {"start": 342.58, "end": 348.65999999999997, "text": " en una formaci\u00f3n para preservar las \u00faltimas formaciones selb\u00e1ticas que sobrevi\u00e9 en Santander"}, {"start": 348.65999999999997, "end": 353.46, "text": " porque aqu\u00ed tambi\u00e9n hubo muchas selvas y tambi\u00e9n hubo mucha hacha, eso tambi\u00e9n forma"}, {"start": 353.46, "end": 358.82, "text": " parte de los imaginarios de c\u00f3mo se crearon muchos de las construcciones y de las poblaciones"}, {"start": 358.82, "end": 364.29999999999995, "text": " de Santander. Digamos ah\u00ed especies que est\u00e1n al borde de la extinci\u00f3n, todo eso, es"}, {"start": 364.3, "end": 370.86, "text": " una regi\u00f3n que tiene toda una cantidad de diversidad natural y tambi\u00e9n una necesidad"}, {"start": 370.86, "end": 376.02000000000004, "text": " de protecci\u00f3n de esa diversidad de la naturaleza porque todas sus bondades tambi\u00e9n hay que"}, {"start": 376.02000000000004, "end": 383.46000000000004, "text": " cuidarlas mucho all\u00e1. 
En esta regi\u00f3n la historia es particularmente fuerte porque en esta"}, {"start": 383.46000000000004, "end": 392.14, "text": " regi\u00f3n la conquista fue brutal, fue impresionante, muchos de los guanes se suicidaron en m\u00e1s"}, {"start": 392.14, "end": 397.97999999999996, "text": " a cuando percibieron la llegada de los espa\u00f1oles y todav\u00eda en las cuevas venenos que est\u00e1n"}, {"start": 397.97999999999996, "end": 404.97999999999996, "text": " activos de familias enteras que se suicidaron en todas estas formaciones guanes que todav\u00eda"}, {"start": 404.97999999999996, "end": 409.7, "text": " nos guardan much\u00edsimos secretos y de las que hay que saber a\u00fan mucho m\u00e1s, estaban"}, {"start": 409.7, "end": 415.38, "text": " los Jariquilles quedan en nombre de la cerran\u00eda, pues la cual algunos en socorro llaman la"}, {"start": 415.38, "end": 420.02, "text": " cerran\u00eda a los cobardes porque dicen que en una batalla los espa\u00f1oles salieron huyendo"}, {"start": 420.02, "end": 426.21999999999997, "text": " en una batalla contra los Jariquilles, entonces esa es otra forma de decir a la cerran\u00eda,"}, {"start": 426.21999999999997, "end": 433.5, "text": " est\u00e1n los agatais, est\u00e1n los chitareiros, los chipatas, los laches, los barri, los motilones"}, {"start": 433.5, "end": 438.06, "text": " y hab\u00eda much\u00edsimos m\u00e1s pueblos indienes en el Departamento de Norte de Santander."}, {"start": 438.06, "end": 443.38, "text": " Entonces aqu\u00ed vamos como una confrontaci\u00f3n muy brava y luego pueblos que llegaron a la"}, {"start": 443.38, "end": 448.7, "text": " extinci\u00f3n y dicen que cuando se cre\u00f3 barra, caermeja, que es uno de los centros"}, {"start": 448.7, "end": 452.9, "text": " neuraljicos m\u00e1s bravos en la historia de este pa\u00eds, pues cuando ya se hizo la \u00faltima"}, {"start": 452.9, "end": 457.02, "text": " extinci\u00f3n de los pueblos ind\u00edgenas, lo que marcar\u00eda una historia tambi\u00e9n terriblemente"}, {"start": 457.02, "end": 462.7, "text": " dram\u00e1tica en la ciudad. 
En el siglo XIX llegaron los alemanes que son claves aqu\u00ed porque"}, {"start": 462.7, "end": 468.46, "text": " son toda una franja, digamos de historia de poblaci\u00f3n que se refleja tambi\u00e9n en la"}, {"start": 468.46, "end": 472.98, "text": " misma fisionem\u00eda de la gente para un poco de gente que viene en el Norte de Santander"}, {"start": 472.98, "end": 479.26, "text": " y tambi\u00e9n en el Norte de Oyaca, de una franja alemana que hizo una presencia muy importante,"}, {"start": 479.26, "end": 484.82, "text": " a nivel hist\u00f3rico, a nivel comercial, a nivel cient\u00edfico, a nivel art\u00edstico y est\u00e1"}, {"start": 484.82, "end": 490.74, "text": " pues una de las figuras important\u00edsimas que es lenger que va a ser una cantidad de caminos."}, {"start": 490.74, "end": 496.5, "text": " Y en esta regi\u00f3n han ocurrido muchas cosas, esta es tierra como un era, lo que les va"}, {"start": 496.5, "end": 501.90000000000003, "text": " a dar a ellos a ellos y a todos nosotros, un sentido de orgullo porque esta rebeli\u00f3n"}, {"start": 501.9, "end": 507.7, "text": " es important\u00edsima, es una rebeli\u00f3n a la que le hacen falta, milic\u00e9ries grandes producciones"}, {"start": 507.7, "end": 512.74, "text": " y miles de extras, porque esto merece much\u00edsima m\u00e1s atenci\u00f3n de la que tradicionalmente"}, {"start": 512.74, "end": 517.5, "text": " se le tiene la rebeli\u00f3n como un era y estas tierras como uneras desde su corro y todo se van"}, {"start": 517.5, "end": 523.8199999999999, "text": " recorriendo y est\u00e1 a la figura de Jos\u00e9 Antonio Gal\u00e1n y de Charal\u00e1 y toda una tradici\u00f3n"}, {"start": 523.8199999999999, "end": 528.86, "text": " que es bien importante para entendernos tambi\u00e9n nosotros en un proyecto posible de naci\u00f3n"}, {"start": 528.86, "end": 534.86, "text": " que fue la revoluci\u00f3n comunera, tambi\u00e9n est\u00e1n las poblaciones de pienta, en pienta hay"}, {"start": 534.86, "end": 540.5, "text": " una batalla, esa batalla la pierden, pero esa batalla es important\u00edsima porque hace que las"}, {"start": 540.5, "end": 545.62, "text": " tropas de los espa\u00f1oles no alcancen a concentrarse en el puente de Boyac\u00e1."}, {"start": 545.62, "end": 551.1, "text": " Entonces, aunque la batalla en s\u00ed mismo no se gan\u00f3, fue lo suficientemente estrat\u00e9gica"}, {"start": 551.1, "end": 555.5, "text": " para que los que estuvieran en el puente de Boyac\u00e1 eran los que estaban y no los refuerzos"}, {"start": 555.5, "end": 560.02, "text": " que ven\u00edan de Santander, esa es una de las condiciones de \u00e9xito de la batalla de Boyac\u00e1 y por"}, {"start": 560.02, "end": 566.3, "text": " lo tanto de la independencia en esta secuencia de batallas que nos llevar\u00eda al nacimiento"}, {"start": 566.3, "end": 572.3, "text": " de un continente, tambi\u00e9n haya esta oca\u00f1a, la famosa y cima convenci\u00f3n y est\u00e1 la villa"}, {"start": 572.3, "end": 576.78, "text": " del rosario que son fundamentales, no solamente para el proceso de independencia sino para"}, {"start": 576.78, "end": 581.34, "text": " el proceso de formaci\u00f3n de nosotros como est\u00e1 un nacional, o sea nosotros en la villa"}, {"start": 581.34, "end": 585.82, "text": " del rosario empezamos a adquirir, digamos como una personaria \u00fanica, la historia como"}, {"start": 585.82, "end": 589.62, "text": " pa\u00eds m\u00e1s o menos es la la Constituci\u00f3n de Villa del Rosario."}, {"start": 589.62, "end": 
595.5400000000001, "text": " Entonces, aqu\u00ed hay una cantidad de puntos neur\u00e1licos de nuestra independencia, de nuestra"}, {"start": 595.5400000000001, "end": 601.5, "text": " formaci\u00f3n como est\u00e1 un nacional, de nuestra formaci\u00f3n jur\u00eddica tambi\u00e9n ilegal, es una"}, {"start": 601.5, "end": 608.22, "text": " tierra de pensadores, poetas, escritores muy prol\u00edfica en ese sentido, tambi\u00e9n fue"}, {"start": 608.22, "end": 615.14, "text": " una regi\u00f3n muy importante en el tiempo del tabaco y en el cultivo del caf\u00e9 y malaga cuenta"}, {"start": 615.14, "end": 620.74, "text": " con una tradici\u00f3n de mujeres que han trabajado en la industria tabacalera desde hace much\u00edsimas"}, {"start": 620.74, "end": 627.3000000000001, "text": " much\u00edsimas d\u00e9cadas y aqu\u00ed hay una serie de poblaciones que son absolutamente fant\u00e1sticas"}, {"start": 627.3000000000001, "end": 631.3000000000001, "text": " y hay una serie de ferias y fiestas que anotos vamos a hablar de dos per\u00fa, es que hay muchas"}, {"start": 631.3000000000001, "end": 636.22, "text": " porque est\u00e1 el festival de la guabina y del t\u00edple en V\u00e9lez, Santander que es la capital"}, {"start": 636.22, "end": 641.1800000000001, "text": " mundial del bocadillo, o sea eso es importante y sobre todo con quecito."}, {"start": 641.1800000000001, "end": 648.6, "text": " El traje t\u00edpico de V\u00e9lez fue una de los detalles que m\u00e1s preciosamente se cuid\u00f3 y se"}, {"start": 648.6, "end": 652.22, "text": " llev\u00f3 a la representaci\u00f3n de encanto."}, {"start": 652.22, "end": 657.7, "text": " El personaje de Mirabel tiene un traje de la falda t\u00edpica de V\u00e9lez, est\u00e1 tambi\u00e9n"}, {"start": 657.7, "end": 663.1800000000001, "text": " las ferias del socorro que son muy antiguas en Santander la tierra comunera, la tierra"}, {"start": 663.18, "end": 668.66, "text": " de Manuel Beltran, est\u00e1 el carnaval de Oca\u00f1a, el carnaval del norte de Santander que fue"}, {"start": 668.66, "end": 673.9, "text": " creado por un barraquilleros de un Santanderiano y que es t\u00edpico de los oca\u00f1eros que tambi\u00e9n"}, {"start": 673.9, "end": 678.06, "text": " son gente muy muy orgullosa de su regi\u00f3n y de su historia."}, {"start": 678.06, "end": 683.06, "text": " Est\u00e1 digamos aqu\u00ed hay una cantidad de poblaciones de much\u00edsima importancia en esta y en otras"}, {"start": 683.06, "end": 689.3, "text": " \u00e9pocas Pamplona tambi\u00e9n fue absolutamente fundamental en tiempos coloniales, ellos fueron"}, {"start": 689.3, "end": 694.78, "text": " muy importantes en el tiempo prehisp\u00e1nico, fueron muy importantes en el tiempo colonial,"}, {"start": 694.78, "end": 699.54, "text": " fueron muy importantes en el tiempo de formaci\u00f3n de nosotros como estado nacional en el tiempo"}, {"start": 699.54, "end": 700.9399999999999, "text": " de la independencia."}, {"start": 700.9399999999999, "end": 707.2199999999999, "text": " Entonces toda nuestra historia est\u00e1 atravesada por los santanderes en las m\u00faltiples etapas"}, {"start": 707.2199999999999, "end": 709.4599999999999, "text": " en que nosotros lo hacemos vivido."}, {"start": 709.4599999999999, "end": 714.02, "text": " As\u00ed que cuando estamos hablando de tradici\u00f3n estamos hablando de una tradici\u00f3n muy importante"}, {"start": 714.02, "end": 720.3, "text": " para todo el pa\u00eds y que genera una cantidad de identidades propia y sentido de pertenencia"}, 
{"start": 720.3, "end": 750.26, "text": " de lo que significa ser Santanderiano empezando por el asiento manco."}, {"start": 750.26, "end": 772.02, "text": " \u00a1No vamos amezcar asiento personal!"}, {"start": 772.02, "end": 779.02, "text": " La rolla y la Antonia Santos y Jos\u00e9 Antonio Gal\u00e1n"}, {"start": 779.02, "end": 784.02, "text": " Nacer es haber visado la Tierra Santanderiana"}, {"start": 784.02, "end": 789.02, "text": " Bendita por via creado una libertad colombiana"}, {"start": 789.02, "end": 794.02, "text": " Nacer es haber visado la Tierra Santanderiana"}, {"start": 794.02, "end": 801.02, "text": " Bendita por via creado una libertad colombiana"}, {"start": 801.02, "end": 806.02, "text": " Cura de casta valiente que con la patria se inspira"}, {"start": 806.02, "end": 809.02, "text": " Como el juicio hasta la muerte"}, {"start": 809.02, "end": 811.02, "text": " Nuestro diogarc\u00eda romida"}, {"start": 811.02, "end": 818.02, "text": " Otorre la patria m\u00eda que el Magdalena refleja"}, {"start": 818.02, "end": 825.02, "text": " Con su gran refiler\u00eda de Nevarranca vermeja"}, {"start": 825.02, "end": 831.02, "text": " Son porros aqu\u00ed en Florida, ven es mala la dinero"}, {"start": 831.02, "end": 834.02, "text": " Contra la sola vida"}, {"start": 834.02, "end": 836.02, "text": " De esta presencia se arregli\u00f3"}, {"start": 836.02, "end": 841.02, "text": " Nacer es haber visado la Tierra Santanderiana"}, {"start": 841.02, "end": 846.02, "text": " Bendita por via creado la libertad colombiana"}, {"start": 846.02, "end": 851.02, "text": " Nacer es haber visado la Tierra Santanderiana"}, {"start": 851.02, "end": 862.02, "text": " Bendita por via creado la libertad colombiana"}, {"start": 862.02, "end": 867.02, "text": " De toda esta diversidad nos vamos a meter en dos espacios particulares"}, {"start": 867.02, "end": 873.02, "text": " Nos vamos a meter en las ferias y fiestas del carnaval del Oriente colombiano en Malaga"}, {"start": 873.02, "end": 875.02, "text": " Aqu\u00ed pod\u00edamos hablar infinitamente"}, {"start": 875.02, "end": 879.02, "text": " Porque aqu\u00ed queda barichara tierra de mis m\u00e1s profundos amores"}, {"start": 879.02, "end": 882.02, "text": " Que es uno de los pueblos patrimonios de la historia de la humanidad"}, {"start": 882.02, "end": 884.02, "text": " Una joya colonial"}, {"start": 884.02, "end": 888.02, "text": " Absolutamente maravillosa en potrar en las monta\u00f1as desde el siglo XVI"}, {"start": 888.02, "end": 893.02, "text": " Donde pasan todas las magias, todos los embrujos y todas las fascinaciones"}, {"start": 893.02, "end": 898.02, "text": " Que pueden pasar en estas tierras ya que en ese desdicas siempre con much\u00edsimo amor"}, {"start": 898.02, "end": 901.02, "text": " Cualquier historia en donde ellos aparezcan"}, {"start": 901.02, "end": 904.02, "text": " Pero todo lo que nos vamos para Malaga es decir"}, {"start": 904.02, "end": 910.02, "text": " La capital de la regi\u00f3n oriental que se conoce con el nombre de Garc\u00eda Robira"}, {"start": 910.02, "end": 915.02, "text": " La regi\u00f3n se llama Garc\u00eda Robira, la capital es Malaga"}, {"start": 915.02, "end": 922.02, "text": " Dentro de toda la regi\u00f3n que son los santanderes que en otra \u00e9poca fueron el antiguo gran santander"}, {"start": 922.02, "end": 926.02, "text": " Que despu\u00e9s se divienen Santander del Norte de Santander del Sur"}, {"start": 926.02, "end": 930.02, "text": " Nosotros hicimos un recorrido por las regiones"}, 
{"start": 930.02, "end": 936.02, "text": " En la temporada anterior a nuestras ferias y fiestas con el RTVC"}, {"start": 936.02, "end": 943.02, "text": " Revisiten nuestra historia de la regi\u00f3n Santanderiana para ver la formaci\u00f3n de estos dos departamentos"}, {"start": 943.02, "end": 950.02, "text": " Como se crearon y todo pero digamos geogr\u00e1ficamente tienen una profunda conexi\u00f3n"}, {"start": 950.02, "end": 954.02, "text": " Entonces dentro del gran santander est\u00e1 el norte y el sur"}, {"start": 954.02, "end": 959.02, "text": " Y dentro del norte y el sur hay tambi\u00e9n otro poco de regiones"}, {"start": 959.02, "end": 963.02, "text": " Por eso decimos que esto es un fractal porque tiene una cantidad de diversidades adentro"}, {"start": 963.02, "end": 965.02, "text": " Que son incre\u00edbles"}, {"start": 965.02, "end": 969.02, "text": " Una de estas regiones que es muy importante en la de Garc\u00eda Robira"}, {"start": 969.02, "end": 998.02, "text": " Y all\u00e1 en su capital hay una fiesta la m\u00e1s impresionante"}, {"start": 998.02, "end": 1010.02, "text": " No tiene compadre yo por eso digo que vivas sin entre y es como sale del coraz\u00f3n"}, {"start": 1010.02, "end": 1017.02, "text": " Malaga es una se\u00f1ora una de nuestra que como en la herencia"}, {"start": 1017.02, "end": 1033.02, "text": " De sigo te cantan diciendo, te quiero malaga"}, {"start": 1033.02, "end": 1039.02, "text": " Y seguimos con la magia de malaga"}, {"start": 1039.02, "end": 1046.02, "text": " Esta fiesta no est\u00e1 en internet para que ustedes vean que no toda la representaci\u00f3n del universo est\u00e1 en la red"}, {"start": 1046.02, "end": 1051.02, "text": " Y lo s\u00e9 porque en para nuestra investigaci\u00f3n no la encontramos en la red"}, {"start": 1051.02, "end": 1057.02, "text": " La encontramos en el amor, en la pasi\u00f3n, en el arte y en la maravilla"}, {"start": 1057.02, "end": 1067.02, "text": " De la aura y del maestro Luis Enrique Su\u00e1rez y de Fernanda Su\u00e1rez que nos dieron tanto amor en el relato de su fiesta"}, {"start": 1067.02, "end": 1071.02, "text": " Que nos transmitieron su alegr\u00eda, su arte"}, {"start": 1071.02, "end": 1081.02, "text": " O sea para esto est\u00e1 la gente, la representaci\u00f3n del arreto no es capaz de representar el universo entero en el que vivimos"}, {"start": 1081.02, "end": 1083.02, "text": " Eso es el nivel de chisme"}, {"start": 1083.02, "end": 1086.02, "text": " Esto es el resultado que van y nos cuentan de estas fiestas"}, {"start": 1086.02, "end": 1089.02, "text": " Siempre que nosotros vamos a contar estas historias"}, {"start": 1089.02, "end": 1096.02, "text": " Tenemos un coraz\u00f3n, un latido, una vida, alguien para quienes estas fiestas son la vida entera"}, {"start": 1096.02, "end": 1105.02, "text": " Y nos empiezan a contar malaga fue fundada por Jer\u00f3nimo de Aguayo en 1542"}, {"start": 1105.02, "end": 1112.02, "text": " Y las fiestas patronales, acuerdes de que las fiestas patronales siempre hacen alusi\u00f3n al Santo Patr\u00f3n"}, {"start": 1112.02, "end": 1119.02, "text": " Aqu\u00ed el Santo Patr\u00f3n es San Jer\u00f3nimo que es como el que marca las fiestas"}, {"start": 1119.02, "end": 1121.02, "text": " Se hacen durante el puente de reyes"}, {"start": 1121.02, "end": 1131.02, "text": " La mayor\u00eda de las fiestas que hemos contado se hacen durante los puentes de reyes, pues est\u00e1n el carnaval de negro, si blanco, si est\u00e1n en el de r\u00edo sucio, est\u00e1 a este"}, 
{"start": 1131.02, "end": 1143.02, "text": " Que es as\u00ed muy importante y aqu\u00ed vienen las familias y vienen las d\u00edas por aso, o sea mucha gente que en este momento no viven malaga, viene"}, {"start": 1143.02, "end": 1155.02, "text": " exclusivamente para estas fiestas como una cita con la alegr\u00eda y con la tradici\u00f3n, entonces ellos escogen a San Jer\u00f3nimo como patrono del pueblo"}, {"start": 1155.02, "end": 1164.02, "text": " Y a la vez esto cay\u00f3 del fundador del pueblo y empiezan a montar una tradici\u00f3n que ya casi tiene 100 a\u00f1os"}, {"start": 1164.02, "end": 1176.02, "text": " Y esto viene con un desfiles central de carrosas, pero las carrosas son hechas en materiales, ecol\u00f3gico, las carrosas son hechas tambi\u00e9n en papel mach\u00e9, o sea, alrededor de las carrosas"}, {"start": 1176.02, "end": 1183.02, "text": " Hay todo un trabajo art\u00edstico, artesanal, como sucede, digamos con nuestras otras historias"}, {"start": 1183.02, "end": 1192.02, "text": " Aqu\u00ed hay un trabajo meticuloso detallado, entonces vienen las carrosas y las carrosas son t\u00edpicos de su regi\u00f3n, como en el caso de concepci\u00f3n"}, {"start": 1192.02, "end": 1202.02, "text": " Uno de los municipios cercanos a Malaga donde la carrosa t\u00edpica representa las ruanas, esta rey en eso monta\u00f1osa, entonces dentro de nuestra historia de los pisos t\u00e9rmicos"}, {"start": 1202.02, "end": 1213.02, "text": " Aqu\u00ed hay climas fr\u00edos y climas templados, digamos es importante porque el imaginario de nuestro pa\u00eds caliente es cierto en muchos lados pero no en otros"}, {"start": 1213.02, "end": 1223.02, "text": " Entonces aqu\u00ed hay diferencias t\u00e9rmicas importantes por la cordillera que les cuento, por lo tanto los de concepci\u00f3n tienen como atuendo t\u00edpico la ruana"}, {"start": 1223.02, "end": 1238.02, "text": " Entonces aqu\u00ed vienen las carrosas y las carrosas tambi\u00e9n vienen no solamente de todos los municipios aleda\u00f1os porque esta fiesta llama a todos m\u00e1s de 25 municipios que est\u00e1n alrededor de Malaga"}, {"start": 1238.02, "end": 1253.02, "text": " vienen para la fiesta sino que Malaga misma tiene un mont\u00f3n de barrios y cada uno de esos barrios tiene sus carrosas, entonces todo el mundo va a llegar con las bandas musicales, con los trajes t\u00edpicos, con las carrosas"}, {"start": 1253.02, "end": 1266.02, "text": " y c\u00f3mo se acomodan, c\u00f3mo puedan, eso pueden llegar hasta hasta 15 personas en una sala de bajo como de orden esteras, todo el mundo viene y todo el mundo se acomodan, los reciben en las casas, en multitudes"}, {"start": 1266.02, "end": 1280.02, "text": " y esto es un gentil prepar\u00e1ndose para festejar en serio y como hemos visto que las ferias y fiestas tienen estos personajes como Jos\u00e9ito, Canavale, en Barranquilla, en Malaga est\u00e1 pericles"}, {"start": 1280.02, "end": 1289.02, "text": " y pericles es la encarrenaci\u00f3n y la m\u00e1xima autoridad de las fiestas y es el que gu\u00eda las carrosas y las comparsas como el director de la fiesta"}, {"start": 1289.02, "end": 1308.02, "text": " representa con un sombrero, con un cubilete y aqu\u00ed vienen todas clases de ventos, viene la fiesta de la gru, viene la feria gastron\u00f3mica, viene la feria ganadera, que es muy importante en la regi\u00f3n y es muy importante tambi\u00e9n en las festividades de las ferias del socorro, tiene m\u00e1s de 100 a\u00f1os de tradici\u00f3n y que ven\u00eda por el r\u00edo,"}, 
{"start": 1308.02, "end": 1324.02, "text": " viene tambi\u00e9n la gente de la capital caprina de Colombia de Capitan\u00e9jo, que tiene historias m\u00e1gicas incre\u00edbles de donde vienen las ovejas y las cabras, vienen las verbenas, llevan los grupos musicales, vienen la m\u00fasica popular, el ballenato, el meren,"}, {"start": 1324.02, "end": 1339.02, "text": " el meren, que es particularmente poderosa, porque ellos tienen lazos fuertes con Boyac\u00e1, de donde viene la carranja y tambi\u00e9n tienen lazos con manizales, de donde vienen los pasodobles,"}, {"start": 1339.02, "end": 1353.02, "text": " porque hay una identidad con la regi\u00f3n andalusa, que los emparenta con la beta espa\u00f1ola por el lado de manizales, esto para que ustedes vean el nivel de diversidad del que estamos hablando."}, {"start": 1353.02, "end": 1370.02, "text": " Entonces, aqu\u00ed vienen desde la carranja hasta los pasodobles y vienen todos los pueblos con sus verbenas y vienen los dulces, esta regi\u00f3n de dulces, entonces vienen las panuchas, los dulces de arroz y de coco y todos los dulces espec\u00edficos de malaga,"}, {"start": 1370.02, "end": 1388.02, "text": " lo que le da a la feria, un sabor delicioso en el dulce y en la gastronom\u00eda, en otro \u00e9poca llegada tambi\u00e9n un circo que recorre todo el pa\u00eds para las ferias, hay basares, hay eventos culturales, hay procesiones, se asanjeron de todo,"}, {"start": 1388.02, "end": 1408.02, "text": " de todo, o sea, literalmente, todo como embotica, porque aqu\u00ed hay un carnaval campesino que es s\u00faper importante, porque representa a los tejidos las mujeres tejeduras, el arte del tabaco, tiene carrosas, ese carnaval campesino es un punto de identidad muy importante ah\u00ed,"}, {"start": 1408.02, "end": 1428.02, "text": " es previo al carnaval del oriente y es un punto de identidad muy importante y de pertenencia, porque eso representa la tradici\u00f3n los tejidos, las mujeres tejedoras, el arte del tabaco, tambi\u00e9n las carrosas, o sea, es fundamental como en la reigambre de todo lo que se est\u00e1 representando ac\u00e1,"}, {"start": 1428.02, "end": 1446.02, "text": " tambi\u00e9n hay otro elemento que hemos visto en muchas otras fiestas y que aqu\u00ed tiene una importancia capital que son los matachines, se acuerda que tambi\u00e9n los vimos en R\u00edo Susio, tambi\u00e9n los hemos visto en muchas otras fiestas, del 16 de diciembre al 24, o sea, durante el tiempo de la novena,"}, {"start": 1446.02, "end": 1461.02, "text": " en toda la regi\u00f3n est\u00e1n los matachines, est\u00e1n hasta en Bucaramanga, que son las cirosas y todos los matachines salen al carnaval, pasa como con los cachaceros, se acuerda que en escadrillas de San Mart\u00edn,"}, {"start": 1461.02, "end": 1468.02, "text": " si usted le tiene miedo al cachacero, el cachacero, lo persigue toda la fiesta, bueno, lo mismo pasa con los matachines,"}, {"start": 1468.02, "end": 1484.02, "text": " con las funciones generan un poco de caos y p\u00e1nico festivo, porque van con una vejiga curada, que tambi\u00e9n no hemos visto en el carnaval de R\u00edo Susio, y le dan vejigasos a la gente, y la gente sale corriendo y hay un poco de recochas,"}, {"start": 1484.02, "end": 1503.02, "text": " generalmente ellos est\u00e1n cubiertos, entonces el personaje, el artista, el inspirador, el maestro, Luis Enrique Su\u00e1rez, que nos cuenta estas historias, es una persona que tiene toda su vida comprometida en estos carnavales y en estas ferias, y resulta que \u00e9l es 
el matac\u00edn conocido,"}, {"start": 1503.02, "end": 1517.02, "text": " porque \u00e9l se pinta, todo el mundo sabe qui\u00e9n es, la gracia de los matachines, lo mismo que con las marimondas en la carnaval de Barrac\u00eda, es que t\u00fa no sabes qui\u00e9nes son, entonces pueden ser tus grandes amigos o tus padres o tus t\u00edos los est\u00e1n basilando en la fiesta,"}, {"start": 1517.02, "end": 1532.02, "text": " \u00e9l no se pone las m\u00e1scaras que son propias de los matachines y lo que se pinta, entonces es uno de los personajes como el matach\u00edn conocido, que empez\u00f3 a pintarse a\u00f1os atr\u00e1s, y al principio cuando se pintaba eso no era bien visto,"}, {"start": 1532.02, "end": 1543.02, "text": " hasta que le introdujo esta forma de pintura y de arte dentro del carnaval, dentro de toda la manera como los personajes modifican, crean y construyen las fiestas,"}, {"start": 1543.02, "end": 1555.02, "text": " entonces esta fiesta ha logrado mantenerse por much\u00edsimo tiempo y ha logrado atravesar todo esto, este retorio sufrir una violencia muy brava durante los a\u00f1os 50,"}, {"start": 1555.02, "end": 1567.02, "text": " ha sufrido muchas formas de conflicto porque por a\u00f1o han llegado todas, todas las formas de conflicto que hemos tenido que vivir, se han presentado en esta regi\u00f3n y se han presentado de manera muy dura,"}, {"start": 1567.02, "end": 1584.02, "text": " y la fiesta sigue, porque resulta que entre nosotros la fiesta sigue, es una de las formas m\u00e1s grandes de resile\u00edn, si a que nosotros tenemos, hemos vivido nuestra historia entre las fiestas y las parrandas, y eso no es defini, nos defini tambi\u00e9n en una gran medida."}, {"start": 1584.02, "end": 1605.02, "text": " Yo traigo la gozadera, que les saca chispa a tus caderas, yo traigo la gozadera, que ya el moro tal la pexa de ella, yo vengo de feinte, ganando mi parranda, yo vengo a divertir de cinte, como yo mata, saca el espazon, que soy ya no te falta, y le meto hasta los yo ac\u00e1 con tu amiguita,"}, {"start": 1605.02, "end": 1615.02, "text": " y me baire de mi fuego, yo meto la cantena, y le meto la energ\u00eda, hasta la javuela, yo rompo la pi\u00f1ata con tantas gozaderas, le saca el dolor, tan negra la pensadera."}, {"start": 1615.02, "end": 1644.02, "text": " Yo traigo la gozadera, que les saca chispa a tus caderas, yo traigo la gozadera, que ya el moro tal la pensadera."}, {"start": 1644.02, "end": 1664.02, "text": " Yo traigo la gozadera, que les saca chispa a tus caderas, yo vengo de feinte, ganando mi parranda, yo vengo a divertir de cinte, como yo mata, saca el espazon, que soy ya no te hace falta, y le meto hasta los yo ac\u00e1 con tu amiguita."}, {"start": 1664.02, "end": 1679.02, "text": " Esta provincia ha tra\u00eddo migraciones de muchos partes y muchas dias curas, y hace que este carnaval sea la alegr\u00eda, el orgullo, la felicidad,"}, {"start": 1679.02, "end": 1694.02, "text": " para la gente de M\u00e1laga, que tiene una identidad muy particular, que tiene un asento diferente, al asento caracter\u00edstico Santanderiano, que es de una manera muy particular en el sur y un poquito m\u00e1s golpea y tu en el norte."}, {"start": 1694.02, "end": 1710.02, "text": " Entonces es un asento que no existe en el resto de Colombia, porque tambi\u00e9n a nosotros nos definen los asentos, en todas estas ferias y fiestas las ferias tambi\u00e9n tienen asentos, segundo donde se hagan, tienen al asento pastuzo,"}, {"start": 1710.02, "end": 1737.02, "text": " los 
carnavales de negro y blanco, o el asento pa\u00eds en la feria de las flores, o el valluno en la feria de Cali, o el barranquilleron, el carnaval de barranquilla, aqu\u00ed est\u00e1n los diferentes asentos Santanderianos, y la gente de M\u00e1laga tiene un asento ligeramente distinto, y tiene particularidades en su cultura y hace parte del mosaico impresionante, que es Santandero."}, {"start": 1741.02, "end": 1744.02, "text": " Aqu\u00ed empieza el espacio comercial."}, {"start": 1753.02, "end": 1758.02, "text": " Todas las regiones de nuestro pa\u00eds han pasado por la serie de ferias y fiestas de Colombia."}, {"start": 1758.02, "end": 1776.02, "text": " En este cap\u00edtulo, el turno es para Santander, y el oriente colombiano, se\u00f1al memoria tambi\u00e9n guarda im\u00e1genes, audios y voces de Santander, y en todas las regiones del pa\u00eds nos invitamos a conocer estos archivos en la p\u00e1gina www.se\u00f1almemoria.com"}, {"start": 1776.02, "end": 1805.02, "text": " Y despu\u00e9s de esto nos vamos con una feria que encontraste con toda la tradici\u00f3n, la antig\u00fcedad, que tiene el carnaval del oriente colombiano en M\u00e1laga, es una fiesta que tiene solamente 6 a\u00f1os, de construye y de creada, pero es muy importante,"}, {"start": 1805.02, "end": 1834.02, "text": " porque es el presente, es el futuro, es la voz de las nuevas generaciones, nosotros normalmente en las fiestas hemos hablado de tradiciones, que se han elaborado a lo largo de muchas d\u00e9cadas a veces siglos, y hemos creado el imaginario de un pa\u00eds a trav\u00e9s de las fiestas, que se reconoce su ascendencia en su pasado, y en la conservaci\u00f3n,"}, {"start": 1834.02, "end": 1859.02, "text": " de toda esa ancestralidad, hasta llegar al presente como un testimonio de su paso por la cultura y por la tierra donde habitan, eso ha sido, digamos, como una constante en todas las recorridos de las fiestas que hemos hecho, este no, este nuevo, se lo acaban de inventar, pero est\u00e1 fant\u00e1stico,"}, {"start": 1859.02, "end": 1888.02, "text": " es el festival de la tigra, y aqu\u00ed nos metemos con dos grandes, Edson Belandia y Adri\u00e1n Aliscano, estos son palabras mayores, son palabras mayores porque esta gente est\u00e1 construyendo una narrativa de pa\u00eds, a partir de unos ejes, nuevos y distintos, y esta gente est\u00e1 comprendiendo el lenguaje del futuro, y esta entendiendo que los tiempos cambian,"}, {"start": 1888.02, "end": 1909.02, "text": " que las historias generan la necesidad de nuevas lecturas, de nuevas miradas, ellos son esta nueva mirada con un talento art\u00edstico absolutamente desbordante, con una capacidad de canto y de, ah\u00ed s\u00ed como dicen las Santanderianos, de cantar las verdades mano, yo le canto una verdad, ellos cantan las verdades,"}, {"start": 1909.02, "end": 1935.02, "text": " si, de los creadores de los corredos mexicanos que le cantaban, a usted una verdad, bueno esta gente cantaba verdad eso, el festival de naci\u00f3n en el 2017, f\u00edjate que no tienen ni 100 a\u00f1os, de 50, 80, naci\u00f3n en el 2017, los festivales tambi\u00e9n pueden hacer, y nosotros tambi\u00e9n podemos ser testigos de nuevos festivales que est\u00e1n haciendo, y que llegaran a construir imaginarios que no podemos siquiera pensar ahorita cuando arrancan,"}, {"start": 1935.02, "end": 1954.02, "text": " pero esto es un proyecto colaborativo y es un proyecto colectivo, y se cre\u00f3 para reactivar espacios de cultura, y intercambio en pie de cuesta, Santander, esto es en pie 
de cuesta, el otro es en malaga y esto es en pie de cuesta y eso parece, parece regiones muy diferentes, pero seguimos hablando de Santander,"}, {"start": 1954.02, "end": 1972.02, "text": " entonces el festival empieza y se origina en un gigantesco fracaso, como pasan muchas cosas en la vida, resulta que durante una protesta de la minga ind\u00edgena en el 2016,"}, {"start": 1972.02, "end": 1989.02, "text": " Edson Belandia y el combo de m\u00fasicos que lo acompa\u00f1aban decidieron hacer una fiesta para apoyar la protesta, y se llama m\u00fasica pinga para la minga, y se fueron para all\u00e1 pero eso sali\u00f3 p\u00e9simo pero mal,"}, {"start": 1989.02, "end": 2007.02, "text": " porque lleg\u00f3 un parche que era distinto al de la minga, que tambi\u00e9n estaba en otra protesta, o sea la minga estaba protestando, lleg\u00f3 otro combo que tambi\u00e9n estaba protestando, Belandia iba all\u00e1 a cantarle a los que estaban protestando, y resulta que el otro combo,"}, {"start": 2007.02, "end": 2020.02, "text": " que tambi\u00e9n estaba en la protesta, pero no era parte de la minga, bloque\u00f3 con unas tractomulas la entrada al concierto, nadie lleg\u00f3 al concierto, se arm\u00f3 una pelea con la polic\u00eda, los m\u00fasicos salieron corriendo de ah\u00ed con sus instrumentos con su m\u00fasica,"}, {"start": 2020.02, "end": 2046.02, "text": " y aquello sali\u00f3 p\u00e9simo espantosamente mal, as\u00ed que Belandia qued\u00f3 muy aburrido, yo no, pero yo s\u00ed quer\u00eda hacer un festival o un pleno, la idea era buena, no pudo salir peor, pero la idea era buena, entonces dije no, yo me voy a inventar un festival bien bacano, con otros par\u00e1metros, autogestionado, auto surgido por nosotros, creado por ellos mismos, a mano,"}, {"start": 2046.02, "end": 2073.02, "text": " y aqu\u00ed es como una parte fundamental, un festival que honre la naturaleza, los p\u00e1ramos, la fauna, y la gente de pie de cuesta, eso son como los puntos nodales del festival, y le pone la tigra, la tigra en honor al jaguar, a la jaguara, a la tigra, no solamente por ser nuestro t\u00f3tem, no solamente por ser el felino m\u00e1s importante, que todav\u00eda est\u00e1 en los montes, que ruge,"}, {"start": 2073.02, "end": 2094.02, "text": " sino porque tambi\u00e9n es una forma de reivindicar el monte, el monte como una realidad de nuestra geograf\u00eda, cuando yo les digo que Santander es muy variado, es porque tambi\u00e9n hay toda una tradici\u00f3n de hacha, de desmontar para crear ciudades que ha generado, todo un imaginario,"}, {"start": 2094.02, "end": 2111.02, "text": " y eso tambi\u00e9n existe, y para ellos ese imaginario es muy importante como lo fue en la colonizaci\u00f3n antioque\u00f1a, eso es una vertiente de pa\u00eds que se ha enfrentado a una naturaleza salvaje y la ha dominado,"}, {"start": 2111.02, "end": 2132.02, "text": " y eso es una lectura, esta es otra lectura totalmente diferente en donde el monte no es un obst\u00e1culo para la creaci\u00f3n de ciudades, sino una riqueza para la reivindicaci\u00f3n de cultura, todo esto est\u00e1 pasando en el mismo Santander, porque los imaginarios aqu\u00ed se entrecruzan de maneras incre\u00edbles,"}, {"start": 2132.02, "end": 2150.02, "text": " entonces resulta que el combo de ellos reivindica el monte, y tambi\u00e9n la vereda aleda\u00f1a a pie de cuesta que se llama la tigra, por eso se llama el festival de la tigra, y lo montaron, y llegaron los m\u00fasicos, y los m\u00fasicos cada uno se autogestiona su llegada all\u00e1,"}, 
{"start": 2150.02, "end": 2165.02, "text": " o sea no tiene patroc\u00ednios, y en ese sentido tiene como ese esp\u00edritu del festival de Wuchtock, que era una autop\u00eda, y que no era patrocinado por nadie, sino por el intento de crear un imaginario a trav\u00e9s de la m\u00fasica, es un poco de esa manera,"}, {"start": 2165.02, "end": 2180.02, "text": " entonces ac\u00e1, ac\u00e1 cual llega y se hace cargo de su propio parche, lo cual ya le da una caracter\u00edstica muy particular, y hace que el hecho de que siga sobreviviendo, se amila groso, porque implica mucha fe y mucha credibilidad de la gente,"}, {"start": 2180.02, "end": 2203.02, "text": " entonces resulta que este no ha parado, y no logr\u00f3 parar ni siquiera con la pandemia, porque tiene una caracter\u00edstica distinta a todos los dem\u00e1s festivales, y es que ocurre cuando puede, o sea no es el seis de enero, no es antes de la cuaresma, no es el d\u00eda del santo o del patrono,"}, {"start": 2203.02, "end": 2217.02, "text": " no tiene esa filiaci\u00f3n a una fecha, a un momento del a\u00f1o, en donde se convoca a toda la comunidad para entrar en la celebraci\u00f3n de la fiesta, sino que se hace en el a\u00f1o, pero cuando se puede,"}, {"start": 2217.02, "end": 2241.02, "text": " entonces esto lo hace m\u00f3vil, y lo hace tambi\u00e9n tremendamente flexible, resulta que lo hicieron en febrero en el 2020 y despu\u00e9s en marzo vino la pandemia, y el confinamiento, entonces el festival por un pelito, por un pelito alcanz\u00f3 a ocurrir, y bueno, y alcanz\u00f3 a ver toda la gozadera antes de que veniera,"}, {"start": 2241.02, "end": 2266.02, "text": " todo esa penuridad de la pandemia, el festival por su propia naturaleza es eclecrico, es decir, no tiene una marca de identidad, como todos los dem\u00e1s festivales que hemos visto, que tienen una marca de identidad y de pertenencia, y a pesar de las diversidades, tienen digamos como un patro en una medida, un enfoque dentro de la m\u00fasica en general,"}, {"start": 2266.02, "end": 2285.02, "text": " este es distinto en eso, entonces aqu\u00ed cada todo el mundo, entonces aqu\u00ed vienen la carranca, viene el punk, la m\u00fasica de c\u00e1mara, los coros, el metal m\u00e1s bravo, eso permite la convergencia de todos los ritmos, como digamos como con el tiempo lleg\u00f3 a ser local parque,"}, {"start": 2285.02, "end": 2299.02, "text": " si que al principio tocaba todo el mundo en un sitio distinto y ahora todo el mundo puede tocar, entonces esto llega a ser con todos los ritmos de toda la latitud, la \u00fanica caracter\u00edstica, el \u00fanico recorre que s\u00ed,"}, {"start": 2299.02, "end": 2318.02, "text": " que s\u00ed, para montarse en esa tarima es una calidad musical enorme, o sea, hay una curadur\u00eda musical muy poderosa, y tiene otra caracter\u00edstica que la vimos en el Green Moon Festival, y es que esto est\u00e1 acompa\u00f1ado de talleres de cine, de lectura, de medio ambiente,"}, {"start": 2318.02, "end": 2344.02, "text": " el Green Moon Festival cuando nosotros lo vimos en la temporada pasada, tambi\u00e9n tiene un acompa\u00f1amiento de conocimiento y de cultura de la isla, es una manera en que los raizares muestran toda su tradici\u00f3n, su cultura, y lo hacen de una manera a trav\u00e9s lo mismo de talleres, de una cantidad de explicaciones, o el sentido en que el jamming ten\u00eda tambi\u00e9n un jamming acad\u00e9mico,"}, {"start": 2344.02, "end": 2365.02, "text": " para explicar todo el sentido, digamos, hist\u00f3rico anticolonial de 
muchas cosas que el Reggae significa como m\u00fasica, pues eso tambi\u00e9n tiene una serie de talleres, y tienen una serie de actividades cinematogr\u00e1ficas, po\u00e9ticas, literarias, de todo, y es una manera de ayudar a la gente a ense\u00f1ar,"}, {"start": 2365.02, "end": 2387.02, "text": " aqu\u00ed hay una pedagog\u00eda del medio ambiente, porque la clave del festival es el medio ambiente, entonces hay una pedagog\u00eda para ayudar a salvar el agua, que siempre es un punto grave en la regi\u00f3n, es una regi\u00f3n donde las aguas hay que cuidar la de mucha gente, hay mucha sequ\u00eda, hay estorraques, hay unas formaciones incre\u00edbles, hay que salvar el agua,"}, {"start": 2387.02, "end": 2408.02, "text": " hay cineforos, hay taller de m\u00fasica, que va acompa\u00f1ando, digamos, como con una lectura paralela, para entender el sentido del festival, que es un festival musical, pero que tiene esta acompa\u00f1amiento, digamos, art\u00edstico en el sentido del cine de los talleres y todo como el Greenwood,"}, {"start": 2408.02, "end": 2420.02, "text": " que es cuando un mensaje se da a trav\u00e9s de muchas barcientes, eso tambi\u00e9n es parte del festival de la tigra, que es musical, pero que tiene todos estos elementos."}, {"start": 2438.02, "end": 2455.02, "text": " Carmen, por eso no voy a olvidarte, si ahora te llevo dentro, voy dentro de mi fecho, Carmen, a ella la saredeno, a ella la saredeno, a ella la saredeno."}, {"start": 2469.02, "end": 2494.02, "text": " Carmen, pero me queda tu retrato, y el lindo pa\u00f1uelito blanco, y el riso de tu cabello, Carmen, y el riso de tu cabello, Carmen, y el riso de tu cabello, Carmen, y el riso de tu cabello."}, {"start": 2494.02, "end": 2512.02, "text": " Carmen, y moronita con sent\u00eda, tu ya eres para ver mi vida, tu ya en la saredeno, Carmen, tu ya en la saredeno, Carmen, tu ya en la saredeno, a ella la saredeno."}, {"start": 2512.02, "end": 2527.02, "text": " Este festival tiene un compromiso con el futuro, porque bueno, yo les cont\u00e9 que el siglo XX se acab\u00f3, \u00bfno? Es que en el siglo XX una de las cosas son distintas,"}, {"start": 2527.02, "end": 2542.02, "text": " hay nuevas preocupaciones, el cambio clim\u00e1tico, la construcci\u00f3n de espacios de paz, hay nuevas lecturas, una nueva generaci\u00f3n ha llegado al planeta, y la nueva generaci\u00f3n que llega al planeta,"}, {"start": 2542.02, "end": 2559.02, "text": " viene a evitar un planeta en peligro, y viene a evitar un planeta del que se tiene que empuera rapidito, para poder evitar en \u00e9l, porque la cosa es grave. 
Entonces, esto es la nueva generaci\u00f3n, la importancia de traer para nosotros el festival de la Tigra,"}, {"start": 2559.02, "end": 2584.02, "text": " y la otra parte del reconocimiento art\u00edstico, con toda admiraci\u00f3n y respeto al trabajo hecho en la Tigra y a Adriana Liscano, es porque hay una nueva generaci\u00f3n con una nueva historia, y esa nueva generaci\u00f3n con esa nueva historia, con esa nueva idea de hacer pol\u00edtica, de hacer paz, de hacer ecolog\u00eda, de construir imaginarios, de crear otras formas de habitar el planeta,"}, {"start": 2584.02, "end": 2604.02, "text": " y viene es una tradici\u00f3n que se construye dentro de nuestras ferias y fiestas, entonces as\u00ed como hablamos de las grandes tradiciones, hablamos de las que se est\u00e1n formando, as\u00ed como hablamos de los ancestros que nos han tra\u00eddo la fiesta, hablamos de los presentes que nos est\u00e1n construyendo las tradiciones, y ellos tambi\u00e9n ser\u00e1n ancestros,"}, {"start": 2604.02, "end": 2627.02, "text": " que a partir de aqu\u00ed tendr\u00e1n una mirada de planeta y de medio ambiente muy distinta, porque esto genera formas de conciencia nuevas. Entonces, en ese sentido es distinto a los dem\u00e1s festivales, por lo que les digo, porque es autogestionado, porque no tiene una fecha fija, porque se hace cuando se puede,"}, {"start": 2627.02, "end": 2654.02, "text": " porque canta otro tipo de cosas, porque tiene otro tipo de m\u00fasicas, pero las ferias y fiestas tienen eso, la posibilidad de ser diversas, de recoger tradiciones o de construirlas, de contarnos nuevas historias o de revelarnos historias de or\u00edgenes, de traer a todo el mundo y convocarlo, para una gozadera y una catarsis,"}, {"start": 2654.02, "end": 2682.02, "text": " siempre cuando empez\u00e1bamos con nuestras ferias y fiestas en la temporada pasada habl\u00e1bamos de lo apol\u00edneo y de lo dionis\u00edaco como espacios de la psiquis humana, donde la gente se pone por un lado a la construcci\u00f3n que hac\u00edan los griegos de la geometr\u00eda, de la matem\u00e1tica, de la redondez de la tierra, y por el otro lado, la fiesta total de la dionis\u00edaca, que eran los tres meses dedicados a Dioniso, al que los romanos llamaban Baco, el dios del vino."}, {"start": 2682.02, "end": 2706.02, "text": " Las fiestas todas son para lo mismo, para gozar, para Dioniso, para la dionis\u00edaca, para la fuerza vivificante, para el alma, para la alegr\u00eda, el carnaval, dec\u00edan los de M\u00e1laga, alimenta nuestra vida y nos da sentido a la vida, y lo dec\u00edan con una alegr\u00eda que se proyectaba en los ojos, que les hace brillar los ojos cuando hablan de su carnaval,"}, {"start": 2706.02, "end": 2732.02, "text": " el festival de la tigra nos trae nuevas miradas y nos trae nuevos ojos de lo mismo, la gozadera, porque en unos y en otros vamos a pasarla bueno, con diferentes s\u00edmbolos, con diferentes alegr\u00edas, pero con la idea de que el festival es la manera como le elevamos a la naturaleza nuestro canto por estar vivos."}, {"start": 2732.02, "end": 2760.02, "text": " Entonces, dentro de toda la diversidad, que significa esta regi\u00f3n de Santander, donde salen tantas miradas de nuestra historia y nuestra geograf\u00eda, nosotros recorremos estas dos fiestas, como formas de contraste entre todas las diversidades, de esta regi\u00f3n que recoge, casi que en un fresco, como un mosaico, de todo lo que somos,"}, {"start": 2760.02, "end": 2781.02, "text": " y a la cual rendimos un enorme tributo desde 
la m\u00fasica desde el alma y desde la geograf\u00eda, con esto relato de dos de sus much\u00edsimas fiestas, de sus much\u00edsimas alegr\u00edas, de esta gente escarpada en las monta\u00f1as, en los paramos, en los r\u00edos y en los ca\u00f1ones."}, {"start": 2781.02, "end": 2804.02, "text": " Cuando es or\u00edtica en agosto, s\u00ed, pero, pero or\u00edtica por ejemplo, este a\u00f1o es el 8 de agosto, \u00bfqui\u00e9n es bien? Pues imag\u00ednate que va a haber una homenaje a Pablo Gajenasus, que fue como uno de los arquitectivos de la canci\u00f3n protesta y atita pulido y viene gente, ver\u00eda la ojeda, haco\u00f3veles y la mam\u00e1 negra,"}, {"start": 2804.02, "end": 2831.02, "text": " no amorales, los rolling ruinas que son, pues, absolutamente memorables, se va a estiamp\u00e9\u00f1a, en que le basura y sea natural esa suprema, en E. Hardham, en Sample Paramu, E. Fox, alto bajo, latins\u00f3n, Adriana Liscano, evidentemente, y Edson Belandia, papacitos ambos, viene, Sagraf, lo convia, de Argentina tambi\u00e9n viene ese que el Susterman, con su ejemplo de abatucadas,"}, {"start": 2831.02, "end": 2855.02, "text": " viene Lucio Feuillet, la panela, la alvariza, el quinteto coral universitario de la Wyss, y E. Karenjana, o sea, todos son combos de gente que viene a traer nuevas historias, nuevas m\u00fasica, nuevos sonidos, nuevas formas de entender el mundo, que es tambi\u00e9n parte de la riqueza de todas las ferias y las fiestas, que nos asoman,"}, {"start": 2855.02, "end": 2874.02, "text": " a los imaginarios y a los entendimientos de regiones, de mundos, de culturas, de alegr\u00eda, y nos enriquecen a todos, dentro de esta relato de pa\u00eds que es tan maravilloso de ferias y fiestas, que siempre es un honor contar,"}, {"start": 2874.02, "end": 2891.02, "text": " porque nos est\u00e1 metiendo con la parte m\u00e1s amorosa, dulce y fant\u00e1stica y cr\u00edtica y maravillosa y sat\u00edrica de pa\u00eds, que son las ferias y fiestas, y que siempre es asomarse a los universos m\u00e1s vacanos que tenemos nosotros."}, {"start": 2891.02, "end": 2910.02, "text": " Entonces, desde los espacios de los paramos del gorribo, del ca\u00f1\u00f3n del chica mocha, de la ecolog\u00eda, de malaga de su alegr\u00eda, de las compasas, de las teciduras, de los tiempos del tabaco, de la cantidad de gente metida en las casas de malaga para gozar,"}, {"start": 2910.02, "end": 2925.02, "text": " de los matachines, de la protecci\u00f3n al paramo de los mensajes ecol\u00f3gicos, de Belanda y la tigre a Adriana, de todas las historias que ellos cuentan de todos los m\u00fasicos que vienen a acompa\u00f1arlos,"}, {"start": 2925.02, "end": 2944.02, "text": " y desde todo esta musicalidad alucinante, que les estamos sugiriendo contando y imaginando en este espacio tan fant\u00e1stico de asomarse a las tierras maravillosas de Santander, en la narraci\u00f3n de Ana Uribe, y para ustedes, feliz domingo."}, {"start": 2955.02, "end": 2981.02, "text": " Este podcast fue posible gracias al equipo de la Casa de la Historia."}, {"start": 2981.02, "end": 3002.02, "text": " Y a Nathu\u00e1rez, Milena Beltr\u00e1n, Arturo Jim\u00e9nez Pi\u00f1a, grabado en los gatos estudio, la adici\u00f3n y la musicalizaci\u00f3n de Eduardo Corredor Fonseca, de Rueda Sonido, y contamos con Daniel Shruts, que est\u00e1 con nosotros acompa\u00f1\u00e1ndonos de Aquena del Ante, y que lo introducimos en nuestro relato con mucha alegr\u00eda."}, {"start": 3002.02, "end": 3023.02, "text": " Agradecimientos muy especiales para 
este relato a Laura Kasman, a los maestros, Luis Enrique Su\u00e1rez, Fernanda Su\u00e1rez, Artistas de M\u00e1laga y de la provincia de Garc\u00eda Robira, y a Edson Belandia y Adriana Liscano, en pie de cuesta, con mucho cari\u00f1o se les agradece a todos ellos su amor y su relato."}, {"start": 3023.02, "end": 3032.02, "text": " Y siempre con la ayuda fuerte y poderosa, de Santiago Espinoza Uribe y Laura Rojas Aponte, del podcast Cosas de Internet."}, {"start": 3053.02, "end": 3082.02, "text": " Y que yo no puedo ver con como vos desquietos, hay yo quiero que te quede, que me lleve de todos, donde no me encuentre, un labo sin ego, donde rompe el tiempo en ser verdad de ser silenzo, donde las acciones se vinieron y van a ser."}, {"start": 3082.02, "end": 3087.02, "text": " \u00a1Esto es tu mel...!"}]