Change of basis | Chapter 13, Essence of linear algebra
If I have a vector sitting here in 2D space, we have a standard way to describe it with coordinates. In this case, the vector has coordinates (3, 2), which means going from its tail to its tip involves moving 3 units to the right and 2 units up. Now, the more linear-algebra-oriented way to describe coordinates is to think of each of these numbers as a scalar, a thing that stretches or squishes vectors. You think of that first coordinate as scaling i-hat, the vector with length 1 pointing to the right, while the second coordinate scales j-hat, the vector with length 1 pointing straight up. The tip-to-tail sum of those two scaled vectors is what the coordinates are meant to describe. You can think of these two special vectors as encapsulating all of the implicit assumptions of our coordinate system. The fact that the first number indicates rightward motion, that the second one indicates upward motion, exactly how far a unit of distance is, all of that is tied up in the choice of i-hat and j-hat as the vectors which our scalar coordinates are meant to actually scale. Any way to translate between vectors and sets of numbers is called a coordinate system, and the two special vectors i-hat and j-hat are called the basis vectors of our standard coordinate system. What I'd like to talk about here is the idea of using a different set of basis vectors. For example, let's say you have a friend, Jennifer, who uses a different set of basis vectors, which I'll call b1 and b2. Her first basis vector, b1, points up and to the right a little bit, and her second vector, b2, points left and up. Now take another look at that vector that I showed earlier, the one that you and I would describe using the coordinates (3, 2), using our basis vectors i-hat and j-hat. Jennifer would actually describe this vector with the coordinates (5/3, 1/3). 
What this means is that the particular way to get to that vector using her two basis vectors is to scale b1 by 5/3, scale b2 by 1/3, then add them both together. In a little bit, I'll show you how you could have figured out those two numbers, 5/3 and 1/3. In general, whenever Jennifer uses coordinates to describe a vector, she thinks of her first coordinate as scaling b1, the second coordinate as scaling b2, and she adds the results. What she gets will typically be completely different from the vector that you and I would think of as having those coordinates. To be a little more precise about the setup here, her first basis vector, b1, is something that we would describe with the coordinates (2, 1), and her second basis vector, b2, is something that we would describe as (-1, 1). But it's important to realize that from her perspective, in her system, those vectors have coordinates (1, 0) and (0, 1). They are what define the meaning of the coordinates (1, 0) and (0, 1) in her world. So, in effect, we're speaking different languages. We're all looking at the same vectors in space, but Jennifer uses different words and numbers to describe them. Let me say a quick word about how I'm representing things here. When I animate 2D space, I typically use this square grid. But that grid is just a construct, a way to visualize our coordinate system, and so it depends on our choice of basis. Space itself has no intrinsic grid. Jennifer might draw her own grid, which would be an equally made-up construct; it's nothing more than a visual tool to help follow the meaning of her coordinates. Her origin, though, would actually line up with ours, since everybody agrees on what the coordinates (0, 0) should mean. It's the thing that you get when you scale any vector by 0. But the direction of her axes and the spacing of her grid lines will be different, depending on her choice of basis vectors. So, after all this is set up, a pretty natural question to ask is how we translate between coordinate systems. 
If, for example, Jennifer describes a vector with coordinates (-1, 2), what would that be in our coordinate system? How do you translate from her language to ours? Well, what her coordinates are saying is that this vector is -1 times b1 plus 2 times b2. And from our perspective, b1 has coordinates (2, 1), and b2 has coordinates (-1, 1). So, we can actually compute -1 times b1 plus 2 times b2 as they're represented in our coordinate system. And working this out, you get a vector with coordinates (-4, 1). So, that's how we would describe the vector that she thinks of as (-1, 2). This process of scaling each of her basis vectors by the corresponding coordinate of some vector, then adding them together, might feel somewhat familiar. It's matrix-vector multiplication, with a matrix whose columns represent Jennifer's basis vectors in our language. In fact, once you understand matrix-vector multiplication as applying a certain linear transformation, say by watching what I consider to be the most important video in this series, chapter 3, there's a pretty intuitive way to think about what's going on here. A matrix whose columns represent Jennifer's basis vectors can be thought of as a transformation that moves our basis vectors, i-hat and j-hat, the things we think of when we say (1, 0) and (0, 1), to Jennifer's basis vectors, the things she thinks of when she says (1, 0) and (0, 1). To show how this works, let's walk through what it would mean to take the vector that we think of as having coordinates (-1, 2) and apply that transformation. Before the linear transformation, we're thinking of this vector as a certain linear combination of our basis vectors, -1 times i-hat plus 2 times j-hat. And the key feature of a linear transformation is that the resulting vector will be that same linear combination, but of the new basis vectors: -1 times the place where i-hat lands plus 2 times the place where j-hat lands. 
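This translation is just a matrix-vector product. Here is a minimal sketch in NumPy, using the basis vectors and the example vector from above:

```python
import numpy as np

# Change of basis matrix: its columns are Jennifer's basis vectors
# b1 = (2, 1) and b2 = (-1, 1), written in our coordinates.
A = np.array([[2.0, -1.0],
              [1.0,  1.0]])

# The vector Jennifer describes as (-1, 2), in her language.
v_jennifer = np.array([-1.0, 2.0])

# Matrix-vector multiplication computes -1 * b1 + 2 * b2,
# translating her description into our language.
v_ours = A @ v_jennifer
print(v_ours)  # [-4.  1.]
```

The same matrix works for any vector written in her coordinates, since the product is exactly the linear combination of her basis vectors described in the text.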
So what this matrix does is transform our misconception of what Jennifer means into the actual vector that she's referring to. I remember that when I was first learning this, it always felt kind of backwards to me. Geometrically, this matrix transforms our grid into Jennifer's grid, but numerically, it's translating a vector described in her language to our language. What made it finally click for me was thinking about how it takes our misconception of what Jennifer means, the vector we get using the same coordinates but in our system, then it transforms it into the vector that she really meant. What about going the other way around? In the example I used earlier this video, when I had the vector with coordinates (3, 2) in our system, how did I compute that it would have coordinates (5/3, 1/3) in Jennifer's system? You start with that change of basis matrix that translates Jennifer's language into ours, then you take its inverse. Remember, the inverse of a transformation is a new transformation that corresponds to playing that first one backwards. In practice, especially when you're working in more than two dimensions, you'd use a computer to compute the matrix that actually represents this inverse. In this case, the inverse of the change of basis matrix that has Jennifer's basis as its columns ends up working out to have columns (1/3, -1/3) and (1/3, 2/3). So, for example, to see what the vector (3, 2) looks like in Jennifer's system, we multiply this inverse change of basis matrix by the vector (3, 2), which works out to be (5/3, 1/3). So that, in a nutshell, is how to translate the description of individual vectors back and forth between coordinate systems. The matrix whose columns represent Jennifer's basis vectors, but written in our coordinates, translates vectors from her language into our language. And the inverse matrix does the opposite. But vectors aren't the only thing that we describe using coordinates. 
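The round trip described above can be checked numerically. A sketch with NumPy, using the same basis vectors:

```python
import numpy as np

# Columns are Jennifer's basis vectors, written in our coordinates.
A = np.array([[2.0, -1.0],
              [1.0,  1.0]])

# The inverse translates in the opposite direction: our language -> hers.
# It works out to have columns (1/3, -1/3) and (1/3, 2/3).
A_inv = np.linalg.inv(A)

v_ours = np.array([3.0, 2.0])      # the vector (3, 2) in our system
v_jennifer = A_inv @ v_ours        # (5/3, 1/3) in Jennifer's system
print(v_jennifer)
```

Multiplying back by `A` recovers (3, 2), which is exactly the "playing the transformation backwards" idea from the text.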
For this next part, it's important that you're all comfortable representing transformations with matrices, and that you know how matrix multiplication corresponds to composing successive transformations. Definitely pause and take a look at chapters 3 and 4 if any of that feels unfamiliar. Consider some linear transformation, like a 90-degree counterclockwise rotation. When you and I represent this with a matrix, we follow where the basis vectors i-hat and j-hat each go. i-hat ends up at the spot with coordinates (0, 1), and j-hat ends up at the spot with coordinates (-1, 0). So those coordinates become the columns of our matrix. But this representation is heavily tied up in our choice of basis vectors, from the fact that we're following i-hat and j-hat in the first place, to the fact that we're recording their landing spots in our own coordinate system. How would Jennifer describe this same 90-degree rotation of space? You might be tempted to just translate the columns of our rotation matrix into Jennifer's language. But that's not quite right. Those columns represent where our basis vectors i-hat and j-hat go. But the matrix that Jennifer wants should represent where her basis vectors land, and it needs to describe those landing spots in her language. Here's a common way to think of how this is done. Start with any vector written in Jennifer's language. Rather than trying to follow what happens to it in terms of her language, first we're going to translate it into our language using the change of basis matrix, the one whose columns represent her basis vectors in our language. This gives us the same vector, but now written in our language. Then apply the transformation matrix to what you get by multiplying it on the left. This tells us where that vector lands, but still in our language. Last step, apply the inverse change of basis matrix, multiplied on the left as usual, to get the transformed vector, but now in Jennifer's language. 
Since we could do this with any vector written in her language, first applying the change of basis, then the transformation, then the inverse change of basis, that composition of three matrices gives us the transformation matrix in Jennifer's language. It takes in a vector in her language and spits out the transformed version of that vector in her language. For this specific example, when Jennifer's basis vectors look like (2, 1) and (-1, 1) in our language, and when the transformation is a 90-degree rotation, the product of these three matrices, if you work through it, has columns (1/3, 5/3) and (-2/3, -1/3). So if Jennifer multiplies that matrix by the coordinates of a vector in her system, it will return the 90-degree rotated version of that vector, expressed in her coordinate system. In general, whenever you see an expression like A-inverse times M times A, it suggests a mathematical sort of empathy. That middle matrix represents a transformation of some kind as you see it, and the outer two matrices represent the empathy, the shift in perspective. And the full matrix product represents that same transformation, but as someone else sees it. For those of you wondering why we care about alternate coordinate systems, the next video on eigenvectors and eigenvalues will give a really important example of this. See you then.
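The three-matrix composition A⁻¹MA can be worked out directly. A sketch in NumPy with the rotation and basis from this example:

```python
import numpy as np

# Change of basis: Jennifer's language -> ours (columns are her basis vectors).
A = np.array([[2.0, -1.0],
              [1.0,  1.0]])

# 90-degree counterclockwise rotation, written in our language:
# i-hat lands at (0, 1), j-hat lands at (-1, 0).
M = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# The same rotation, expressed in Jennifer's coordinate system:
# translate into our language, rotate, translate back.
M_jennifer = np.linalg.inv(A) @ M @ A
print(M_jennifer)  # columns (1/3, 5/3) and (-2/3, -1/3)
```

As a sanity check, rotating a vector in her coordinates with `M_jennifer` and then translating the result into our language gives the same answer as translating first and rotating with `M`.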
From Newton’s method to Newton’s fractal (which Newton knew nothing about)
You've seen the title, so you know this is leading to a certain fractal. And actually, it's an infinite family of fractals. And yeah, it'll be one of those mind-bogglingly intricate shapes that has infinite detail no matter how far you zoom in. But this is not really a video about generating some pretty picture for us to gawk at. Well, okay, maybe that's part of it, but the real story here has a much more pragmatic starting point than the story behind a lot of other fractals. And more than that, the final images that we get to will become a lot more meaningful if we make an effort to understand why, given what they represent, they kind of have to look as complicated as they do, and what this complexity reflects about an algorithm that is used all over the place in engineering. The starting point here will be to assume that you have some kind of polynomial and that you want to know when it equals zero. For the one graphed here, you can visually see there are three different places where it crosses the x-axis, and you can kind of eyeball what those values might be. We'd call those the roots of the polynomial. But how do you actually compute them, exactly? Now, this is the kind of question where if you're already bought into math, maybe it's interesting enough in its own right to move forward. But if you just pull someone off the street and ask them this, I mean, they're already falling asleep, because who cares? But the thing is, this kind of question comes up all the time in engineering. Where I'm personally most familiar with equations like this popping up is in the setting of computer graphics, where polynomials are just littered all over the place. So it's not uncommon that when you're figuring out how a given pixel should be colored, that somehow involves solving an equation that uses these polynomials. Here, let me give you one fun example. When a computer renders text on the screen, those fonts are typically not defined using pixel values. 
They're defined as a bunch of polynomial curves, known in the business as Bézier curves. And any of you who've messed around with vector graphics, maybe in some design software, would be well familiar with these kinds of curves. But to actually display one of them on the screen, you need a way to tell each one of the pixels of your screen whether it should be colored in or not. These curves can be displayed either with some kind of stroke width or, if they enclose a region, some kind of fill for that region. But if you step back and really think about it, it's an interesting puzzle to figure out how each one of the pixels knows whether it should be colored in or not, just based on the pure mathematical curve. I mean, take the case of stroke width. This comes down to understanding how far away a given pixel is from this pure mathematical curve, which itself is some platonic ideal with zero width. You would think of it as a parametric curve that has some parameter t. Now, one thing that you could do to figure out this distance is to compute the distance between your pixel and a bunch of sample points on that curve, and then figure out the smallest. But that's both inefficient and imprecise. Better is to get a little mathematical and acknowledge that this distance to the curve at all the possible points is itself some smooth function of the parameter. And as it happens, the square of that distance will itself be a polynomial, which makes it pretty nice to deal with. And if this were meant to be a full lesson on rendering vector graphics, we could expand all that out and embrace the mess. But right now, the only salient point that I want to highlight is that, in principle, this function whose minimum you want to know is some polynomial. Finding this minimum, and hence determining how close the pixel is to the curve and whether it should get filled in, is now just a classic calculus problem. 
What you do is figure out the slope of this function's graph, which is to say its derivative, again some polynomial, and you ask, when does that equal zero? So, to actually carry out this seemingly simple task of just displaying a curve, wouldn't it be nice if you had a systematic and general way to figure out when a given polynomial equals zero? Of course, we could draw a hundred other examples from a hundred other disciplines. I just want you to keep in mind that as we seek the roots of polynomials, even though we always display it in a way that's cleanly abstracted away from the messiness of any real-world problem, the task is hardly just an academic one. But again, ask yourself, how do you actually compute one of those roots? If whatever problem you're working on leads you to a quadratic function, then happy days: you can use the quadratic formula that we all know and love. And as a fun side note, by the way, again relevant to root-finding in computer graphics, I once had a Pixar engineer give me the estimate that, considering how many lights were used in some of the scenes for the movie Coco, and given the nature of some of these per-pixel calculations when polynomially defined things like spheres are involved, the quadratic formula was easily used multiple trillions of times in the production of that film. Now, when your problem leads you to a higher-order polynomial, things start to get trickier. For cubic polynomials, there is also a formula, which Mathologer has done a wonderful video on, and there's even a quartic formula, something that solves degree-four polynomials, although honestly that one is such a godawful nightmare of a formula that essentially no one actually uses it in practice. But after that, and I find this one of the most fascinating results in all of math, you cannot have an analogous formula to solve polynomials that have a degree five or more. 
More specifically, for a pretty extensive set of standard functions, you can prove that there is no possible way to combine those functions together that allows you to plug in the coefficients of a quintic polynomial and always get out a root. This is known as the unsolvability of the quintic, which is a whole other can of worms we can hopefully get into at some other time. But in practice, it kind of doesn't matter, because we have algorithms to approximate solutions to these kinds of equations with whatever level of precision you want. A common one, and the main topic for you and me today, is Newton's method. And yes, this is what will lead us to the fractals, but I want you to pay attention to just how innocent and benign the whole procedure seems at first. The algorithm begins with a random guess, let's call it x₀. Almost certainly, the output of your polynomial at x₀ is not zero, so you haven't found a solution; it's some other value, visible as the height of the graph at that point. So to improve the guess, the idea is to ask, when does a linear approximation to the function around that value equal zero? In other words, if you were to draw a tangent line to the graph at this point, when does that tangent line cross the x-axis? Now, assuming this tangent line is a decent approximation of the function in the loose vicinity of some true root, the place where this approximation equals zero should take you closer to that true root. And as long as you're able to take a derivative of this function, and with polynomials you'll always be able to do that, you can concretely compute the slope of this line. So here's where the active viewers among you might want to pause and ask, how do you figure out the difference between the current guess and the improved guess? What is the size of this step? 
One way to think of it is to consider the fact that the slope of this tangent line, that's rise over run, looks like the height of this graph divided by the length of that step. But on the other hand, of course, the slope of the tangent line is the derivative of the polynomial at that point. If we rearrange this equation, it gives you a super concrete way to compute that step size. So the next guess, which we might call x₁, is the previous guess adjusted by this step size. And after that, you can just repeat the process. You compute the value of this function and the slope at this new guess, which gives you a new linear approximation, and then you make the next guess, x₂, wherever that tangent line crosses the x-axis. Then apply the same calculation to x₂, and this gives you x₃. And before too long, you find yourself extremely close to a true root, pretty much as close as you could ever want to be. It's always worth gut-checking that a formula actually makes sense, and in this case, hopefully it does: if p(x) is large, meaning the graph is really high, you need to take a bigger step to get down to a root. But if p'(x) is also large, meaning the graph is quite steep, you should maybe ease off on just how big you make that step. Now, as the name suggests, this was a method that Newton used to solve polynomial expressions, but he sort of made it look a lot more complicated than it needed to be. And a fellow named Joseph Raphson published a much simpler version, more like what you and I are looking at now. So you also often hear this algorithm called the Newton-Raphson method. These days it's a common topic in calculus classes. One nice little exercise to try to get a feel for it, by the way, is to use this method to approximate square roots by hand. 
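The update rule described above, xₙ₊₁ = xₙ − p(xₙ)/p′(xₙ), is only a few lines of code. A sketch in Python; the example function and starting guess are just illustrative:

```python
def newton(f, df, x0, steps=20):
    """Repeatedly replace the guess with the zero of the tangent line:
    x_{n+1} = x_n - f(x_n) / df(x_n)."""
    x = x0
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x

# The square-root exercise from the text: sqrt(2) is a root of p(x) = x^2 - 2.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(root)  # converges to 1.41421356...
```

Starting near the root, the number of correct digits roughly doubles with each step, which is why a handful of iterations is already "as close as you could ever want to be."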
But what most calculus students don't see, which is unfortunate, is just how deep things can get when you let yourself play around with this seemingly simple procedure and start kind of picking at some of its scabs. You see, while Newton's method works great if you start near a root, where it converges really quickly, if your initial guess is far from a root, it can have a couple foibles. For example, let's take the function we were just looking at, but shift it upward and play the same game with the same initial guess. Notice how the sequence of new guesses that we're getting kind of bounces around the local minimum of this function sitting above the x-axis. This should kind of make sense, I mean a linear approximation of the function around these values, all the way to the right, is pretty much entirely unrelated to the nature of the function around the one true root that it has off to the left. So they're sort of giving you no useful information about that true root. It's only when this process just happens to throw the new guess off far enough to the left by chance that the sequence of new guesses does anything productive and actually approaches that true root. Where things get especially interesting is if we ask about finding roots in the complex plane. Even if a polynomial like the one shown here has only a single real number root, you'll always be able to factor this polynomial into five terms like this if you allow these roots to potentially be complex numbers. This is the famous fundamental theorem of algebra. Now, in the happy-go-lucky land of functions with real number inputs and real number outputs, where you can picture the association between inputs and outputs as a graph, Newton's method has this really nice visual meaning with tangent lines and intersecting the x-axis. 
But if you want to allow these inputs to be any complex number, which means our corresponding outputs might also be any complex number, you can't think about tangent lines and graphs anymore. But the formula doesn't really care how you visualize it. You can still play the same game, starting with a random guess, evaluating the polynomial at this point as well as its derivative, then using this update rule to generate a new guess. And hopefully that new guess is closer to the true root. But I do want to be clear: even if we can't visualize these steps with a tangent line, it really is the same logic. We're figuring out where a linear approximation of the function around your guess would equal zero, and then you use that zero of the linear approximation as your next guess. It's not like we're blindly applying the rule to a new context with no reason to expect it to work. And indeed, with at least the one I'm showing here, after a few iterations you can see that we land on a value whose corresponding output is essentially zero. Now here's the fun part. Let's apply this idea to many different possible initial guesses. For reference, I'll put up the five true roots of this particular polynomial in the complex plane. With each iteration, each one of our little dots takes some step based on Newton's method. Most of the dots will quickly converge to one of the five true roots, but there are some noticeable stragglers, which seem to spend a while just bouncing around. In particular, notice how the ones that are trapped on the positive real number line look a little bit lost. And this is exactly what we already saw before for the same polynomial, when we were looking at the real number case with its graph. Now what I'm going to do is color each one of these dots based on which of those five roots it ended up closest to, and then we'll kind of roll back the clock so that every dot goes back to where it started. 
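The same update rule runs unchanged on complex numbers. A sketch using z⁵ − 1 as a stand-in quintic (not the exact polynomial from the video), whose five roots are the fifth roots of unity:

```python
# Illustrative quintic (not the video's exact polynomial): p(z) = z^5 - 1.
# Its five roots are the fifth roots of unity on the unit circle.
p = lambda z: z ** 5 - 1
dp = lambda z: 5 * z ** 4

z = 0.4 + 0.3j  # an arbitrary initial guess in the complex plane
for _ in range(40):
    z = z - p(z) / dp(z)  # the same Newton update, now on complex inputs

print(z, abs(p(z)))  # z has settled on one of the five roots; |p(z)| is ~0
```

Python's built-in `complex` type handles the arithmetic, so nothing about the code itself needs to know whether the inputs are real or complex.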
Now, as I've done it here, this isn't quite enough resolution to get the full story. So let me show you what it would look like if we started with an even finer grid of initial guesses and played the same game, applying Newton's method a whole bunch of times, letting each dot march forward, coloring each dot based on which root it lands on, then rolling back the clock to see where it originally came from. But even this isn't really a high enough resolution to appreciate the pattern. If we did this process for every single pixel on the plane, here's what you would get. And at this level of detail, the color scheme is a little jarring to my eye, at least, so let me calm it down a little. Really, whatever resolution I try to use to show this to you here could never possibly be enough, because the finer details of the shape we get go on with endless complexity. But take a moment to think about what this is actually saying. It means that there are regions in the complex plane where, if you slightly adjust that seed value, you know, you just kind of bump it to the side by one one-millionth or one one-trillionth, it can completely change which of the five true roots it ends up landing on. We saw some foreshadowing of this kind of chaos with the real graph and the problematic guess shown earlier, but picturing all of this in the complex plane really shines a light on just how unpredictable this kind of root-finding algorithm can be, and how there are whole swaths of initial values where this sort of unpredictability will take place. Now, if I grab one of these roots and change it around, meaning that we're using a different polynomial for the process, you can see how the resulting fractal pattern changes. And notice, for example, how the regions around a given root always have the same color, since those are the points that are close enough to the root that this linear approximation scheme works as a way of finding that root with no problem. 
All of the chaos seems to be happening at the boundaries between the regions. Remember that. And it seems like no matter where I place these roots, those fractal boundaries are always there. It clearly wasn't just some one-off for the polynomial we happened to start with; it seems to be a general fact for any given polynomial. Another facet we can tweak here, just to better illustrate what's going on, is how many steps of Newton's method we're using. For example, if I had the computer take zero steps, meaning it just colors each point of the plane based on whichever root it's already closest to, this is what we'd get. And this kind of diagram actually has a special name: it's called a Voronoi diagram. If we let each point of the plane take a single step of Newton's method, and then color it based on which root that single-step result is closest to, here's what we would get. Similarly, if we allow for two steps, we get a slightly more intricate pattern, and so on and so on, where the more steps you allow, the more intricate an image you get, bringing us closer to the original fractal. And this is important: keep in mind that the true shape we're studying here is not any one of these, it's the limit as we allow for an arbitrarily large number of iterations. At this point, there are so many questions we might ask. Maybe you want to try this out with some other polynomials, to see how general it is, or maybe you want to dig deeper into what dynamics are exactly possible with these iterated points, or see if there are connections with some other pieces of math that have a similar theme. But I think the most pertinent question should be something like, what the f*** is going on here? I mean, all we're doing here is repeatedly solving linear approximations. Why would that produce something that's so endlessly complicated? It almost feels like the underlying rule here just shouldn't carry enough information to actually produce an image like this. 
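The zero-step versus many-step comparison above is easy to reproduce. A sketch in NumPy for a cubic with hypothetical roots (the roots, grid, and step counts here are illustrative choices, not the ones from the video):

```python
import numpy as np

# Three hypothetical roots of a cubic in the complex plane.
roots = np.array([1.0 + 0.0j, -0.5 + 0.8j, -0.5 - 0.8j])
coeffs = np.poly(roots)        # polynomial coefficients from its roots
dcoeffs = np.polyder(coeffs)   # coefficients of the derivative

def color(z, steps):
    """Run `steps` Newton iterations on every seed, then label each seed
    by the index of the root its iterate ended up closest to."""
    z = np.array(z, dtype=complex)
    for _ in range(steps):
        z = z - np.polyval(coeffs, z) / np.polyval(dcoeffs, z)
    return np.argmin(np.abs(z[..., None] - roots), axis=-1)

# A grid of seed values covering a patch of the complex plane.
xs = np.linspace(-2.0, 2.0, 200)
grid = xs[None, :] + 1j * xs[:, None]

voronoi = color(grid, steps=0)   # straight-edged nearest-root regions
fractal = color(grid, steps=30)  # approaches the Newton fractal
```

Rendering `voronoi` and `fractal` as images (e.g. with matplotlib's `imshow`) shows exactly the progression described: straight Voronoi boundaries at zero steps, increasingly intricate boundaries as the step count grows.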
And before seeing this, don't you think a reasonable initial guess might have been that each seed value simply tends towards whichever root it's closest to? In that case, you know, if you colored each point based on the root it lands on and moved it back to its original position, the final image would look like one of these Voronoi diagrams, with straight-line boundaries. And since I referenced earlier the unsolvability of the quintic, maybe you would wonder if the complexity here has anything to do with that. That would be cool, but they are essentially unrelated ideas. In fact, using only degree-5 polynomials so far might have been a little misleading. Watch what happens if we play the same game, but with a cubic polynomial, with three roots somewhere in the complex plane. Notice how, again, while most points nestle into a root, some of them are flying all over the place more chaotically. In fact, those ones are the most noticeable ones in an animation like this, with the ones going towards the roots just quietly nestled in at their ending points. And again, if we stop this at some number of iterations, color all the points based on which root they're closest to, and roll back the clock, the relevant picture for all possible starting points forms this fractal pattern with infinite detail. However, quadratic polynomials, with only two roots, are different. In that case, each seed value does simply tend towards whichever root it's closest to, the way you might expect. There is a little bit of meandering behavior from all the points that are equidistant from the two roots; it's kind of like they're not able to decide which one to go to. But that's just a single line of points, and when we play the game of coloring, the diagram we end up with is decidedly more boring. So something new seems to happen when you jump from two to three, and the question is what, exactly? 
And if you had asked me a month ago, I probably would have shrugged and just said, you know, math is what it is. Sometimes the answers look simple, sometimes not. It's not always clear what it would mean to ask why in a setting like this. But I would have been wrong. There actually is a reason we can give for why this image has to look as complicated as it does. You see, there's a very peculiar property that we can prove this diagram must have. Focus your attention on just one of the colored regions, say this blue one. In other words, the set of all points that eventually tend towards just one particular root of the polynomial. Now consider the boundary of that region, which for the example shown on screen has this kind of nice threefold symmetry. What's surprising is that if you look at any other color and consider its boundary, you get precisely the same set. Now, when I say the word boundary, you probably have an intuitive sense of what it means, but mathematicians have a pretty clever way to formalize it, and this makes it easier to reason about in the context of more wild sets like our fractal. We say that a point is on the boundary of a set if, when you draw a small circle centered at that point, no matter how small, it will always contain points that are both inside that set and outside. So if you have a point that's on the interior, a small enough circle would eventually contain only points inside the set. And for a point on the exterior, a small enough circle contains no points of the set at all. But when it's on the boundary, what it means to be on the boundary is that your tiny, tiny circles will always contain both. So looking back at our property, one way to read it is to say that if you draw a circle, no matter how small that circle, it either contains all of the colors, which happens when this shared boundary of the colors is inside that circle, or it contains just one color, which happens when it's in the interior of one of the regions. 
In particular, what this implies is you should never be able to find a circle that contains just two of the colors, since that would require having points on the boundary between two regions, but not all of them. And before explaining where this fact actually comes from, it's fun to try just wrapping your mind around it a little bit. You could imagine presenting this to someone as a kind of art puzzle, completely out of context, never mentioning Newton's method or anything like that, where you say that the challenge is to construct a picture with at least three colors, maybe we say red, green, and blue, so that the boundary of one color is the boundary of all of them. So if you started with something simple like this, that clearly doesn't work, because we have this whole line of points that are on the boundary of green and red, but not touching any blue. And likewise, you have these other lines of disallowed points. So to correct that, you might go and add some blue blobs along the boundary. And then likewise, add some green blobs between the red and blue, and some red blobs between the green and blue. But of course, now the boundaries of those blobs are a problem, for example touching just blue and red, but no green. So maybe you go and try to add even smaller blobs, with the relevant third color, around those smaller boundaries to help correct things. And likewise, you have to do this for every one of the blobs that you initially added. But then all the boundaries of those tiny blobs are problems of their own, and you would have to somehow keep doing this process forever. And if you look at Newton's fractal itself, this sort of blobs-on-blobs-on-blobs pattern seems to be exactly what it's doing. The main thing I want you to notice is how this property implies you could never have a boundary which is smooth, or even partially smooth on some small segment, since any smooth segment would only be touching two colors. 
Instead, the boundary has to consist entirely of sharp corners, so to speak. So if you believe the property, it explains why the boundary remains rough no matter how far you zoom in. And for those of you who are familiar with the concept of fractal dimension, you can measure the dimension of the particular boundary I'm showing you right now to be around 1.44. Considering what our colors actually represent, remember this isn't just a picture for picture sake, think about what the property is really telling us. It says that if you're near a sensitive point where some of the seed values go to one root, but other seed values nearby would go to another root, then in fact, every possible root has to be accessible from within that small neighborhood. For any tiny little circle that you draw, either all of the points in that circle tend to just one root, or they tend to all of the roots. But there's never going to be anything in between, just tending to a subset of the roots. For a little intuition, I found it enlightening to simply watch a cluster like the one on screen undergo this process. It starts off mostly sticking together, but at one iteration, they all kind of explode outward. And after that, it feels a lot more reasonable that any root is up for grabs. And keep in mind, I'm just showing you finitely many points, but in principle, you would want to think about what happens to all, uncountably infinitely many points inside some small disk. This property also kind of explains why it's okay for things to look normal in the case of quadratic polynomials with just two roots, because there a smooth boundary is fine. There's only two colors to touch anyway. To be clear, it doesn't guarantee that the quadratic case would have a smooth boundary. It is perfectly possible to have a fractal boundary between two colors. 
It just looks like our Newton's method diagram is not doing anything more complicated than it needs to under the constraint of this strange boundary condition. But of course, all of this simply raises the question of why this bizarre boundary property would have to be true in the first place. Where does it even come from? For that, I'd like to tell you about a field of math, which studies this kind of question. It's called holomorphic dynamics. And I think we've covered enough ground today, and there's certainly enough left to tell, so it makes sense to pull that out as a separate video. To close things off here, there is something sort of funny to me about the fact that we call this Newton's fractal. Like the fact that Newton had no clue about any of this, and could never have possibly played with these images the way that you and I can with modern technology. And it happens a lot through math that people's names get attached to things well beyond what they could have dreamed of. Hamiltonians are central to quantum mechanics, despite Hamilton knowing nothing about quantum mechanics. Fourier himself never once computed a fast Fourier transform. The list goes on. But this overextension of nomenclature carries with it what I think is an inspiring point. It reflects how even the simple ideas, ones that could be discovered centuries ago, often hold within them some new angle or a new domain of relevance that can sit waiting to be discovered hundreds of years later. It's not just that Newton had no idea about Newton's fractal. There are probably many other facts about Newton's method, or about all sorts of math that may seem like old news that come from questions that no one has thought to ask yet. Things that are just sitting there, waiting for someone like you to ask them. For example, if you were to ask about whether this process we've been talking about today ever gets trapped in a cycle, it leads you to a surprising connection with the Mandelbrot set. 
And we'll talk a bit about that in the next part. At the time that I'm posting this, that second part, by the way, is available as an early release to patrons. I always like to give new content a little bit of time there to gather feedback and catch errors. The finalized version should be out shortly. And on the topic of patrons, I do just want to say a quick thanks to everyone whose name is on the screen. I know that in recent history new videos have been a little slow coming. Part of this has to do with other projects that have been in the works. Things I'm proud of, by the way, things like the Summer of Math Exposition, which was a surprising amount of work, to be honest, but so worth it given the outcome. I will be talking all about that and announcing winners very shortly, so stay tuned. I just want you to know that the plan for the foreseeable future is definitely to shift gears more wholeheartedly back to making new videos. And more than anything, I want to say thanks for your continued support, even during times of trying a few new things. It means a lot to me, it's what keeps the channel going, and I'll do my best to make the new lessons in the pipeline live up to your vote of confidence there.
Q&A #2 + Net Neutrality Nuance
Hey everyone, no math here, I just want to post two quick announcements for you. First, the number of you who have opted to subscribe to this channel has once again rolled over a power of two, which is just mind-boggling to me. I'm still touched that there are so many of you who enjoy math like this, just for fun, apparently, and who have helped to support the channel by watching it, by sharing some of the content here, and of course, for a very special subset of you, directly supporting through Patreon. Really, each and every one of the two to the 19th of you means a lot to me. And as a thanks, and just for fun, I'm going to be doing a second Q&A session. I left a link to where you can ask and upvote questions; it should be on the screen here, and in the description. And just like with the first one, I'll answer questions in the podcast that I do with Ben Eater and Ben Stenhaug, since, I don't know, I think discussions are more fun than answers in a vacuum. By the way, if you guys don't already know about Eater's channel, what are you doing? This man is the Bob Ross of computer engineering. If you want to understand how a computer works, and I mean really understand it, from the ground up, his channel is 100% the place to go. Trust me, he is a very good explainer. But anyway, the reason I bring him up is that he and I just recorded a pretty interesting conversation about net neutrality. And I want to be very clear, Ben Eater is not against net neutrality, and nor am I. However, the issue is a lot more nuanced than I first realized. And because there are already many great videos about net neutrality, and why it's a good thing, which I agree with, to offer content that you may not have seen before, this conversation was a chance to explore some of the trade-offs that are at play here. 
You see, the thing about Eater is that before he was creating phenomenal educational content, he worked for many years in the networking industry, so he has a pretty clear view of both sides of the equation, and a pretty intelligent way of articulating what they are. I'm just going to play you a minute of the conversation here, and then link to the full video, if you want to go see it on Ben's channel, which you should be going to check out anyway. But for those of you who would prefer to listen to a 45-minute conversation in podcast form, maybe over a commute, we did also publish it as an episode of Ben, Ben and Blue. You know, consumers kind of want an unlimited service. Or we want to believe we have one. You want to believe you have an unlimited service. It's essentially guaranteed that we'll never exercise that. Right. And the one or two people way off on the end that are abusing it. Providers would call it abusing. I think the customers would say they're using what they're paying for. But the ones that are really off the charts, the providers would just limit the traffic. We'll slow them down, or slow down the applications. Not to interrupt, but it's such an interesting example, just because this is a peer-to-peer type thing. And then there's a whole pile of hype around decentralized possibilities with, like, blockchain and whatnot. And to the extent that that is an aspect of the future of the internet, that you have a little bit more possibility for some services to be decentralized in this way. And I think there's even a lot of things that just have a straight-up BitTorrent type flavor when it comes to file sharing and things of that sort. Like, do you see that as a little bit more on the horizon? And would you see that as an example of potentially harmful things that come about when you are very strict about net neutrality, about abiding by it? I don't know. I wouldn't necessarily categorize this as peer-to-peer. I think the... 
I also understand that you're not saying it's necessarily dangerous to abide by it strictly. I'm sort of eking that out of you. But... No, I don't... I mean, I think the thing that happened with BitTorrent, and we're going a little bit too far down that rabbit hole, but the thing that happened with BitTorrent was this was an unusual thing. This was...
Fractals are typically not self-similar
Who doesn't like fractals? They're a beautiful blend of simplicity and complexity, often including these infinitely repeating patterns. Programmers in particular tend to be especially fond of them, because it takes a shockingly small amount of code to produce images that are way more intricate than any human hand could ever hope to draw. But a lot of people don't actually know the definition of a fractal, at least not the one that Benoit Mandelbrot, the father of fractal geometry, had in mind. A common misconception is that fractals are shapes that are perfectly self-similar. For example, this snowflake-looking shape right here, called the von Koch snowflake, consists of three different segments. And each one of these is perfectly self-similar, in that when you zoom in on it, you get a perfectly identical copy of the original. Likewise, the famous Sierpinski triangle consists of three smaller, identical copies of itself. And don't get me wrong, self-similar shapes are definitely beautiful, and they're a good toy model for what fractals really are. But Mandelbrot had a much broader conception in mind, one motivated not by beauty, but more by a pragmatic desire to model nature in a way that actually captures roughness. In some ways, fractal geometry is a rebellion against calculus, whose central assumption is that things tend to look smooth if you zoom in far enough. But Mandelbrot saw this as overly idealized, or at least needlessly idealized, resulting in models that neglect the finer details of the thing that they're actually modeling, which can matter. What he observed is that self-similar shapes give a basis for modeling the regularity in some forms of roughness. But the popular perception that fractals only include perfectly self-similar shapes is another over-idealization, one that ironically goes against the pragmatic spirit of fractal geometry's origins. 
The real definition of fractals has to do with this idea of fractal dimension, the main topic of this video. You see, there is a sense, a certain way to define the word dimension, in which the Sierpinski triangle is approximately 1.585-dimensional, and the von Koch curve is approximately 1.262-dimensional. The coastline of Britain turns out to be around 1.21-dimensional, and in general, it's possible to have shapes whose dimension is any positive real number, not just whole numbers. I think when I first heard someone reference fractional dimension like this, I just thought it was nonsense, right? I mean, mathematicians are clearly just making stuff up. Dimension is something that usually only makes sense for natural numbers, right? A line is one-dimensional, a plane, that's two-dimensional, the space that we live in, that's three-dimensional, and so on. And in fact, any linear algebra student who just learned the formal definition of dimension in that context would agree, it only makes sense for counting numbers. And of course, the idea of fractal dimension is just made up. I mean, this is math, everything's made up. But the question is whether or not it turns out to be a useful construct for modeling the world. And I think you'll agree, once you learn how fractal dimension is defined, it's something that you start seeing almost everywhere that you look. It actually helps to start the discussion here by only looking at perfectly self-similar shapes. In fact, I'm going to start with four shapes, the first three of which aren't even fractals: a line, a square, a cube, and a Sierpinski triangle. All of these shapes are self-similar. A line can be broken up into two smaller lines, each of which is a perfect copy of the original, just scaled down by a half. A square can be broken down into four smaller squares, each of which is a perfect copy of the original, just scaled down by a half. 
Likewise, a cube can be broken down into eight smaller cubes, again, each one a version scaled down by one half, and the core characteristic of the Sierpinski triangle is that it's made of three smaller copies of itself, where the side length of one of those smaller copies is one half the side length of the original triangle. Now, it's fun to compare how we measure these things. We'd say that the smaller line is one half the length of the original line, the smaller square is one quarter the area of the original square, the smaller cube is one eighth the volume of the original cube, and that smaller Sierpinski triangle? Well, we'll talk about how to measure that in just a moment. What I want is a word that generalizes the idea of length, area, and volume, but that I can apply to all of those shapes and more. And typically in math, the word that you'd use for this is measure, but I think it might be more intuitive to talk about mass. As in, imagine that each of these shapes is made out of metal: a thin wire, a flat sheet, a solid cube, and some kind of Sierpinski mesh. Fractal dimension has everything to do with understanding how the mass of these shapes changes as you scale them. The benefit of starting the discussion with self-similar shapes is that it gives us a nice clear-cut way to compare masses. When you scale down that line by one half, the mass also scales down by one half, which you can viscerally see because it takes two copies of that smaller one to form the whole. When you scale down a square by one half, its mass scales down by one fourth, where again you can see this by piecing together four of the smaller copies to get the original. Likewise, when you scale down that cube by one half, the mass scales down by one eighth, or one half cubed, because it takes eight copies of that smaller cube to rebuild the original. 
When you scale down the Sierpinski triangle by a factor of one half, wouldn't you agree that it makes sense to say that its mass goes down by a factor of one third? I mean, it takes exactly three of those smaller ones to form the original. But notice that for the line, the square, and the cube, the factor by which the mass changed is this nice clean integer power of one half. In fact, that exponent is the dimension of each shape. And what's more, you could say that what it means for a shape to be, for example, two-dimensional, what puts the two in two-dimensional, is that when you scale it by some factor, its mass is scaled by that factor raised to the second power. And maybe what it means for a shape to be three-dimensional is that when you scale it by some factor, the mass is scaled by the third power of that factor. So if this is our conception of dimension, what should the dimensionality of a Sierpinski triangle be? You'd want to say that when you scale it down by a factor of one half, its mass goes down by one half to the power of, well, whatever its dimension is. And because it's self-similar, we know that we want its mass to go down by a factor of one third. So what's the number D such that raising one half to the power of D gives you one third? Well, that's the same as asking two to the what equals three, the quintessential type of question that logarithms are meant to answer. And when you go and plug in log base 2 of 3 to a calculator, what you'll find is that it's about 1.585. So in this way, the Sierpinski triangle is not one-dimensional, even though you could define a curve that passes through all its points. And nor is it two-dimensional, even though it lives in the plane. Instead, it's 1.585-dimensional. And if you want to describe its mass, neither length nor area seems like the fitting notion. If you tried, its length would turn out to be infinite, and its area would turn out to be zero. 
Instead, what you want is whatever the 1.585-dimensional analog of length is. Here, let's look at another self-similar fractal, the von Koch curve. This one is composed of four smaller, identical copies of itself, each of which is a copy of the original scaled down by one third. So the scaling factor is one third, and the mass has gone down by a factor of one fourth. So that means the dimension should be some number d so that when we raise one third to the power of d, it gives us one fourth. Well, that's the same as saying three to the what equals four, so you can go and plug into a calculator log base 3 of 4, and that comes out to be around 1.262. So in a sense, the von Koch curve is a 1.262-dimensional shape. Here's another fun one. This is kind of the right-angled version of the Koch curve. It's built up of eight scaled-down copies of itself, where the scaling factor here is one fourth. So if you want to know its dimension, it should be some number d such that one fourth to the power of d equals one eighth, the factor by which the mass just decreased. And in this case, the value we want is log base 4 of 8, and that's exactly three halves. So evidently, this fractal is precisely 1.5-dimensional. Does that kind of make sense? It's weird, but it's all just about scaling and comparing masses while you scale. And what I've described so far, everything up to this point, is what you might call self-similarity dimension. It does a good job making the idea of fractional dimension seem at least somewhat reasonable, but there's a problem: it's not really a general notion. I mean, when we were reasoning about how a shape's mass should change, it relied on the self-similarity of the shapes, that you could build them up from smaller copies of themselves. But that seems unnecessarily restrictive. After all, most two-dimensional shapes are not at all self-similar. Consider the disc, the interior of a circle. 
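All three of the self-similarity dimension computations above follow one formula: if a shape is built from N copies of itself, each scaled down by a factor s, its dimension d solves s^d = 1/N, that is, d = log(N)/log(1/s). A short sketch (the helper name here is just an illustrative choice):

```python
import math

def self_similarity_dimension(num_copies, scale):
    """Solve scale**d == 1/num_copies for d, i.e. d = log(N) / log(1/s)."""
    return math.log(num_copies) / math.log(1 / scale)

examples = {
    "Sierpinski triangle":  (3, 1 / 2),   # 3 copies, each scaled by 1/2
    "von Koch curve":       (4, 1 / 3),   # 4 copies, each scaled by 1/3
    "right-angled variant": (8, 1 / 4),   # 8 copies, each scaled by 1/4
}
for name, (n, s) in examples.items():
    print(f"{name}: {self_similarity_dimension(n, s):.3f}")
# Sierpinski triangle: 1.585, von Koch curve: 1.262, right-angled variant: 1.500
```

The line, square, and cube fall out of the same formula too: 2 copies at scale 1/2 gives 1, 4 copies gives 2, 8 copies gives 3.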
We know that's two-dimensional, and you can say that this is because when you scale it up by a factor of two, its mass, proportional to the area, gets scaled by the square of that factor, in this case four. But it's not like there's some way to piece together four copies of that smaller circle to rebuild the original. So how do we know that that bigger disc is exactly four times the mass of the original? Answering that requires a way to make this idea of mass a little more mathematically rigorous, since we're not dealing with physical objects made of matter, are we? We're dealing with purely geometric ones living in an abstract space. And there's a couple of ways to think about this, but here's a common one. Cover the plane with a grid, highlight all of the grid squares that are touching the disc, and now count how many there are. In the back of our minds, we already know that a disc is two-dimensional, and the number of grid squares that it touches should be proportional to its area. A clever way to verify this empirically is to scale up that disc by some factor, like two, and count how many grid squares touch this new scaled-up version. What you should find is that that number has increased approximately in proportion to the square of our scaling factor, which in this case means about four times as many boxes. Well, admittedly, what's on the screen here might not look that convincing, but it's just because the grid is really coarse. If instead you took a much finer grid, one that more tightly captures the intent we're going for here by measuring the size of the circle, that relationship of quadrupling the number of boxes touched when you scale the disc by a factor of two should shine through more clearly. I'll admit, though, that when I was animating this, I was surprised by just how slowly this value converges. Here's one way to think about this. 
If you were to plot the scaling factor against the number of boxes that the scaled disc touches, your data should very closely fit a perfect parabola, since the number of boxes touched is roughly proportional to the square of the scaling factor. For larger and larger scaling values, which is actually equivalent to just looking at a finer grid, that data is going to more perfectly fit that parabola. Now, getting back to fractals, let's play this game with a Sierpinski triangle, counting how many boxes are touching points in that shape. How would you imagine that number compares to scaling up the triangle by a factor of two and counting the new number of boxes touched? Well, the proportion of boxes touched by the big one to the number of boxes touched by the small one should be about three. After all, that bigger version is just built up of three copies of the smaller version. You could also think about this as two raised to the dimension of the fractal, which we just saw is about 1.585. And so if you were to go and plot the scaling factor in this case against the number of boxes touched by the Sierpinski triangle, the data would closely fit a curve with the shape of y equals x to the power 1.585, just multiplied by some proportionality constant. But importantly, the whole reason that I'm talking about this is that we can play the same game with non-self-similar shapes that still have some kind of roughness. And the classic example here is the coastline of Britain. If you plop that coastline into the plane and count how many boxes are touching it, and then scale it by some amount and count how many boxes are touching that new scaled version, what you'd find is that the number of boxes touching the coastline increases approximately in proportion to the scaling factor raised to the power of 1.21. Here, it's kind of fun to think about how you would actually compute that number empirically. 
As in, imagine I give you some shape, and you're a savvy programmer; how would you find this number? So what I'm saying here is that if you scale this shape by some factor, which I'll call S, the number of boxes touching that shape should equal some constant multiplied by that scaling factor raised to whatever the dimension is, the value that we're looking for. Now, if you have some data plot that closely fits a curve that looks like the input raised to some power, it can be hard to see exactly what that power should be. So a common trick is to take the logarithm of both sides. That way, the dimension is going to drop down from the exponent, and we'll have a nice clean linear relationship. What this suggests is that if you were to plot the log of the scaling factor against the log of the number of boxes touching the coastline, the relationship should look like a line, and that line should have a slope equal to the dimension. So what that means is that if you tried out a whole bunch of scaling factors, counted the number of boxes touching the coast in each instance, and then plotted the points on the log-log plot, you could then do some kind of linear regression to find the best-fit line to your data set, and when you look at the slope of that line, that tells you the empirical measurement for the dimension of what you're examining. I just think that makes this idea of fractal dimension so much more real and visceral compared to abstract, artificially perfect shapes. And once you're comfortable thinking about dimension like this, you, my friend, have become ready to hear the definition of a fractal. Essentially, fractals are shapes whose dimension is not an integer, but instead some fractional amount. What's cool about that is that it's a quantitative way to say that there are shapes that are rough, and that they stay rough even as you zoom in. 
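The procedure just described, count boxes at many scales and then fit a line on a log-log plot, can be sketched as follows. This is a toy illustration, not the code used in the video: it samples the Sierpinski triangle with the chaos game (an assumption; any rough shape would do) and refines the grid instead of scaling the shape, which, as noted above, is equivalent.

```python
import math
import random

def sierpinski_points(n=200_000, seed=1):
    """Sample points on the Sierpinski triangle via the chaos game."""
    rng = random.Random(seed)
    verts = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
    x, y = 0.25, 0.25
    pts = []
    for i in range(n + 20):
        vx, vy = rng.choice(verts)
        x, y = (x + vx) / 2, (y + vy) / 2
        if i >= 20:  # discard a short burn-in before the point reaches the attractor
            pts.append((x, y))
    return pts

def box_count(points, k):
    """Count cells touched on a 2^k x 2^k grid over the unit square."""
    size = 2 ** k
    return len({(min(int(px * size), size - 1), min(int(py * size), size - 1))
                for px, py in points})

def box_dimension(points, ks=range(2, 8)):
    """Least-squares slope of log(box count) against log(grid resolution)."""
    xs = [k * math.log(2) for k in ks]
    ys = [math.log(box_count(points, k)) for k in ks]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

print(box_dimension(sierpinski_points()))  # should land in the vicinity of log2(3) ~ 1.585
```

For a coastline you would replace `sierpinski_points` with sampled coordinates of the coast; the regression step is the same.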
Technically, there's a slightly more accurate definition, and I've included it in the video description, but this idea here of a non-integer dimension almost entirely captures the idea of roughness that we're going for. There is one nuance though that I haven't brought up yet, but it's worth pointing out, which is that this dimension, at least as I've described it so far using the box counting method, can sometimes change based on how far zoomed in you are. For example, here's a shape sitting in three dimensions, which at a distance looks like a line. In 3D, by the way, when you do a box counting, you have a 3D grid full of little cubes instead of little squares, but it works the same way. At this scale, where the shape's thickness is smaller than the size of the boxes, it looks one-dimensional, meaning the number of boxes it touches is proportional to its length. But when you scale it up, it starts behaving a lot more like a tube, touching the boxes on the surface of that tube. And so it'll look two-dimensional, with the number of boxes touched being proportional to the square of the scaling factor. But it's not really a tube, it's made of these rapidly winding little curves. So once you scale it up even more, to the point where the boxes can pick up on the details of those curves, it looks one-dimensional again, with the number of boxes touched, scaling directly in proportion to the scaling constant. So actually assigning a number to a shape for its dimension can be tricky, and it leaves room for differing definitions and differing conventions. In a pure math setting, there are indeed numerous definitions for dimension, but all of them focus on what the limit of this dimension is at closer and closer zoom levels. You can think of that in terms of the plot as the limit of this slope as you move farther and farther to the right. So for a purely geometric shape to be a genuine fractal, it has to continue looking rough, even as you zoom in infinitely far. 
But in a more applied setting, like looking at the coastline of Britain, it doesn't really make sense to talk about the limit as you zoom in more and more. I mean, at some point you'd just be hitting atoms. Instead, what you do is look at a sufficiently wide range of scales, from very zoomed out up to very zoomed in, and compute the dimension at each one. And in this more applied setting, a shape is typically considered to be a fractal only when the measured dimension stays approximately constant even across multiple different scales. For example, the coastline of Britain doesn't just look 1.21-dimensional at a distance; even if you zoom in by a factor of a thousand, the level of roughness is still around 1.21. That right there is the sense in which many shapes from nature actually are self-similar, albeit not perfectly self-similar. Perfectly self-similar shapes do play an important role in fractal geometry. What they give us are simple-to-describe, low-information examples of this phenomenon of roughness, roughness that persists at many different scales, and at arbitrarily close scales. And that's important; it gives us the primitive tools for modeling these fractal phenomena. But I think it's also important not to view them as the prototypical example of fractals, since fractals in general actually have a lot more character to them. I really do think that this is one of those ideas where once you learn it, it makes you start looking at the world completely differently. What this number is, what this fractional dimension gives us, is a quantitative way to describe roughness. For example, the coastline of Norway is about 1.52-dimensional, which is a numerical way to communicate the fact that it's way more jaggedy than Britain's coastline. The surface of a calm ocean might have a fractal dimension only barely above two, while a stormy one might have a dimension closer to 2.3. In fact, fractal dimension doesn't just arise frequently in nature. 
It seems to be the core differentiator between objects that arise naturally and those that are just manmade.
How to lie using visual proofs
Today I'd like to share with you three fake proofs in increasing order of subtlety, and then discuss what each one of them has to tell us about math. The first proof is for a formula for the surface area of a sphere, and the way that it starts is to subdivide that sphere into vertical slices, the way you might chop up an orange or paint a beach ball. We then unravel all of those wedge slices from the northern hemisphere so that they poke up like this, and then symmetrically unravel all of those from the southern hemisphere below, and now interlace those pieces to get a shape whose area we want to figure out. The base of this shape came from the circumference of the sphere; it's an unravelled equator, so its length is 2 pi times the radius of the sphere. And then the other side of this shape came from the height of one of these wedges, which is a quarter of a walk around the sphere, and so it has a length of pi halves times r. The idea is that this is only an approximation, the edges might not be perfectly straight, but if we think of the limit as we do finer and finer slices of the sphere, this shape whose area we want to know gets closer to being a perfect rectangle, one whose area will be pi halves r times 2 pi r, or in other words pi squared times r squared. The proof is elegant. It translates a hard problem into a situation that's easier to understand, it has that element of surprise while still being intuitive. Its only fault, really, is that it's completely wrong; the true surface area of a sphere is 4 pi r squared. I originally saw this example thanks to Henry Reich, and to be fair, it's not necessarily inconsistent with the 4 pi r squared formula, just so long as pi is equal to 4. For the next proof, I'd like to show you a simple argument for the fact that pi is equal to 4. We start off with a circle, say with radius 1, and we ask, how can we figure out its circumference? 
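For a quick numerical sanity check on the two competing formulas, we can integrate the sphere's area over thin bands of latitude. This is standard calculus, not the orange-slice construction from the fake proof; it just confirms which answer is right:

```python
import math

def sphere_area_numeric(r=1.0, n=100_000):
    """Integrate the sphere's area over bands of latitude: a band at polar
    angle theta has circumference 2*pi*r*sin(theta) and width r*dtheta."""
    dtheta = math.pi / n
    return sum(2 * math.pi * r * math.sin((i + 0.5) * dtheta) * r * dtheta
               for i in range(n))

print(sphere_area_numeric(1.0))  # ~12.566, i.e. 4*pi*r^2
print(math.pi ** 2 * 1.0 ** 2)   # ~9.870, the fake rectangle's answer
```

The gap between the two numbers, a ratio of 4/pi, is exactly what gets lost when the interlaced wedges are treated as a flat rectangle.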
After all, pi is by definition the ratio of this circumference to the diameter of the circle. We start off by drawing the square whose sides are all tangent to that circle. It's not too hard to see that the perimeter of the square is 8. Then, and some of you may have seen this before, it's a kind of classic argument, the argument proceeds by producing a sequence of curves, all of which also have this perimeter of 8, but which more and more closely approximate the circle. But the full nuance of this example is not always emphasized. First of all, just to make things crystal clear, the way each of these iterations works is to fold in each of the corners of the previous shape so that they just barely kiss the circle. And you can take a moment to convince yourself that in each region where a fold happened, the perimeter doesn't change. For example, in the upper right here, instead of walking up and then left, the new curve goes left and then up. And something similar is true at all of the folds of all of the different iterations. Wherever the previous iteration went direction A, then direction B, the new iteration goes direction B, then direction A, but no length is lost or gained. Some of you might say, well, obviously this isn't going to give the true perimeter of the circle, because no matter how many iterations you do, when you zoom in, it remains jagged, it's not a smooth curve. You're taking these very inefficient steps along the circle. While that is true, and ultimately the reason things are wrong, if you want to appreciate the lesson this example is teaching us, the claim of the example is not that any one of these approximations equals the curve. It's that the limit of all of the approximations equals our circle. And to appreciate the lesson that this example teaches us, it's worth taking a moment to be a little more mathematically precise about what I mean by the limit of a sequence of curves. 
Let's say we describe the very first shape, this square, as a parametric function, something that has an input T and outputs a point in 2D space, so that as T ranges from 0 to 1, it traces out that square. I'll call that function C0. And likewise we can parameterize the next iteration with a function I'll call C1; as the parameter T ranges from 0 up to 1, the output of this function traces along that curve. This is just so that we can think of these shapes as instead being functions. Now I want you to consider a particular value of T, maybe 0.2, and then consider the sequence of points that you get by evaluating the sequence of functions we have at this particular point. Now I want you to consider the limit as n approaches infinity of C sub n of 0.2. This limit is a well-defined point in 2D space. In fact, that point sits on the circle. And there's nothing specific about 0.2. We could do this limiting process for any input T, and so I can define a new function that I'll call C infinity, which by definition at any input T is whatever this limiting value for all the curves is. So here's the point. That limiting function C infinity is the circle. It's not an approximation of the circle, it's not some jagged version of the circle. It is the genuine smooth circular curve whose perimeter we want to know. And what's also true is that the limit of the lengths of all of our curves really is 8, because each individual curve really does have a perimeter of 8. And there are all sorts of examples throughout calculus when we talk about approximating one thing we want to know as a limit of a bunch of other things that are easier to understand. So the question at the heart here is why exactly is it not okay to do that in this example? And maybe at this point you step back and say, you know, it's just not enough for things to look the same. This is why we need rigor, it's why we need proofs. 
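To make that tension concrete, here is a small numerical sketch (my own illustration, not from the video): it builds the staircase approximation whose corners kiss a unit circle and checks that the taxicab perimeter stays at 8 no matter how fine the folds get, even as the curve's maximum distance from the circle shrinks toward zero.

```python
import math

def staircase(n):
    """Approximate the unit circle by an axis-aligned staircase with n
    folds per quarter, corners touching the circle (as in the pi = 4
    argument). Returns (total perimeter, max deviation from the circle)."""
    # sample points along the first quadrant of the circle
    pts = [(math.cos(math.pi / 2 * k / n), math.sin(math.pi / 2 * k / n))
           for k in range(n + 1)]
    length = 0.0
    deviation = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        # each fold moves horizontally then vertically, so it
        # contributes |dx| + |dy| to the perimeter
        length += abs(x1 - x0) + abs(y1 - y0)
        # the step's corner sits at (x1, y0), slightly off the circle
        deviation = max(deviation, abs(math.hypot(x1, y0) - 1.0))
    return 4 * length, deviation  # four symmetric quarters
```

However many folds you take, the perimeter comes out to exactly 8 while the deviation tends to zero: the length of the limiting curve (2 pi) is simply not the limit of the lengths.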
It's why since the days of Euclid, mathematicians have followed in his footsteps and deduced truths step-by-step from axioms forward. But for this last example, I would like to do something that doesn't lean as hard on visual intuition and instead give a Euclid-style proof for the claim that all triangles are isosceles. The way this will work is we'll take any particular triangle and make no assumptions about it. I'll label its vertices A, B and C. And what I would like to prove for you is that the side length AB is necessarily equal to the side length AC. Now to be clear, the result is obviously false. Just in the diagram I've drawn, you can visually see that these lengths are not equal to each other. But I challenge you to see if you can identify what's wrong about the proof I'm about to show you. Honestly, it's very subtle, and three gold stars for anyone who can identify it. The first thing I'll do is draw the perpendicular bisector of the line BC. So that means this angle here is 90 degrees, and this length is by definition the same as this length. And we'll label that intersection point D. And then next I will draw the angle bisector at A, which means by definition this little angle here is the same as this little angle here; I'll label both of them alpha. And we'll say that the point where these two intersect is P. And now, like a lot of Euclid-style proofs, we're just going to draw some new lines, figure out what things must be equal, and get some conclusions. For instance, let's draw the line from P which is perpendicular to the side AC, and we'll label that intersection point E. And likewise, we'll draw the line from P down to the other side, AB. Again, it's perpendicular, and we'll label that intersection point F. My first claim is that this triangle here, which is AFP, is the same, or at least congruent, to this triangle over here, AEP. Essentially, this follows from symmetry across that angle bisector. 
Here, more specifically, we can say they share a side length, and then they both have an angle alpha and both have an angle 90 degrees. So it follows by the side angle angle congruence relation. Maybe my drawing is a little bit sloppy, but the logic helps us see that they do have to be the same. Next, I'll draw a line from P down to B, and then from P down to C. And I claim that this triangle here is congruent to its reflection across that perpendicular bisector. Again, the symmetry maybe helps make this clear, but more rigorously, they both have the same base, they both have a 90 degree angle, and they both have the same height. So it follows by the side angle side relation. So based on that first pair of triangles, I'm going to mark this side length here as being the same as this side length here, marking them with double tick marks. And based on the second triangle relation, I'll mark this side length here as the same as this line over here, marking them with triple tick marks. And so from that, we have two more triangles that need to be the same. Namely, this one over here, and the one with corresponding two side lengths over here. And the reasoning here is they both have that triple ticked side, a double ticked side, and they're both 90 degree triangles. So this follows by the side side angle congruence relation. And all of those are valid congruence relations; I'm not pulling the wool over your eyes with one of those. And all of this will basically be enough to show us why AB has to be the same as AC. That first pair of triangles implies that the length AF is the same as the length AE. Those are corresponding sides to each other. I'll just color them in red here. And then that last triangle relation guarantees for us that the side FB is going to be the same as the side EC. I'll kind of color both of those in blue. And finally, the result we want basically comes from adding up these two equations. The length AF plus FB is clearly the same as the total length AB. 
And likewise, the length AE plus EC is the same as the total length AC. So, all in all, the side length AB has to be the same as the side length AC. And because we made no assumptions about the triangle, this implies that any triangle is isosceles. Actually, for that matter, since we made no assumptions about the specific two sides we chose, it implies that any triangle is equilateral. So this leaves us somewhat disturbingly with three different possibilities. All triangles really are equilateral, that's just the truth of the universe. Or, you can use Euclid-style reasoning to derive false results. Or, there's something wrong in the proof. But if there is, where exactly is it? So, what exactly is going on with these three examples? Now, the thing that's a little bit troubling about that first example with the sphere is that it is very similar in spirit to a lot of other famous and genuinely true visual proofs from geometry. For example, there's a very famous proof about the area of a circle that starts off by dividing it into a bunch of little pizza wedges, and you take all those wedges and you straighten them out, essentially lining up the crust of that pizza. And then we take half of the wedges and interlace them with the other half. And the idea is that this might not be a perfect rectangle, it's got some bumps and curves, but as you take thinner and thinner slices, you get something that's closer and closer to a true rectangle, and the width of that rectangle comes from half the circumference of the circle, which is, by definition, pi times r, and then the height of that rectangle comes from the radius of the circle, r, meaning that the whole area is pi r squared. This time, the result is valid, but why is it not okay to do what we did with the spheres, but somehow it is okay to do this with the pizza slices? 
The main problem with the sphere argument is that when we flatten out all of those orange wedges, if we were to do it accurately in a way that preserves their area, they don't look like triangles, they should bulge outward. And if you want to see this, let's think really critically about just one particular one of those wedges on the sphere, and ask yourself, how does the width across that wedge, this little portion of a line of latitude, vary as you go up and down the wedge? In particular, if you consider the angle phi from the z-axis down to a point on this wedge as we walk down it, what's the length of that width as a function of phi? For those of you curious about the details of these sorts of things, you'd start off by drawing this line up here from the z-axis to a point on the wedge. Its length will be the radius of the sphere r times the sine of this angle. That lets us deduce how long the total line of latitude is, where we're sitting. It'll basically be 2 pi times that radial line, 2 pi r sine of phi, and then the width of the wedge that we care about is just some constant proportion of that full line of latitude. Now the details don't matter too much, the one thing I want you to notice is that this is not a linear relationship. As you walk from the top of that wedge down to the bottom, letting phi range from zero up to pi halves, the width of the wedge doesn't grow linearly, instead it grows according to a sine curve. And so, when we're unwrapping all of these wedges, if we want those widths to be preserved, they should end up a little bit chubbier around the base, their side lengths are not linear. What this means is when we try to interlace all of the wedges from northern hemisphere with those from the southern, there's a meaningful amount of overlap between those non-linear edges, and we can't wave our hands about a limiting argument. This is an overlap that persists as you take finer and finer subdivisions. 
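As a sanity check on that claim (a sketch of my own, not from the video), we can integrate the actual width profile numerically: each line of latitude at polar angle phi has circumference 2 pi r sin(phi), and summing those widths over the pole-to-pole walk recovers the true 4 pi r squared, while treating the wedges as straight-edged triangles amounts to the false pi squared r squared rectangle.

```python
import math

def sphere_area_from_widths(r=1.0, n=100_000):
    """Sum the latitude-circle widths 2*pi*r*sin(phi) over small steps
    of arc length r*dphi as phi runs from 0 to pi (pole to pole)."""
    dphi = math.pi / n
    return sum(2 * math.pi * r * math.sin((k + 0.5) * dphi) * r * dphi
               for k in range(n))

def false_rectangle_area(r=1.0):
    """What the fake proof computes: a (pi/2 * r) by (2 * pi * r)
    rectangle, i.e. pretending each wedge's width grows linearly in phi."""
    return (math.pi / 2) * r * (2 * math.pi * r)
```

The gap between the two answers, 4 pi r squared versus pi squared r squared, is exactly the overlap of those non-linear, sine-shaped edges.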
And ultimately, it's that overlap that accounts for the difference between our false answer with pi squared and the true answer that has 4 pi. It reminds me of one of those rearrangement puzzles, where you have a number of pieces and just by moving them around, you can seemingly create area out of nowhere. For example, right now I've arranged all these pieces to form a triangle, except it's missing two units of area in the middle. And now I want you to focus on the vertices of that triangle, these white dots. Those don't move, I'm not pulling any trickery with that, but I can rearrange all of the pieces back to how they originally were, so that those two units of area in the middle seem to disappear. All the constituent parts remain the same, the triangle that they form remains the same, and yet two units of area seem to appear out of nowhere. If you've never seen this one before, by the way, I highly encourage you to pause and try to think it through. It's a very fun little puzzle. The answer starts to reveal itself if we carefully draw the edges of this triangle and zoom in close enough to see that our pieces don't actually fit inside the triangle, they bulge out ever so slightly. Or at least, arranged like this, they bulge out ever so slightly. When we rearrange them and we zoom back in, we can see that they dent inward ever so slightly. And that very subtle difference between the bulge out and the dent inward accounts for all of the difference in area. The slope of the edge of this blue triangle works out to be 5 divided by 2, whereas the slope of the edge of this red triangle works out to be 7 divided by 3. Those numbers are close enough to look similar as slopes, but they allow for this denting inward and the bulging outward. You have to be wary of lines that are made to look straight when you haven't had explicit confirmation that they actually are straight. 
One quick added comment on the sphere, the fundamental issue here is that the geometry of a curved surface is fundamentally different from the geometry of flat space. The relevant search term here would be Gaussian curvature. You can't flatten things out from a sphere without losing geometric information. Now when you do see limiting arguments that relate to little pieces on a sphere, that somehow get flattened out and are reasoned through there, those only can work if the limiting pieces that you're talking about get smaller in both directions. It's only when you zoom in close to a curved surface that it appears locally flat. The issue with our orange wedge argument is that our pieces never got exposed to that local flatness because they only got thin in one direction, they maintain the curvature in that other direction. Now on the topic of the subtlety of limiting arguments, let's turn back to our limit of jagged curves that approaches the smooth circular curve. As I said, the limiting curve really is a circle and the limiting value for the length of your approximations really is 8. Here, the basic issue is that there is no reason to expect that the limit of the lengths of the curves is the same as the length of the limits of the curves. And in fact, this is a nice counter example to show why that's not the case. The real point of this example is not the fear that anyone is ever going to believe that it shows that pi is equal to 4. Instead, it shows why care is required in other cases where people apply limiting arguments. For example, this happens all throughout calculus. It is the heart of calculus, where say you want to know the area under a given curve, the way we typically think about it, is to approximate that with a set of rectangles because those are the things we know how to compute the areas of. You just take the base times height in each case. 
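That rectangle-sum idea can be sketched in a few lines (my own illustration, using f(x) = x squared as a stand-in curve whose exact area on [0, 1] is 1/3):

```python
def riemann_area(f, a, b, n):
    """Left-endpoint rectangle approximation of the area under f on [a, b]:
    n rectangles, each with base (b - a) / n and height f(left endpoint)."""
    dx = (b - a) / n
    return sum(f(a + k * dx) * dx for k in range(n))

f = lambda x: x * x          # exact area under x^2 on [0, 1] is 1/3
coarse = riemann_area(f, 0.0, 1.0, 10)
fine = riemann_area(f, 0.0, 1.0, 100_000)
```

For a monotone f, the total error is bounded by |f(b) - f(a)| times the rectangle width, which is exactly the stack of red rectangles described next, and it shrinks to zero as n grows.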
Now this is a very jagged approximation, but the thought, or I guess the hope, is that as you take a finer and finer subdivision into thinner and thinner rectangles, the sums of those areas approaches the thing we actually care about. If you want to make it rigorous, you have to be explicit about the error between these approximations and the true thing we care about, the area under this curve. For example, you might start your argument by saying that that error has to be strictly less than the area of these red rectangles. Essentially, the deviation between the curve and our approximating rectangles sits strictly inside that red region. And then, what you would want to argue is that in this limiting process, the cumulative area of all of those red rectangles has to approach zero. Now as to the final example, our proofs that all triangles are isosceles, let me show you what it looks like if I'm a little bit more careful about actually constructing the angle bisector rather than just eyeballing it. When I do that, the relevant intersection point actually sits outside of the triangle. And then from there, if I go through everything that we did in the original argument, drawing the relevant perpendicular lines, all of that, every triangle that I claimed was congruent really is congruent. All of those were genuinely true. And the corresponding lengths of those triangles that I claimed were the same really are the same. The one place where the proof breaks down is at the very end when I said that the full side length AC was equal to AE plus EC. That was only true under the hidden assumption that that point E sat in between them. But in reality, for many triangles, that point would sit outside of those two. It's pretty subtle, isn't it? 
The point in all of this is that while visual intuition is great, and visual proofs often give you a nice way of elucidating what's going on with otherwise opaque rigor, visual arguments and snazzy diagrams will never obviate the need for critical thinking. In math, you cannot escape the need to look out for hidden assumptions and edge cases.
Dot products and duality | Chapter 9, Essence of linear algebra
Traditionally, dot products are something that's introduced really early on in a linear algebra course, typically right at the start. So it might seem strange that I've pushed them back this far in the series. I did this because there's a standard way to introduce the topic, which requires nothing more than a basic understanding of vectors, but a fuller understanding of the role that dot products play in math can only really be found under the light of linear transformations. Before that though, let me just briefly cover the standard way that dot products are introduced, which I'm assuming is at least partially review for a number of viewers. Numerically, if you have two vectors of the same dimension, two lists of numbers with the same lengths, taking their dot product means pairing up all of the coordinates, multiplying those pairs together, and adding the result. So the vector 1, 2 dotted with 3, 4 would be 1 times 3 plus 2 times 4. The vector 6, 2, 8, 3 dotted with 1, 8, 5, 3 would be 6 times 1 plus 2 times 8 plus 8 times 5 plus 3 times 3. Luckily, this computation has a really nice geometric interpretation. To think about the dot product between two vectors, V and W, imagine projecting W onto the line that passes through the origin and the tip of V. Multiplying the length of this projection by the length of V, you have the dot product V dot W. Except when this projection of W is pointing in the opposite direction from V, that dot product will actually be negative. So when two vectors are generally pointing in the same direction, their dot product is positive. When they're perpendicular, meaning the projection of one onto the other is the zero vector, their dot product is zero, and if they point in generally the opposite direction, their dot product is negative. Now, this interpretation is weirdly asymmetric. It treats the two vectors very differently. So when I first learned this, I was surprised that order doesn't matter. 
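In code, that numerical recipe is a one-liner (a minimal sketch of my own, not anything specific to the video):

```python
def dot(v, w):
    """Pair up coordinates, multiply each pair, and add the results."""
    assert len(v) == len(w), "dot products need equal-length vectors"
    return sum(a * b for a, b in zip(v, w))
```

So dot([1, 2], [3, 4]) is 1*3 + 2*4 = 11, and dot([6, 2, 8, 3], [1, 8, 5, 3]) is 6 + 16 + 40 + 9 = 71, matching the two worked examples above.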
You could instead project V onto W, multiply the length of the projected V by the length of W, and get the same result. I mean, doesn't that feel like a really different process? Here's the intuition for why order doesn't matter. If V and W happened to have the same length, we could leverage some symmetry. Since projecting W onto V, then multiplying the length of that projection by the length of V is a complete mirror image of projecting V onto W, then multiplying the length of that projection by the length of W. Now, if you scale one of them, say V by some constant like two, so that they don't have equal length, the symmetry is broken. But let's think through how to interpret the dot product between this new vector, two times V, and W. If you think of W as getting projected onto V, then the dot product, two V dot W, will be exactly twice the dot product V dot W. This is because when you scale V by two, it doesn't change the length of the projection of W, but it doubles the length of the vector that you're projecting onto. But on the other hand, let's say you were thinking about V getting projected onto W. Well, in that case, the length of the projection is the thing to get scaled when we multiply V by two, but the length of the vector that you're projecting onto stays constant. So the overall effect is still to just double the dot product. So even though symmetry is broken in this case, the effect that this scaling has on the value of the dot product is the same under both interpretations. There's also one other big question that confused me when I first learned this stuff. Why on earth does this numerical process of matching coordinates multiplying pairs and adding them together have anything to do with projection? Well, to give a satisfactory answer, and also to do full justice to the significance of the dot product, we need to unearth something a little bit deeper going on here, which often goes by the name duality. 
But before getting into that, I need to spend some time talking about linear transformations from multiple dimensions to one dimension, which is just the number line. These are functions that take in a 2D vector and spit out some number. But linear transformations are, of course, much more restricted than your run-of-the-mill function with a 2D input and a 1D output. As with transformations in higher dimensions, like the ones I talked about in chapter 3, there are some formal properties that make these functions linear. But I'm going to purposefully ignore those here so as to not distract from our end goal, and instead focus on a certain visual property that's equivalent to all the formal stuff. If you take a line of evenly spaced dots and apply a transformation, a linear transformation will keep those dots evenly spaced once they land in the output space, which is the number line. Otherwise, if there's some line of dots that gets unevenly spaced, then your transformation is not linear. As with the cases we've seen before, one of these linear transformations is completely determined by where it takes i-hat and j-hat. But this time, each one of those basis vectors just lands on a number. So when we record where they land as the columns of a matrix, each of those columns just has a single number. This is a 1 by 2 matrix. Let's walk through an example of what it means to apply one of these transformations to a vector. Let's say you have a linear transformation that takes i-hat to 1 and j-hat to negative 2. To follow where a vector with coordinates, say, 4, 3 ends up, think of breaking up this vector as 4 times i-hat plus 3 times j-hat. A consequence of linearity is that after the transformation, the vector will be 4 times the place where i-hat lands, 1, plus 3 times the place where j-hat lands, negative 2, which in this case implies that it lands on negative 2. When you do this calculation purely numerically, it's matrix vector multiplication. 
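That last computation is small enough to spell out (a sketch of my own): the 1 by 2 matrix just records where i-hat and j-hat land, each as a single number.

```python
def apply_1x2(matrix, vec):
    """Apply a linear map from 2D to the number line. `matrix` is the
    pair (where i-hat lands, where j-hat lands); linearity means the
    vector (x, y) lands on x * (i-hat's landing) + y * (j-hat's landing)."""
    i_lands, j_lands = matrix
    x, y = vec
    return x * i_lands + y * j_lands

# the example from the text: i-hat goes to 1, j-hat goes to negative 2
result = apply_1x2((1, -2), (4, 3))  # 4*1 + 3*(-2) = -2
```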
Now, this numerical operation of multiplying a 1 by 2 matrix by a vector feels just like taking the dot product of 2 vectors. Doesn't that 1 by 2 matrix just look like a vector that we tipped on its side? In fact, we could say right now that there's a nice association between 1 by 2 matrices and 2D vectors, defined by tilting the numerical representation of a vector on its side to get the associated matrix, or to tip the matrix back up to get the associated vector. Since we're just looking at numerical expressions right now, going back and forth between vectors and 1 by 2 matrices might feel like a silly thing to do. But this suggests something that's truly awesome from the geometric view. There's some kind of connection between linear transformations that take vectors to numbers and vectors themselves. Let me show an example that clarifies the significance, and which just so happens to also answer the dot product puzzle from earlier. Unlearn what you have learned, and imagine that you don't already know that the dot product relates to projection. What I'm going to do here is take a copy of the number line and place it diagonally in space somehow with the number 0 sitting at the origin. Now, think of the two-dimensional unit vector whose tip sits where the number 1 on the number line is. I want to give that guy a name, U hat. This little guy plays an important role in what's about to happen, so just keep him in the back of your mind. If we project 2D vectors straight onto this diagonal number line, in effect, we've just defined a function that takes 2D vectors to numbers. What's more, this function is actually linear since it passes our visual test that any line of evenly spaced dots remains evenly spaced once it lands on the number line. Just to be clear, even though I've embedded the number line in 2D space like this, the outputs of the function are numbers, not 2D vectors. 
You should think of a function that takes in 2 coordinates and outputs a single coordinate. But that vector U hat is a 2-dimensional vector living in the input space. It's just situated in such a way that overlaps with the embedding of the number line. With this projection, we just defined a linear transformation from 2D vectors to numbers, so we're going to be able to find some kind of 1 by 2 matrix that describes that transformation. To find that 1 by 2 matrix, let's zoom in on this diagonal number line setup and think about where i hat and j hat each land, since those landing spots are going to be the columns of the matrix. This part's super cool. We can reason through it with a really elegant piece of symmetry. Since i hat and U hat are both unit vectors, projecting i hat onto the line passing through U hat looks totally symmetric to projecting U hat onto the x axis. So when we ask what number does i hat land on when it gets projected, the answer is going to be the same as whatever U hat lands on when it's projected onto the x axis. But projecting U hat onto the x axis just means taking the x coordinate of U hat. So by symmetry, the number where i hat lands when it's projected onto that diagonal number line is going to be the x coordinate of U hat. Isn't that cool? The reasoning is almost identical for the j hat case. Think about it for a moment. For all the same reasons, the y coordinate of U hat gives us the number where j hat lands when it's projected onto the number line copy. Pause and ponder that for a moment. I just think that's really cool. So the entries of the 1 by 2 matrix describing the projection transformation are going to be the coordinates of U hat. And computing this projection transformation for arbitrary vectors in space, which requires multiplying that matrix by those vectors, is computationally identical to taking a dot product with U hat. 
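We can verify that correspondence numerically without assuming it (a sketch of my own): compute the projection purely geometrically, with angles and lengths, and compare it to the dot product with U hat.

```python
import math

def projection_coordinate(v, u_angle):
    """Signed coordinate of v's projection onto the line through the
    origin at angle u_angle -- computed purely geometrically, as
    |v| * cos(angle between v and the line). No dot products used."""
    v_angle = math.atan2(v[1], v[0])
    return math.hypot(v[0], v[1]) * math.cos(v_angle - u_angle)

u_angle = 0.9                                   # any diagonal line works
u_hat = (math.cos(u_angle), math.sin(u_angle))  # the unit vector on it
v = (3.0, -2.0)

geometric = projection_coordinate(v, u_angle)
algebraic = v[0] * u_hat[0] + v[1] * u_hat[1]   # dot product with u-hat
```

The two agree to floating-point precision, which is the duality claim in miniature: the 1 by 2 matrix of the projection transformation has exactly the coordinates of U hat as its entries.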
This is why taking the dot product with a unit vector can be interpreted as projecting a vector onto the span of that unit vector and taking the length. So what about non-unit vectors? For example, let's say we take that unit vector U hat, but we scale it up by a factor of 3. Numerically, each of its components gets multiplied by 3. So looking at the matrix associated with that vector, it takes i hat and j hat to 3 times the values where they landed before. Since this is all linear, it implies more generally that the new matrix can be interpreted as projecting any vector onto the number line copy and multiplying where it lands by 3. This is why the dot product with a non-unit vector can be interpreted as first projecting onto that vector, then scaling up the length of that projection by the length of the vector. Take a moment to think about what happened here. We had a linear transformation from 2D space to the number line, which was not defined in terms of numerical vectors or numerical dot products, it was just defined by projecting space onto a diagonal copy of the number line. But because the transformation is linear, it was necessarily described by some 1 by 2 matrix. And since multiplying a 1 by 2 matrix by a 2D vector is the same as turning that matrix on its side and taking a dot product, this transformation was, basically, related to some 2D vector. The lesson here is that any time you have one of these linear transformations whose output space is the number line, no matter how it was defined, there's going to be some unique vector v corresponding to that transformation. In the sense that applying the transformation is the same thing as taking a dot product with that vector. To me, this is utterly beautiful. It's an example of something in math called duality. Duality shows up in many different ways and forms throughout math, and it's super tricky to actually define. 
Loosely speaking, it refers to situations where you have a natural but surprising correspondence between two types of mathematical thing. For the linear algebra case that you just learned about, you'd say that the dual of a vector is the linear transformation that it encodes. And the dual of a linear transformation from some space to one dimension is a certain vector in that space. So to sum up, on the surface, the dot product is a very useful geometric tool for understanding projections, and for testing whether or not vectors tend to point in the same direction. And that's probably the most important thing for you to remember about the dot product. But at a deeper level, dotting two vectors together is a way to translate one of them into the world of transformations. Again, numerically this might feel like a silly point to emphasize. It's just two computations that happen to look similar. But the reason I find this so important is that throughout math, when you're dealing with a vector, once you really get to know its personality, sometimes you realize that it's easier to understand it not as an arrow in space, but as the physical embodiment of a linear transformation. It's as if the vector is really just a conceptual shorthand for a certain transformation, since it's easier for us to think about arrows in space rather than moving all of that space to the number line. In the next video, you'll see another really cool example of this duality in action, as I talk about the cross product....
But what is a convolution?
Suppose I give you two different lists of numbers, or maybe two different functions, and I ask you to think of all the ways you might combine those two lists to get a new list of numbers, or combine the two functions to get a new function. Maybe one simple way that comes to mind is to simply add them together term by term; likewise, with the functions, you can add all the corresponding outputs. In a similar vein, you could also multiply the two lists term by term, and do the same thing with the functions. But there's another kind of combination, just as fundamental as both of those, but a lot less commonly discussed, known as a convolution. But unlike the previous two cases, it's not something that's merely inherited from an operation you can do to numbers. It's something genuinely new for the context of lists of numbers or combining functions. Convolutions show up all over the place: they are ubiquitous in image processing, a core construct in the theory of probability, they're used a lot in solving differential equations, and one context where you've almost certainly seen them, if not by this name, is multiplying two polynomials together. As someone in the business of visual explanations, this is an especially great topic, because the formulaic definition, in isolation and without context, can look kind of intimidating, but if we take the time to really unpack what it's saying, and before that, actually motivate why you would want something like this, it's an incredibly beautiful operation. And I have to admit, I actually learned a little something while putting together the visuals for this project. In the case of convolving two different functions, I was trying to think of different ways you might picture what that could mean. And with one of them, I had a little bit of an aha moment for why it is that normal distributions play the role that they do in probability, why it's such a natural shape for a function. 
But I'm getting ahead of myself; there's a lot of setup for that one. In this video, our primary focus is just going to be on the discrete case, and in particular, building up to a very unexpected but very clever algorithm for computing these. And I'll pull out the discussion for the continuous case into a second part. It's very tempting to open up with the image processing examples, since they're visually the most intriguing, but there are a couple bits of finickiness that make the image processing case less representative of convolutions overall. So instead, let's kick things off with probability, and in particular, one of the simplest examples that I'm sure everyone here has thought about at some point in their life, which is rolling a pair of dice and figuring out the chances of seeing various different sums. And you might say, not a problem. Each of your two dice has six different possible outcomes, which gives us a total of 36 distinct possible pairs of outcomes. And if we just look through them all, we can count up how many pairs have a given sum. And arranging all the pairs in a grid like this, one pretty nice thing is that all of the pairs that have a constant sum are visible along one of these different diagonals. So simply counting how many exist on each of those diagonals will tell you how likely you are to see a particular sum. And I'd say, very good. But can you think of any other ways that you might visualize the same question? Other images that can come to mind to think of all the distinct pairs that have a given sum? And maybe one of you raises your hand and says, yeah, I've got one. Let's say you picture these two different sets of possibilities each in a row, but you flip around that second row. That way, all of the different pairs which add up to seven line up vertically like this.
And if we slide that bottom row all the way to the right, then the unique pair that adds up to two, the snake eyes, are the only ones that align. And if I slide that over one unit to the right, the pairs which align are the two different pairs that add up to three. And in general, different offset values of this lower array, which remember I had to flip around first, reveal all the distinct pairs that have a given sum. As far as probability questions go, this still isn't especially interesting, because all we're doing is counting how many outcomes there are in each of these categories. But that is with the implicit assumption that there's an equal chance for each of these faces to come up. But what if I told you I have a special set of dice that's not uniform? Maybe the blue die has its own set of numbers describing the probabilities for each face coming up, and the red die has its own distinct set of numbers. In that case, if you wanted to figure out, say, the probability of seeing a two, you would multiply the probability that the blue die is a one times the probability that the red die is a one. And for the chances of seeing a three, you look at the two distinct pairs where that's possible, and again multiply the corresponding probabilities and then add those two products together. Similarly, the chances of seeing a four involves multiplying together three different pairs of possibilities and adding them all together. And in the spirit of setting up some formulas, let's name these top probabilities a1, a2, a3, and so on, and name the bottom ones b1, b2, b3, and so on. And in general, this process where we're taking two different arrays of numbers, flipping the second one around, and then lining them up at various different offset values, taking a bunch of pairwise products and adding them up, that's one of the fundamental ways to think about what a convolution is.
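As a concrete sketch of this weighted-dice computation, here's what it might look like in Python. The face probabilities below are made up for illustration; the key point is that the distribution over sums is exactly the convolution of the two lists.

```python
import numpy as np

# Hypothetical non-uniform probabilities for faces 1 through 6
# of the blue die and the red die; each list sums to 1.
blue = np.array([0.10, 0.15, 0.20, 0.25, 0.20, 0.10])
red = np.array([0.30, 0.10, 0.10, 0.10, 0.10, 0.30])

# The flip-and-slide process of pairwise products is a convolution,
# giving the probabilities for the sums 2 through 12.
sums = np.convolve(blue, red)

# P(sum = 2) is a1 * b1 = 0.10 * 0.30
print(sums[0])
```

Since each input list sums to 1, the eleven output values also sum to 1, as a probability distribution should.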
So just to spell it out a little more exactly, through this process, we just generated probabilities for seeing two, three, four, on and on up to 12, and we got them by mixing together one list of values, a, and another list of values, b. In the lingo, we'd say the convolution of those two sequences gives us this new sequence, the new sequence of 11 values, each of which looks like some sum of pairwise products. If you prefer, another way you could think about the same operation is to first create a table of all the pairwise products, and then add up along all these diagonals. Again, that's a way of mixing together these two sequences of numbers to get us a new sequence of 11 numbers. It's the same operation as the sliding windows thought, just another perspective. Putting a little notation to it, here's how you might see it written down. The convolution of a and b, denoted with this little asterisk, is a new list, and the nth element of that list looks like a sum, and that sum goes over all different pairs of indices, i and j, so that the sum of those indices is equal to n. It's kind of a mouthful, but for example, if n was 6, the pairs we're going over are 1 and 5, 2 and 4, 3 and 3, 4 and 2, 5 and 1, all the different pairs that add up to 6. But honestly, however you write it down, the notation is secondary in importance to the visual you might hold in your head for the process. Here, maybe it helps to do a super simple example, where I might ask you, what's the convolution of the list 1, 2, 3 with the list 4, 5, 6? You might picture taking both of these lists, flipping around that second one, and then starting with it slid all the way over to the left. Then the pair of values which align are 1 and 4; multiply them together, and that gives us the first term of our output. Slide that bottom array one unit to the right, and the pairs which align are 1 and 5, and 2 and 4; multiply those pairs, add them together, and that gives us 13, the next entry in our output.
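That sum, where the nth output adds up a[i] times b[j] over all index pairs with i + j equal to n, translates almost word for word into code. A minimal from-scratch sketch:

```python
def convolve(a, b):
    # The nth output is the sum of a[i] * b[j] over all pairs with
    # i + j == n, the same as the flip-and-slide picture above.
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

print(convolve([1, 2, 3], [4, 5, 6]))  # [4, 13, 28, 27, 18]
```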
Slide things over once more, and we'll take 1 times 6 plus 2 times 5 plus 3 times 4, which happens to be 28. One more slide and we get 2 times 6 plus 3 times 5, and that gives us 27. And finally, the last term will look like 3 times 6. If you'd like, you can pull up whatever your favorite programming language is, and a favorite library that includes various numerical operations, and you can confirm I'm not lying to you. If you take the convolution of 1, 2, 3 against 4, 5, 6, this is indeed the result that you'll get. We've seen one case where this is a natural and desirable operation, adding up two probability distributions, and another common example would be a moving average. Imagine you have some long list of numbers, and you take another smaller list of numbers that all add up to 1. In this case, I just have a little list of 5 values, and they're all equal to 1/5. Then if we do this sliding window convolution process, and kind of close our eyes and sweep under the rug what happens at the very beginning of it, once our smaller list of values entirely overlaps with the bigger one, think about what each term in this convolution really means. At each iteration, what you're doing is multiplying each of the values from your data by 1/5 and adding them all together, which is to say you're taking an average of your data inside this little window. Overall, the process gives you a smoothed out version of the original data, and you could modify this, starting with a different little list of numbers, and as long as that little list all adds up to 1, you can still interpret it as a moving average. In the example shown here, that moving average would be giving more weight towards the central value. This also results in a smoothed out version of the data. If you do kind of a two-dimensional analog of this, it gives you a fine algorithm for blurring a given image.
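The moving-average idea can be sketched with NumPy. The noisy data here is arbitrary, and the little list of five 1/5 values is the averaging window; using mode="valid" keeps only the positions where the window fully overlaps the data, which is the sweep-the-edges-under-the-rug part.

```python
import numpy as np

# Arbitrary noisy data: a sine wave plus random noise.
rng = np.random.default_rng(0)
data = np.sin(np.linspace(0, 4 * np.pi, 200)) + 0.3 * rng.standard_normal(200)

# A small kernel whose entries sum to 1, so each output is an average.
kernel = np.full(5, 1 / 5)

# "valid" keeps only positions where the window fully overlaps the data.
smoothed = np.convolve(data, kernel, mode="valid")

print(len(smoothed))  # 200 - 5 + 1 = 196
```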
And I should say the animations I'm about to show are modified from something I originally made for part of a set of lectures I did with the Julia Lab at MIT, for a certain open courseware class that included an image processing unit. There we did a little bit more to dive into the code behind all of this, so if you're curious, I'll leave you some links. But focusing back on this blurring example, what's going on is I've got this little 3x3 grid of values that's marching along our original image, and if we zoom in, each one of those values is 1/9. And what I'm doing at each iteration is multiplying each of those values by the corresponding pixel that it sits on top of. And of course, in computer science, we think of colors as little vectors of three values, representing the red, green, and blue components. When I multiply all these little values by 1/9 and I add them together, it gives us an average along each color channel, and the corresponding pixel for the image on the right is defined to be that sum. The overall effect, as we do this for every single pixel on the image, is that each one kind of bleeds into all of its neighbors, which gives us a blurrier version of the original. In the lingo, we'd say that the image on the right is a convolution of our original image with the little grid of values. Or, more technically, maybe I should say that it's the convolution with a 180-degree rotated version of that little grid of values. Not that it matters when the grid is symmetric, but it's just worth keeping in mind that the definition of a convolution, as inherited from the pure math context, should always invite you to think about flipping around that second array. If we modify this slightly, we can get a much more elegant blurring effect by choosing a different grid of values. In this case, I have a little 5x5 grid, but the distinction is not so much its size.
If we zoom in, we notice that the value in the middle is a lot bigger than the values towards the edges. And where this is coming from is that they're all sampled from a bell curve, known as a Gaussian distribution. That way, when we multiply all of these values by the corresponding pixel that they're sitting on top of, we're giving a lot more weight to that central pixel, and much less towards the ones out at the edge. And just as before, the corresponding pixel on the right is defined to be this sum. As we do this process for every single pixel, it gives a blurring effect, which much more authentically simulates the notion of putting your lens out of focus, or something like that. But blurring is far from the only thing that you can do with this idea. For instance, take a look at this little grid of values, which involves some positive numbers on the left and some negative numbers on the right, which I'll color with blue and red respectively. Take a moment to see if you can predict and understand what effect this will have on the final image. So in this case, I'll just be thinking of the image as grayscale instead of colored, so each of the pixels is just represented by one number instead of three. And one thing worth noticing is that as we do this convolution, it's possible to get negative values. For example, at this point here, if we zoom in, the left half of our little grid sits entirely on top of black pixels, which would have a value of zero, but the right half, with its negative values, sits on top of white pixels, which have a value of one. So when we multiply corresponding terms and add them together, the result will be very negative. And the way I'm displaying this with the image on the right is to color negative values red and positive values blue. Another thing to notice is that when you're on a patch that's all the same color, everything goes to zero, since the sum of the values in our little grid is zero.
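Stepping back to the blurring examples for a moment, here's a NumPy-only sketch of a Gaussian blur on a grayscale image. The 5x5 kernel size, the bell-curve sampling, and the random image are all arbitrary choices for illustration; the essential property is that the kernel values sum to 1, so each output pixel is a weighted average.

```python
import numpy as np

# Sample a bell curve on a 5x5 grid and normalize so the weights sum to 1.
x = np.arange(-2, 3)
g = np.exp(-x**2 / 2.0)
kernel = np.outer(g, g)
kernel /= kernel.sum()

def blur(image, kernel):
    """2D convolution of a grayscale image with a kernel, 'valid' region only."""
    k = kernel[::-1, ::-1]  # flip, per the pure-math definition
    kh, kw = k.shape        # (no effect here: a Gaussian kernel is symmetric)
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
    return out

image = np.random.default_rng(1).random((32, 32))
blurred = blur(image, kernel)
print(blurred.shape)  # (28, 28)
```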
This is very different from the previous two examples, where the sum of our little grid was one, which let us interpret it as a moving average, and hence a blur. All in all, this little process basically detects wherever there's variation in the pixel value as you move from left to right, and so it gives you a kind of way to pick up on all the vertical edges from your image. And similarly, if we rotated that grid around so that it varies as you move from the top to the bottom, this will be picking up on all the horizontal edges, which in the case of our little pi creature image does result in some pretty demonic eyes. This smaller grid, by the way, is often called a kernel, and the beauty here is how, just by choosing a different kernel, you can get different image processing effects, not just blurring or edge detection, but also things like sharpening. For those of you who have heard of a convolutional neural network, the idea there is to use data to figure out what the kernels should be in the first place, as determined by whatever the neural network wants to detect. Another thing I should maybe bring up is the length of the output. For something like the moving average example, you might only want to think about the terms when both of the windows fully align with each other. Or in the image processing example, maybe you want the final output to have the same size as the original. Now, convolutions as a pure math operation always produce an array that's bigger than the two arrays that you started with, at least assuming neither of them has a length of one. Just know that in certain computer science contexts, you often want to deliberately truncate that output.
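The edge-detection idea can also be sketched directly. The kernel values below are made up for illustration: positive on the left, negative on the right, summing to zero, applied to a synthetic image that's black on the left half and white on the right. Flat patches produce zero, and only windows straddling the edge produce a nonzero response.

```python
import numpy as np

# A hypothetical vertical-edge kernel: positive left, negative right, sum zero.
kernel = np.array([[0.25, 0.0, -0.25],
                   [0.50, 0.0, -0.50],
                   [0.25, 0.0, -0.25]])

# Synthetic grayscale image: black (0) on the left half, white (1) on the right.
image = np.zeros((8, 8))
image[:, 4:] = 1.0

def conv2d_valid(image, kernel):
    k = kernel[::-1, ::-1]  # 180-degree flip, per the math definition
    kh, kw = k.shape
    h, w = image.shape
    return np.array([[np.sum(image[i:i + kh, j:j + kw] * k)
                      for j in range(w - kw + 1)]
                     for i in range(h - kh + 1)])

response = conv2d_valid(image, kernel)
print(response[0])  # zero on the flat patches, nonzero only near the edge
```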
Another thing worth highlighting is that in the computer science context, this notion of flipping around that kernel before you let it march across the original often feels really weird and just uncalled for, but again, note that that's what's inherited from the pure math context, where, like we saw with the probabilities, it's an incredibly natural thing to do. And actually, I can show you one more pure math example where even the programmers should care about this one, because it opens the doors for a much faster algorithm to compute all of these. To set up what I mean by faster here, let me go back and pull up some Python again, and I'm going to create two different relatively big arrays. Each one will have 100,000 random elements in it, and I'm going to assess the runtime of the convolve function from the numpy library. And in this case, it runs it for multiple different iterations, tries to find an average, and it looks like, on this computer at least, it averages at 4.87 seconds. By contrast, if I use a different function from the scipy library, called fftconvolve, which is the same thing just implemented differently, that only takes 4.3 milliseconds on average, so three orders of magnitude improvement. And again, even though it flies under a different name, it's giving the same output that the other convolve function does; it's just going about it in a cleverer way. Remember how with the probability example, I said another way you could think about the convolution was to create this table of all the pairwise products, and then add up those pairwise products along the diagonals. There's of course nothing specific to probability here; any time you're convolving two different lists of numbers, you can think about it this way. Create this kind of multiplication table with all pairwise products, and then each sum along the diagonals corresponds to one of your final outputs.
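The timing experiment can be reproduced along these lines. This assumes scipy is installed; the array length is scaled down from 100,000 so it finishes quickly, and the exact timings will of course vary by machine. The point is the asymptotic gap between the two approaches.

```python
import timeit

import numpy as np
from scipy.signal import fftconvolve  # assumes scipy is available

rng = np.random.default_rng(2)
n = 10_000  # scaled down from 100,000 so this runs quickly
a = rng.random(n)
b = rng.random(n)

# Direct convolution versus the FFT-based implementation.
t_direct = timeit.timeit(lambda: np.convolve(a, b), number=1)
t_fft = timeit.timeit(lambda: fftconvolve(a, b), number=1)
print(f"direct: {t_direct:.4f}s, fft-based: {t_fft:.4f}s")

# Same output either way, up to floating-point error.
same = np.allclose(np.convolve(a, b), fftconvolve(a, b))
print(same)
```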
One context where this view is especially natural is when you multiply together two polynomials. For example, let me take the little grid we already have, and replace the top terms with 1, 2x, and 3x squared, and replace the other terms with 4, 5x, and 6x squared. Now, think about what it means when we're creating all of these different pairwise products between the two lists. What you're doing is essentially expanding out the full product of the two polynomials I have written down, and then when you add up along the diagonal, that corresponds to collecting all like terms, which is pretty neat, expanding a polynomial and collecting like terms is exactly the same process as a convolution. But this allows us to do something that's pretty cool, because think about what we're saying here, we're saying if you take two different functions, and you multiply them together, which is a simple pointwise operation, that's the same thing as if you had first extracted the coefficients from each one of those, assuming they're polynomials, and then taken a convolution of those two lists of coefficients. What makes that so interesting is that convolutions feel, in principle, a lot more complicated than simple multiplication. And I don't just mean conceptually they're harder to think about. I mean, computationally, it requires more steps to perform a convolution than it does to perform a pointwise product of two different lists. For example, let's say I gave you two really big polynomials, say each one with a hundred different coefficients. Then if the way you multiply them was to expand out this product, you know, filling in this entire 100 by 100 grid of pairwise products, that would require you to perform 10,000 different products. And then, when you're collecting all the like terms along the diagonals, that's another set of around 10,000 operations. 
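To see the polynomial connection in code, here's a small check that expanding the product of 1 + 2x + 3x^2 and 4 + 5x + 6x^2 and collecting like terms gives the same coefficients as convolving the two coefficient lists:

```python
import numpy as np

# Coefficients in increasing order of degree:
# p(x) = 1 + 2x + 3x^2,  q(x) = 4 + 5x + 6x^2
p = [1, 2, 3]
q = [4, 5, 6]

# Expanding the product and collecting like terms is a convolution.
product_coeffs = np.convolve(p, q)
print(product_coeffs)  # [ 4 13 28 27 18]

# Sanity check at an arbitrary input: multiplying the two polynomials
# pointwise agrees with the polynomial built from the convolved coefficients.
x = 1.7
lhs = sum(c * x**k for k, c in enumerate(p)) * sum(c * x**k for k, c in enumerate(q))
rhs = sum(c * x**k for k, c in enumerate(product_coeffs))
print(abs(lhs - rhs))
```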
More generally, in the lingo, we'd say the algorithm is O of n squared, meaning for two lists of size n, the way that the number of operations scales is in proportion to the square of n. On the other hand, if I think of two polynomials in terms of their outputs, for example, sampling their values at some handful of inputs, then multiplying them only requires as many operations as the number of samples, since again, it's a pointwise operation. And with polynomials, you only need finitely many samples to be able to recover the coefficients. For example, two outputs are enough to uniquely specify a linear polynomial, three outputs would be enough to uniquely specify a quadratic polynomial, and in general, if you know n distinct outputs, that's enough to uniquely specify a polynomial that has n different coefficients. Or, if you prefer, we could phrase this in the language of systems of equations. Imagine I tell you I have some polynomial, but I don't tell you what the coefficients are; those are a mystery to you. In our example, you might think of this as the product that we're trying to figure out. And then suppose I say, I'll just tell you what the outputs of this polynomial would be at various different inputs, like 0, 1, 2, 3, on and on, and I give you enough so that you have as many equations as you have unknowns. It even happens to be a linear system of equations, so that's nice. And in principle, at least, this should be enough to recover the coefficients. So the rough algorithm outline, then, would be: whenever you want to convolve two lists of numbers, you treat them like they're coefficients of two polynomials, you sample those polynomials at enough outputs, multiply those samples pointwise, and then solve this system to recover the coefficients, as a sneaky backdoor way to find the convolution. And as I've stated it so far, at least, some of you could rightfully complain, Grant, that is an idiotic plan.
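That sample-then-solve outline can be sketched directly. Here both factor polynomials are evaluated at five inputs, the samples are multiplied pointwise, and a linear (Vandermonde) system is solved to recover the five coefficients of the product, which is exactly the convolution:

```python
import numpy as np

p = np.array([1, 2, 3])  # 1 + 2x + 3x^2, coefficients low degree first
q = np.array([4, 5, 6])  # 4 + 5x + 6x^2

# Sample each polynomial at five inputs and multiply pointwise.
# (np.polyval expects highest-degree coefficients first, hence the flip.)
xs = np.arange(5)
samples = np.polyval(p[::-1], xs) * np.polyval(q[::-1], xs)

# Row i of the Vandermonde matrix is [1, x_i, x_i^2, x_i^3, x_i^4],
# so V @ coeffs = samples is the linear system described above.
V = np.vander(xs, 5, increasing=True)
coeffs = np.linalg.solve(V, samples)
print(coeffs)  # the convolution of p and q, recovered
```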
Because, for one thing, just calculating all these samples for one of the polynomials we know already takes on the order of n squared operations, not to mention, solving that system is certainly going to be computationally as difficult as just doing the convolution in the first place. So, like, sure, we have this connection between multiplication and convolutions, but all of the complexity happens in translating from one viewpoint to the other. But there is a trick. And those of you who know about Fourier transforms and the FFT algorithm might see where this is going. If you're unfamiliar with these topics, what I'm about to say might seem completely out of the blue; just know that there are certain paths you could have walked in math that make this more of an expected step. Basically, the idea is that we have a freedom of choice here. If instead of evaluating at some arbitrary set of inputs, like 0, 1, 2, 3, on and on, you choose to evaluate at a very specially selected set of complex numbers, specifically the ones that sit evenly spaced on the unit circle, what are known as the roots of unity, this gives us a friendlier system. The basic idea is that by finding a number where taking its powers falls into this cycling pattern, it means that the system we generate is going to have a lot of redundancy in the different terms that you're calculating, and by being clever about how you leverage that redundancy, you can save yourself a lot of work.
This set of outputs that I've written has a special name; it's called the discrete Fourier transform of the coefficients. And if you want to learn more, I actually did another lecture for that same Julia MIT class all about discrete Fourier transforms, and there's also a really excellent video on the channel Reducible talking about the fast Fourier transform, which is an algorithm for computing these more quickly. Also, Veritasium recently did a really good video on FFTs, so you've got lots of options. And that fast algorithm really is the point for us. Again, because of all this redundancy, there exists a method to go from the coefficients to all of these outputs where, instead of doing on the order of n squared operations, you do on the order of n times the log of n operations, which is much, much better as you scale to big lists. And, importantly, this FFT algorithm goes both ways; it also lets you go from the outputs to the coefficients. So, bringing it all together, let's look back at our algorithm outline. Now we can say, whenever you're given two long lists of numbers and you want to take their convolution, first compute the fast Fourier transform of each one of them, which, in the back of your mind, you can just think of as treating them like they're the coefficients of a polynomial and evaluating it at a very specially selected set of points. Then multiply together the two results that you just got, pointwise, which is nice and fast, and then do an inverse fast Fourier transform, and what that gives you is the sneaky backdoor way to compute the convolution that we were looking for. But this time, it only involves O of n log n operations. That's really cool to me. This very specific context where convolutions show up, multiplying two polynomials, opens the doors for an algorithm that's relevant everywhere else where convolutions might come up.
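The full algorithm outline fits in a few lines using NumPy's FFT, which evaluates at exactly those roots of unity. A sketch:

```python
import numpy as np

def fft_convolve(a, b):
    """Convolve two sequences in O(n log n) time via the FFT:
    evaluate both 'polynomials' at the roots of unity, multiply
    the samples pointwise, then invert to recover coefficients."""
    n = len(a) + len(b) - 1
    fa = np.fft.fft(a, n)  # zero-pads to length n before transforming
    fb = np.fft.fft(b, n)
    # Pointwise product of the samples, then the inverse transform.
    # The inputs are real, so we discard the tiny imaginary round-off.
    return np.fft.ifft(fa * fb).real

print(np.round(fft_convolve([1, 2, 3], [4, 5, 6]), 6))  # [ 4. 13. 28. 27. 18.]
```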
If you want to add probability distributions, do some large image processing, whatever it might be. And I just think that's such a good example of why you should be excited when you see some operation or concept in math show up in a lot of seemingly unrelated areas. If you want a little homework, here's something that's fun to think about. Explain why, when you multiply two different numbers, just ordinary multiplication the way we all learned in elementary school, what you're doing is basically a convolution between the digits of those numbers. There are some added steps with carries and the like, but the core step is a convolution. In light of the existence of a fast algorithm, what that means is that if you have two very large integers, then there exists a way to find their product that's faster than the method we learned in elementary school, one that instead of requiring O of n squared operations only requires O of n log n, which doesn't even feel like it should be possible. The catch is that before this is actually useful in practice, your numbers would have to be absolutely monstrous. But still, it's cool that such an algorithm exists. Next up, we'll turn our attention to the continuous case, with a special focus on probability distributions.
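For that homework, here's one way the core idea might be sketched, so you can check your reasoning; the carry handling is the "added steps" mentioned above:

```python
def multiply_via_convolution(x, y):
    """Multiply two nonnegative integers by convolving their digit lists
    (least significant digit first), then propagating carries."""
    xd = [int(d) for d in str(x)][::-1]
    yd = [int(d) for d in str(y)][::-1]
    # The core step: a convolution of the two digit lists.
    conv = [0] * (len(xd) + len(yd) - 1)
    for i, a in enumerate(xd):
        for j, b in enumerate(yd):
            conv[i + j] += a * b
    # Carry propagation turns the convolved values back into digits.
    result, carry = 0, 0
    for k, v in enumerate(conv):
        total = v + carry
        result += (total % 10) * 10**k
        carry = total // 10
    return result + carry * 10**len(conv)

print(multiply_via_convolution(1234, 5678))  # 7006652
```

Swapping the naive double loop for the FFT-based convolution is the germ of the fast multiplication algorithms alluded to above.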
Solving the heat equation | DE
We last left off studying the heat equation in the one-dimensional case of a rod. The question is how the temperature distribution along such a rod will tend to change over time, and this gave us a nice first example for a partial differential equation. It told us that the rate at which the temperature at a given point changes over time depends on the second derivative of that temperature at that point with respect to space. Here we're going to look at how to solve that equation. And actually, it's a little misleading to refer to all of this as solving an equation. The PDE itself only describes one out of three constraints that our temperature function must satisfy if it's going to accurately describe heat flow. It must also satisfy certain boundary conditions, which is something we'll talk about momentarily, and a certain initial condition. That is, you don't get to choose how it looks at time t equals zero; that's part of the problem statement. These added constraints are really where all of the challenge actually lies. There is a vast ocean of functions solving the PDE, in the sense that when you take their partial derivatives, the two sides are going to be equal, and a sizable subset of that ocean satisfies the right boundary conditions. When Joseph Fourier solved this problem in 1822, his key contribution was to gain control of this ocean, turning all of the right knobs and dials so as to be able to select from it the particular solution fitting a given initial condition. We can think of his solution as being broken down into three fundamental observations. Number one, certain sine waves offer a really simple solution to this equation. Number two, if you know multiple solutions, the sum of these functions is also a solution. And number three, most surprisingly, any function can be expressed as a sum of sine waves.
Well, a pedantic mathematician might point out that there are some pathological exceptions, some weird functions where this isn't true, but basically any distribution that you would come across in practice, including discontinuous ones, can be written as a sum of sine waves, potentially infinitely many. And if you've ever heard of Fourier series, you've at least heard of this last idea. And if so, maybe you've wondered, why on earth would anyone care about breaking down a function as a sum of sine waves? Well, in many applications, sine waves are nicer to deal with than anything else, and differential equations offer us a really nice context where you can see how that plays out. For our heat equation, when you write a function as a sum of these waves, the relatively clean second derivatives make it easy to solve the heat equation for each one of them. And as you'll see, a sum of solutions to this equation gives us another solution, and so, in turn, that will give us a recipe for solving the heat equation for any complicated distribution as an initial state. Here, let's dig into that first step. Why exactly would sine waves play nicely with the heat equation? To avoid messy constants, let's start simple and say that the temperature function at time t equals zero is simply sine of x, where x describes the point on the rod. Yes, the idea of a rod's temperature just happening to look like sine of x, varying around whatever temperature our conventions arbitrarily label as zero, is clearly absurd. But in math, you should always be happy to play with examples that are idealized, potentially well beyond the point of being realistic, because they can offer a good first step in the direction of something more general, and hence more realistic. The right-hand side of this heat equation asks about the second derivative of our function, how much our temperature distribution curves as you move along space.
The derivative of sine of x is cosine of x, whose derivative in turn is negative sine of x. The amount the wave curves is, in a sense, equal and opposite to its height at each point. So at least at the time t equals zero, this has the peculiar effect that each point changes its temperature at a rate proportional to the temperature of the point itself, with the same proportionality constant across all points. So after some tiny time step, everything scales down by the same factor, and after that, it's still the same sine curve shape, just scaled down a bit, so the same logic applies, and the next time step would scale it down uniformly again. And this applies just as well in the limit, as the size of these time steps approaches zero. So unlike other temperature distributions, sine waves are peculiar in that they'll get scaled down uniformly, looking like some constant times sine of x for all times t. Now, when you see that the rate at which some value changes is proportional to that value itself, your mind should burn with the thought of an exponential. And if it's not, or if you're a little rusty on the idea of taking derivatives of exponentials, or what makes the number e special, I'd recommend you take a look at this video. The upshot is that the derivative of e to some constant times t is equal to that constant times itself. If the rate at which your investment grows, for example, is always, say, 0.05 times the total value, then its value over time is going to look like e to the 0.05 times t, times whatever the initial investment was. If the rate at which the count of carbon-14 atoms in an old bone changes is always equal to some negative constant times that count itself, then over time, that number will look approximately like e to that negative constant times t, times whatever the initial count was. So when you look at our heat equation, and you know that for a sine wave, the right-hand side is going to be negative alpha times the temperature function itself.
Hopefully it wouldn't be too surprising to propose that the solution is to scale down by a factor of e to the negative alpha t. Here, go ahead and check the partial derivatives. The proposed function of x and t is sine of x times e to the negative alpha t. Taking the second partial derivative with respect to x, that e to the negative alpha t term looks like a constant; it doesn't have any x in it, so it just comes along for the ride, as if it were any other constant, like 2. And the first derivative with respect to x is cosine of x times e to the negative alpha t. Likewise, the second partial derivative with respect to x becomes negative sine of x times e to the negative alpha t. And on the flip side, if you look at the partial derivative with respect to t, that sine of x term now looks like a constant, since it doesn't have a t in it. So we get negative alpha times e to the negative alpha t times sine of x. So indeed, this function does make the partial differential equation true. And oh, if it were only that simple, this narrative flow could be so nice. We would sail directly into the delicious Fourier series conclusion. Sadly, nature is not so nice, knocking us off onto an annoying but highly necessary detour. Here's the thing: even if nature were to somehow produce a temperature distribution on this rod which looks like this perfect sine wave, the exponential decay is not actually how it would evolve. Assuming that no heat flows in or out of the rod, here's what that evolution would actually look like. The points on the left are heated up a little at first, and those on the right are cooled down by their neighbors to the interior. In fact, let me give you an even simpler solution to the PDE which fails to describe actual heat flow: a straight line. That is, the temperature function will be some non-zero constant times x, and it never changes over time. The second partial derivative with respect to x is indeed zero.
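You can also check this claim numerically rather than symbolically. The sketch below approximates both partial derivatives of sin(x) times e to the negative alpha t with finite differences at an arbitrary point, and confirms the two sides of the heat equation agree; alpha, the sample point, and the step size are all arbitrary choices.

```python
import numpy as np

alpha = 0.5  # arbitrary positive constant

def T(x, t):
    # The proposed solution: sine in space times exponential decay in time.
    return np.sin(x) * np.exp(-alpha * t)

x0, t0, h = 1.3, 0.7, 1e-5
# Central finite differences for dT/dt and d^2T/dx^2.
dT_dt = (T(x0, t0 + h) - T(x0, t0 - h)) / (2 * h)
d2T_dx2 = (T(x0 + h, t0) - 2 * T(x0, t0) + T(x0 - h, t0)) / h**2

# The heat equation says dT/dt = alpha * d^2T/dx^2.
print(abs(dT_dt - alpha * d2T_dx2))  # tiny, up to discretization error
```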
I mean, there is no curvature. And its partial derivative with respect to time is also zero, since it never changes over time. And yet, if I throw this into the simulator, it does actually change over time, slowly approaching a uniform temperature at the mean value. What's going on here is that the simulation I'm using treats the two boundary points of the rod differently from how it treats all the others, which is a more accurate reflection of what would actually happen in nature. If you'll recall from the last video, the intuition for where that second derivative with respect to x actually came from was rooted in having each point tend towards the average value of its two neighbors on either side. But at the boundary, there is no neighbor to one side. If we went back to thinking of the discrete version, modeling only finitely many points on this rod, you could have each boundary point simply tend towards its one neighbor, at a rate proportional to their difference. As we do this for higher and higher resolutions, notice how pretty much immediately after the clock starts, our distribution looks flat at either of those two boundary points. In fact, in the limiting case, as these finer and finer discretized setups approach a continuous curve, the slope of our curve at the boundary will be zero for all times after the start. One way this is often described is that the slope at any given point is proportional to the rate of heat flow at that point. So if you want to model the restriction that no heat flows into or out of the rod, the slope at either end will be zero. That's somewhat hand-wavy and incomplete, I know, so if you want the fuller details, I've left links and resources in the description.
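The discrete simulation described here, with interior points tending towards the average of their two neighbors and boundary points tending towards their single neighbor, can be sketched in a few lines. All the constants are arbitrary choices; starting from the straight-line distribution, the rod drifts towards a uniform temperature at the mean value, just as described.

```python
import numpy as np

dt = 0.4                      # time step; kept below 0.5 for stability
x = np.linspace(0, 1, 51)
temp = x.copy()               # the straight-line initial distribution

for _ in range(20_000):
    change = np.zeros_like(temp)
    # Interior points tend towards the average of their two neighbors.
    change[1:-1] = temp[2:] - 2 * temp[1:-1] + temp[:-2]
    # Boundary points tend towards their single neighbor.
    change[0] = temp[1] - temp[0]
    change[-1] = temp[-2] - temp[-1]
    temp += dt * change

print(temp.min(), temp.max())  # both close to the mean value, 0.5
```

Note that the total heat is conserved: the boundary rule adds exactly the flux that the interior rule loses at the ends, which is the discrete version of "no heat flows in or out."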
Taking the example of a straight line, whose slope at the boundary points is decidedly not zero, as soon as the clock starts, those boundary values will shift infinitesimally, such that the slope there suddenly becomes zero and remains that way through the remainder of the evolution. In other words, finding a function satisfying the heat equation itself is not enough. It must also satisfy the property that it's flat at each of those end points for all times greater than zero. Phrased more precisely, the partial derivative with respect to x of our temperature function at (0, t) and at (L, t) must be zero for all times t greater than zero, where L is the length of the rod. This is an example of a boundary condition, and pretty much any time that you have to solve a partial differential equation in practice, there will also be some boundary condition hanging along for the ride, which demands just as much attention as the PDE itself. All of this may make it feel like we've gotten nowhere, but the function which is a sine wave in space and an exponential decay in time actually gets us quite close. We just need to tweak it a little bit so that it's flat at both end points. First off, notice that we could just as well use a cosine function instead of a sine. I mean, it's the same wave, it's just shifted in phase by a quarter of the period, which would make it flat at x equals zero, as we want. The second derivative of cosine of x is also negative one times itself. So for all the same reasons as before, the product cosine of x times e to the negative alpha t still satisfies the PDE. To make sure that it also satisfies the boundary condition on that right side, we're going to adjust the frequency of the wave. However, that will affect the second derivative, since higher frequency waves curve more sharply and lower frequency ones curve more gently. Changing the frequency means introducing some constant, say omega, multiplied by the input of this function. 
A higher value of omega means the wave oscillates more quickly, since as you increase x, the input to the cosine increases more rapidly. Taking the derivative with respect to x, we still get negative sine, but the chain rule tells us to multiply by that omega on the outside, and similarly the second derivative will still be negative cosine, but now with omega squared. This means that the right hand side of our equation has now picked up this omega squared term. So to balance things out on the left hand side, the exponential decay part should have an additional omega squared term up top. Unpacking what that actually means should feel intuitive. For a temperature function filled with sharper curves, it decays more quickly towards an equilibrium, and evidently it does this quadratically. For instance, doubling the frequency results in an exponential decay four times as fast. If the length of the rod is L, then the lowest frequency, where that rightmost point of the distribution will be flat, is when omega is equal to pi divided by L. You see, that way, as x increases up to the value L, the input of our cosine expression goes up to pi, which is half the period of a cosine wave. Finding all the other frequencies which satisfy this boundary condition is sort of like finding harmonics. You essentially go through all the whole number multiples of this base frequency, pi over L. In fact, even multiplying it by zero works, since that gives us a constant function, which is indeed a valid solution, boundary condition and all. And with that, we're off the bumpy boundary condition detour and back onto the freeway. Moving forward, we're equipped with an infinite family of functions, satisfying both the PDE and the pesky boundary condition. 
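Putting these pieces together, here is a numeric check (again just my own illustration, with alpha and L chosen arbitrarily) that cosine of omega x times e to the negative alpha omega squared t satisfies both the PDE and the zero-slope boundary condition, whenever omega is a whole-number multiple of pi over L:

```python
import numpy as np

# Arbitrary illustrative values for alpha and the rod length L.
alpha, L = 0.25, 2.0
n = 3                      # any whole-number harmonic works
omega = n * np.pi / L      # a frequency satisfying the boundary condition

# Cosine in space, exponential decay in time, with omega^2 in the decay rate.
u = lambda x, t: np.cos(omega * x) * np.exp(-alpha * omega**2 * t)

x = np.linspace(0.0, L, 101)
t, dt, dx = 0.7, 1e-5, 1e-5

# PDE check: du/dt = alpha * d^2u/dx^2, up to numerical error
u_t  = (u(x, t + dt) - u(x, t - dt)) / (2 * dt)
u_xx = (u(x + dx, t) - 2 * u(x, t) + u(x - dx, t)) / dx**2
print(np.max(np.abs(u_t - alpha * u_xx)))  # tiny

# Boundary check: the slope is zero at both ends for t > 0
slope_0 = (u(0 + dx, t) - u(0 - dx, t)) / (2 * dx)
slope_L = (u(L + dx, t) - u(L - dx, t)) / (2 * dx)
print(slope_0, slope_L)  # both essentially zero
```

Changing n to any other whole number leaves both checks passing, which is the "infinite family" of solutions in action; a frequency that is not a multiple of pi over L fails the boundary check at x equals L.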
Things are definitely looking more intricate now, but it all stems from the one basic observation that a function which looks like a sine curve in space and an exponential decay in time fits this equation, relating second derivatives in space with first derivatives in time. And of course, your formulas should start to look more intricate; you're solving a genuinely hard problem. This actually makes for a pretty good stopping point, so let's call it an end here, and in the next video, we'll look at how to use this infinite family to construct a more general solution. To any of you worried about dwelling too much on a single example in a series that's meant to give you a general overview of differential equations, it's worth emphasizing that many of the considerations which pop up here are frequent themes throughout the field. First off, the fact that we modeled the boundary with its own special rule, while the main differential equation only characterized the interior, is a very regular theme, and a pattern well worth getting used to, especially in the context of PDEs. Also, take note of how what we're doing is breaking down a general situation into simpler idealized cases. This strategy comes up all the time, and it's actually quite common for these simpler cases to look like some mixture of sine curves and exponentials. That's not at all unique to the heat equation, and as time goes on, we're going to get a deeper feel for why that's true.
Higher order derivatives | Chapter 10, Essence of calculus
In the next chapter about Taylor series, I make frequent reference to higher order derivatives. And if you're already comfortable with second derivatives, third derivatives, and so on, great, feel free to just skip ahead to the main event now. You won't hurt my feelings. But somehow, I've managed not to bring up higher order derivatives at all so far in this series. So for the sake of completeness, I thought I'd give you this little footnote, just to go over them very quickly. I'll focus mainly on the second derivative, showing what it looks like in the context of graphs and motion, and leave you to think about the analogies for higher orders. Given some function, f of x, the derivative can be interpreted as the slope of this graph above some point, right? Steep slope means a high value for the derivative, a downward slope means a negative derivative. So the second derivative, whose notation I'll explain in just a moment, is the derivative of the derivative, meaning it tells you how that slope is changing. The way to see that at a glance is to think about how the graph of f of x curves. At points where it curves upwards, like this, the slope is increasing, and that means the second derivative is positive. At points where it's curving downwards, the slope is decreasing, so the second derivative is negative. For example, a graph like this one has a very positive second derivative at the point four, since the slope is rapidly increasing around that point. Whereas a graph like this one still has a positive second derivative at the same point, but it's smaller. I mean, the slope only increases slowly. At points where there's not really any curvature, the second derivative is just zero. 
As far as notation goes, you could try writing it like this, indicating some small change to the derivative function, divided by some small change to x, where, as always, the use of this letter d suggests that what you really want to consider is what this ratio approaches as dx, both dx's in this case, approach zero. That's pretty awkward and clunky, so the standard is to abbreviate this as d squared f divided by dx squared. And even though it's not terribly important for getting an intuition for the second derivative, I think it might be worth showing you how you can read this notation. To start off, think of some input to your function, and then take two small steps to the right, each one with the size of dx. I'm choosing rather big steps here so that we'll be able to see what's going on, but in principle, keep in the back of your mind that dx should be rather tiny. The first step causes some change to the function, which I'll call df1, and the second step causes some similar but possibly slightly different change, which I'll call df2. The difference between these changes, the change in how the function changes, is what we'll call ddf. You should think of this as really small, typically proportional to the size of dx squared. So if, for example, you substituted in 0.01 for dx, you would expect this ddf to be roughly proportional to 0.0001. And the second derivative is the size of this change to the change, divided by the size of dx squared. Or more precisely, it's whatever that ratio approaches as dx approaches zero. Even though it's not like this letter d is a variable being multiplied by f, for the sake of more compact notation, you'd write it as d squared f divided by dx squared, and you don't typically bother with any parentheses on the bottom. Maybe the most visceral understanding of the second derivative is that it represents acceleration. Given some movement along a line, suppose you have some function that records the distance traveled versus time. 
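That two-step reading of the notation can be checked numerically. The sketch below (my own illustration, using f(x) = x cubed as an arbitrary example) takes two steps of size dx, measures the change in the change, and confirms that ddf divided by dx squared approaches the second derivative:

```python
# Reading d^2f/dx^2 literally: two steps of size dx, then the change
# in how f changes, divided by dx squared.
f = lambda x: x**3          # second derivative of x^3 is 6x

x0 = 2.0                    # arbitrary point; f''(2) = 12
for dx in (0.1, 0.01, 0.001):
    df1 = f(x0 + dx) - f(x0)            # change from the first step
    df2 = f(x0 + 2 * dx) - f(x0 + dx)   # change from the second step
    ddf = df2 - df1                     # the change in the change
    print(dx, ddf / dx**2)              # approaches f''(x0) = 12
```

Notice how ddf itself shrinks like dx squared, which is why dividing by dx squared gives a ratio that settles down to a meaningful limit.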
Maybe its graph looks something like this, steadily increasing over time. Then its derivative tells you velocity at each point in time, right? For example, the graph might look like this bump, increasing up to some maximum, and then decreasing back to 0. So the second derivative tells you the rate of change for the velocity, which is the acceleration at each point in time. In this example, the second derivative is positive for the first half of the journey, which indicates speeding up. That's the sensation of being pushed back into your car seat, or rather having the car seat push you forward. A negative second derivative indicates slowing down, negative acceleration. The third derivative, and this is not a joke, is called jerk. So if the jerk is not zero, it means that the strength of the acceleration itself is changing. One of the most useful things about higher-order derivatives is how they help us in approximating functions, which is exactly the topic of the next chapter on Taylor series. So I'll see you there.
Tattoos on Math
Hey folks, just a short kind of out of the ordinary video for you today. A friend of mine, Cam, recently got a math tattoo. It's not something I'd recommend, but he told his team at work that if they reached a certain stretch goal, it's something that he'd do. And well, the incentive worked. Cam's initials are CSC, which happens to be the shorthand for the cosecant function in trigonometry. So, what he decided to do is make his tattoo a certain geometric representation of what that function means. It's kind of like a wordless signature written in pure math. It got me thinking, though, about why on earth we teach students about the trigonometric functions cosecant, secant, and cotangent. And it occurred to me that there's something kind of poetic about this particular tattoo. Just as tattoos are artificially painted on, but become permanent as if they were a core part of the recipient's flesh, the fact that the cosecant is a named function is kind of an artificial construct on math. Trigonometry could just as well have existed intact without the cosecant ever being named. But because it was, it has this strange and artificial permanence in our conventions and to some extent in our education system. In other words, the cosecant is not just a tattoo on Cam's chest. It's a tattoo on math itself, something which seemed reasonable and even worthy of immortality at its inception, but which doesn't necessarily hold up as time goes on. Here, let me actually show you all a picture of the tattoo that he chose, because not a lot of people know the geometric representation of the cosecant. Whenever you have an angle, typically represented with the Greek letter theta, it's common in trigonometry to relate it to a corresponding point on the unit circle, the circle with radius one centered at the origin in the xy plane. 
Trigonometry students learn that the distance between this point here on the circle and the x-axis is the sine of the angle, and the distance between that point and the y-axis is the cosine of the angle. These lengths give a really wonderful understanding for what cosine and sine are all about. People might learn that the tangent of an angle is sine divided by cosine, and that the cotangent is the other way around, cosine divided by sine. But relatively few learn that there's also a nice geometric interpretation for each of those quantities. If you draw a line tangent to the circle at this point, the distance from that point to the x-axis along that tangent is, well, the tangent of the angle. And the distance along that line to the point where it hits the y-axis, well, that's the cotangent of the angle. Again, this gives a really intuitive feel for what those quantities mean. You kind of imagine tweaking that theta and seeing how cotangent gets smaller as tangent gets larger, and it's a good gut check for any students working with them. Likewise, secant, which is defined as one divided by the cosine, and cosecant, which is defined as one divided by the sine of theta, each have their own places on this diagram. If you look at that point where this tangent line crosses the x-axis, the distance from that point to the origin is the secant of the angle, that is, one divided by the cosine. Likewise, the distance between where this tangent line crosses the y-axis and the origin is the cosecant of the angle, that is, one divided by the sine. If you're wondering why on earth that's true, notice that we have two similar right triangles here. One small one inside the circle, and this larger triangle, whose hypotenuse is resting on the y-axis. I'll leave it to you to check that that interior angle up at the tip there is theta, the angle that we originally started with over inside the circle. 
Now, for each one of those triangles, I want you to think about the ratio of the length of the side opposite theta to the length of the hypotenuse. For the small triangle, the length of the opposite side is sine of theta, and the hypotenuse is that radius, the one that we defined to have length 1, so the ratio is just sine of theta divided by 1. Now when we look at the larger triangle, the side opposite theta is that radial line of length 1, and the hypotenuse is now this length on the y-axis, the one that I'm claiming is the cosecant. If you take the reciprocal of each side here, you see that this matches up with the fact that the cosecant of theta is 1 divided by sine. Kind of cool, right? It's also kind of nice that sine, tangent, and secant all correspond to lengths of lines that somehow go to the x-axis, and then the corresponding cosine, cotangent, and cosecant are all then lengths of lines going to the corresponding spots on the y-axis. And on a diagram like this, it might be pleasing that all six of these are separately named functions. But in any practical use of trigonometry, you can get by just using sine, cosine, and tangent. In fact, if you really wanted, you could define all six of these in terms of sine alone. But the sort of things that cosine and tangent correspond to come up frequently enough that it's more convenient to give them their own names. But cosecant, secant, and cotangent never really come up in problem solving in a way that's not just as convenient to write in terms of sine, cosine, and tangent. At that point, it's really just adding more words for students to learn with not that much added utility. And if anything, if you only introduce secant as one over cosine, and cosecant as one over sine, the mismatch of this co- prefix is probably just an added point of confusion in a class that's prone enough to confusion for many of its students. 
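For anyone who wants to check these geometric claims without drawing the diagram, here is a small numeric confirmation (my own sketch, for an arbitrary angle): the similar-triangle ratio matches one over cosecant, and the tangent segment's length really is the tangent of the angle.

```python
import math

theta = 0.7  # any angle between 0 and pi/2

# The two quantities defined in the video
sec = 1 / math.cos(theta)
csc = 1 / math.sin(theta)

# Similar triangles: opposite over hypotenuse is the same for both,
# sin(theta)/1 for the small one and 1/csc(theta) for the large one.
print(math.isclose(math.sin(theta) / 1, 1 / csc))   # True

# The tangent segment runs from the circle point to the x-axis; the
# x-axis crossing sits at distance sec(theta) from the origin, so by
# Pythagoras the segment length is sqrt(sec^2 - 1) = tan(theta).
tan_len = math.sqrt(sec**2 - 1)
print(math.isclose(tan_len, math.tan(theta)))       # True
```

Tweaking theta and re-running gives the same gut check the video describes: as tangent grows, cotangent (1 over tan_len here) shrinks.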
The reason that all six of these functions have separate names, by the way, is that before computers and calculators, if you were doing trigonometry, maybe because you're a sailor or an astronomer or some kind of engineer, you'd find the values for these functions using large charts that just recorded known input-output pairs. And when you can't easily plug in something like one divided by the sine of 30 degrees into a calculator, it might actually make sense to have a dedicated column for this value, with a dedicated name. And if you have a diagram like this one in mind when you're taking measurements, with sine, tangent, and secant having nicely mirrored meanings to cosine, cotangent, and cosecant, calling this cosecant instead of one divided by sine might actually make some sense, and it might actually make it easier to remember what it means geometrically. But times have changed, and most use cases for trig just don't involve charts of values and diagrams like this. Hence, the cosecant and its brothers are tattoos on math. Ideas whose permanence in our conventions is our own doing, not the result of nature itself. And in general, I actually think this is a good lesson for any student learning a new piece of math, at whatever level. You just gotta take a moment and ask yourself whether what you're learning is core to the flesh of math itself, and to nature itself, or if what you're looking at is actually just inked onto the subject, and could just as easily have been inked on in some completely other way.
How colliding blocks act like a beam of light...to compute pi
You know that feeling you get when you have two mirrors facing each other, and it gives the illusion of there being an infinite tunnel of rooms. Or if they're at an angle with each other, it makes you feel like you're a part of a strange, kaleidoscopic world with many copies of yourself all separated by angled pieces of glass. What many people may not realize is that the idea underlying these illusions can be surprisingly helpful for solving serious problems in math. We've already seen two videos describing the block collision puzzle, with its wonderfully surprising answer. Big block comes in from the right, lots of clacks, the total number of clacks looks like pi, and we want to know why. Here, we'll see one more perspective explaining what's going on, where if the connection to pi wasn't surprising enough, we add one more unexpected connection to optics. But we're doing more than just answering the same question twice. This alternate solution gives a much richer understanding of the whole setup, and it makes it easier to answer other questions. And fun side note, it happens to be core to how I coded the accurate simulations of these blocks, without requiring absurdly small time steps and huge computation time. The solution from the last video involved a coordinate plane, where each point encodes a pair of velocities. Here we'll do something similar, but the points of our plane are going to encode the pair of positions of both blocks. Again, the idea is that by representing the state of a changing system with individual points in some space, problems in dynamics turn into problems in geometry, which hopefully are more solvable. Specifically, let the x-coordinate of a 2D plane represent the distance from the wall to the left edge of the first block, what I'll call D1, and let the y-coordinate represent the distance from the wall to the right edge of the second block, what we'll call D2. 
That way the line y equals x shows us where the two blocks clack into each other, since this happens whenever D1 is equal to D2. Here's what it looks like for our scenario to play out. As the two distances of our blocks change, the two dimensional points of our configuration space move around, with positions that always fully encode the information of those two distances. You may notice that at the bottom there, it's bounded by a line, where D2 is the same as the small block's width, which, if you think about it, is what it means for the small block to hit the wall. You may be able to guess where we're going with this. The way this point bounces between the two bounding lines is a bit like a beam of light bouncing between two mirrors. The analogy doesn't quite work, though. In the lingo of optics, the angle of incidence doesn't equal the angle of reflection. Just think of the first collision. A beam of light coming in from the right would bounce off of a 45-degree angled mirror, this x equals y line, in such a way that it ends up going straight down, which would mean that only the second block is moving. This does happen in the simplest case, where the second block has the same mass as the first, and picks up all of its momentum like a croquet ball. But in the general case, for other mass ratios, that first block keeps much of its momentum, so the trajectory of our point in this configuration space won't be pointed straight down. It'll be down and to the left a bit. And even if it's not immediately clear why this analogy with light would actually be helpful, and trust me, it will be helpful in many ways, run with me here and see if we can fix this for the general case. Seeking analogies in math is very often a good idea. As with the last video, it's helpful to rescale the coordinates. 
In fact, motivated by precisely what we did then, you might think to rescale the coordinates so that x is not equal to d1, but is equal to the square root of the first mass, m1, times d1. This has the effect of stretching our space horizontally, so changes in our big block's position now result in larger changes to the x coordinate itself. And likewise, let's write the y coordinate as square root of m2 times d2, even though in this particular case the second mass is 1, so it doesn't make a difference, but let's keep things symmetric. Maybe this strikes you as making things uglier, and kind of a random thing to do, but as with last time, when we include square roots of masses like this, everything plays more nicely with the laws of conservation of energy and momentum. Specifically, the conservation of energy will translate into the fact that our little point in the space is always moving at the same speed, which in our analogy you might think of as meaning there's a constant speed of light. And the conservation of momentum will translate to the fact that as our point bounces off of the mirrors of our setup, so to speak, the angle of incidence equals the angle of reflection. Doesn't that seem bizarre in kind of a delightful way, that the laws of kinematics should translate to laws of optics like this. To see why it's true, let's roll up our sleeves and work out the actual math. Focus on the velocity vector of our point in the diagram; it shows which direction it's moving and how quickly. Now keep in mind, this is not a physical velocity, like the velocities of the moving blocks. Instead, it's a more abstract rate of change in the context of this configuration space, whose two dimensions worth of possible directions encode both velocities of the blocks. The x component of this little vector is the rate of change of x, and likewise its y component is the rate of change of y. But what is that rate of change for the x coordinate? 
Well, x is the square root of m1 times d1, and the mass doesn't change, so it depends only on how d1 changes, and what's the rate at which d1 changes? Well, that's the velocity of the big block. Let's go ahead and call that v1. Likewise, the rate of change for y is going to be the square root of m2 times v2. Now, notice what the magnitude of our little configuration space change vector is. Using the Pythagorean theorem, it's the square root of the sum of each of these component rates of change squared, which is square root of m1 times v1 squared, plus m2 times v2 squared. This inner expression should look awfully familiar. It's exactly twice the kinetic energy of our system, so the speed of our point in the configuration space is some function of the total energy, and that stays constant throughout the whole process. Remember, a core idealizing assumption here is that there's no energy lost to friction or to any of the collisions. All right, so that's pretty cool. With these rescaled coordinates, our little point is always moving with a constant speed, and I know it's not obvious why you would care, but among other things, it's important for the next step, where the conservation of momentum implies that these two bounding lines act like mirrors. First, let's understand this line d1 equals d2 a little bit better. In our new coordinates, it's no longer that nice 45 degree x equals y line. Instead, if we do a little algebraic manipulation here, we can see that that line is x over square root m1 equals y over square root m2. Rearranging a little bit more, we see that's a line with a slope of square root m2 over m1. That's a nice expression to tuck away in the back of your mind. After the blocks collide, meaning our point hits this line, the way to figure out how they move is to use the conservation of momentum, which says that the value m1 times v1 plus m2 times v2 is the same both before and after the collision. 
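Here's a quick numeric check of that constant-speed claim (my own sketch; the standard 1D elastic collision formulas used below are an outside assumption, the video never writes them down): the configuration-space speed equals the square root of twice the kinetic energy, and a collision leaves it unchanged.

```python
import math

# Standard 1D elastic collision formulas (an assumption, not from the video).
m1, m2 = 100.0, 1.0        # illustrative mass ratio
v1, v2 = -1.0, 0.0         # big block slides in, small block at rest

v1p = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
v2p = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)

# Configuration-space speed: sqrt((sqrt(m1)*v1)^2 + (sqrt(m2)*v2)^2)
speed_before = math.hypot(math.sqrt(m1) * v1, math.sqrt(m2) * v2)
speed_after  = math.hypot(math.sqrt(m1) * v1p, math.sqrt(m2) * v2p)

# Conservation of energy: a constant "speed of light" in configuration space
print(math.isclose(speed_before, speed_after))              # True
# ... and it equals sqrt(2 * kinetic energy)
ke = 0.5 * (m1 * v1**2 + m2 * v2**2)
print(math.isclose(speed_before, math.sqrt(2 * ke)))        # True
```

Swapping in any other masses or velocities leaves both checks passing, since elastic collisions conserve kinetic energy by definition.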
Now notice, this looks like a dot product between two column vectors, m1 m2 and v1 v2. Rewriting it slightly for our rescaled coordinates, the same thing could be written as a dot product between a column vector with the square roots of the masses and one with the rates of change for x and y. I know this probably seems like a complicated way to talk about a comparatively simple momentum equation, but there is a good reason for shifting the language to one of dot products in our new coordinates. Notice that second vector is simply the rate of change vector for the point in our diagram that we've been looking at. The key now is that this square root of the masses vector points in the same direction as our collision line, since the rise over run is square root m2 over square root of m1. Now if you're unfamiliar with the dot product, there is another video on this channel describing it, but real quick, let's go over what it means geometrically. The dot product of two vectors equals the length of the first one multiplied by the length of the projection of the second one onto that first, where it's considered negative if they point in opposite directions. You often see this written as the product of the lengths of the two vectors and the cosine of the angle between them. So look back at this conservation of momentum expression telling us that the dot product between this square root of the masses vector and our little change vector has to be the same, both before and after the collision. Since we just saw that this change vector has a constant magnitude, the only way for this dot product to stay the same is if the angle that it makes with the collision line stays the same. In other words, again using the lingo of optics, the angle of incidence and the angle of reflection off this collision line must be equal. Similarly, when the small block bounces off the wall, our little vector gets reflected about the x direction since only its y-coordinate changes. 
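If that geometric reading of the dot product is unfamiliar, here is a one-off numeric check (an illustration with arbitrarily chosen vectors) that a dot b really equals the product of the two lengths times the cosine of the angle between them:

```python
import math

# Two arbitrary 2D vectors
a = (3.0, 4.0)
b = (2.0, 1.0)

dot = a[0] * b[0] + a[1] * b[1]          # component-wise dot product
len_a = math.hypot(*a)
len_b = math.hypot(*b)
angle = math.atan2(b[1], b[0]) - math.atan2(a[1], a[0])

# |a| * |b| * cos(angle) gives the same number
print(math.isclose(dot, len_a * len_b * math.cos(angle)))  # True
```

This is the fact doing the work in the argument: with the change vector's magnitude fixed, a fixed dot product forces a fixed cosine, and hence a fixed angle with the collision line.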
So our configuration point is bouncing off that horizontal line as if it was a mirror. So step back a moment and think about what this means for our original question of counting block collisions and trying to understand why on earth pi would show up. We can translate it to a completely different question. If you shine a beam of light at a pair of mirrors, meeting each other at some angle, let's say theta, how many times would that light bounce off of the mirrors as a function of that angle? Remember, the mass ratio of our blocks completely determines this angle theta in the analogy. Now I can hear some of you complaining. Haven't we just replaced one tricky setup with another? This might make for a cute analogy, but how is it progress? It's true that counting the number of light bounces is hard, but now we have a helpful trick. When the beam of light hits the mirror, instead of thinking of that beam as reflected about the mirror, think of the beam as going straight while the whole world gets flipped through the mirror. It's as if the beam is passing through a piece of glass into an illusory looking glass universe. Think of actual mirrors here. This wire on the left will represent a laser beam coming into the mirror and the one on the right will represent its reflection. The illusion is that the beam goes straight through the mirror as if passing through a window separating us from another room. But notice, crucially, for this illusion to work, the angle of incidence has to equal the angle of reflection. Otherwise, the flipped copy of the reflected beam won't line up with the first part. So all of that work we did, rescaling coordinates and fussing through the momentum equations, was certainly necessary. But now, we get to enjoy the fruits of our labor. Watch how this helps us elegantly solve the question of how many mirror bounces there will be, which is also the question of how many block collisions there will be. 
Every time the beam hits a mirror, don't think of the beam as getting reflected; let it continue straight while the world gets reflected. As this goes on, the illusion to the beam of light is that instead of getting bounced around between two angled mirrors many times, it's passing through a sequence of angled pieces of glass all the same angle apart. Right now, I'm showing you all of the reflected copies of the bouncing trajectory, which I think has a very striking beauty to it. But for a clearer view, let's just focus on the original bouncing beam and the illusory straight one. The question of counting bounces turns into a question of how many pieces of glass this illusory beam crosses. How many reflected copies of the world does it pass into? Well, calling the angle between the mirrors theta, the answer here is however many times you can add theta to itself before you get more than halfway around a circle, which is to say before you add up to more than pi total radians. Written as a formula, the answer to this question is the floor of pi divided by theta. So let's review. We started by drawing a configuration space for our colliding blocks, where the x and the y coordinates represented the two distances from the wall. This kind of looked like light bouncing between two mirrors, but to make the analogy work properly, we needed to rescale the coordinates by the square roots of the masses. This made it so that the slope of one of our lines was square root of m2 divided by square root of m1. So the angle between those bounding lines will be the inverse tangent of that slope. To figure out how many bounces there are between two mirrors like this, think of the illusion of the beam going straight through a sequence of looking glass universes separated by a semicircular fan of windows. The answer then comes down to how many times the value of this angle fits into 180 degrees, which is pi radians. 
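That counting formula can be turned directly into code. This sketch (my own, and it sidesteps the edge case where pi divided by theta is exactly an integer, as happens with equal masses) reproduces the pi-like clack counts for mass ratios that are powers of 100:

```python
import math

def collision_count(mass_ratio):
    """Number of clacks via the mirror picture: floor(pi / theta),
    where theta = arctan(sqrt(m2 / m1)) is the angle between the mirrors."""
    theta = math.atan(math.sqrt(1 / mass_ratio))
    return math.floor(math.pi / theta)

# Mass ratios that are powers of 100 spell out the digits of pi.
print([collision_count(100**n) for n in range(1, 4)])  # [31, 314, 3141]
```

Notice there is no time-stepping simulation here at all, which hints at why this perspective makes the accurate block simulations cheap to compute.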
From here, to understand why exactly the digits of pi show up when the mass ratio is a power of 100, it's exactly what we did in the last video, so I won't repeat myself here. And finally, as we reflect now on how absurd the initial appearance of pi seemed and on the two solutions we've now seen, and on how unexpectedly helpful it can be to represent the state of your system with points in some space. I leave you with this quote from the computer scientist Alan Kay. A change in perspective is worth 80 IQ points.
Essence of linear algebra preview
Hey everyone, so I'm pretty excited about the next sequence of videos that I'm doing. They'll be about linear algebra, which as a lot of you know, is one of those subjects that's required knowledge for just about any technical discipline. But it's also, I've noticed, generally poorly understood by students taking it for the first time. A student might go through a class and learn how to compute lots of things, like matrix multiplication, or the determinant, or cross-products which use the determinant, or eigenvalues, but they might come out without really understanding why matrix multiplication is defined the way that it is, why the cross-product has anything to do with the determinant, or what an eigenvalue really represents. Oftentimes, students end up well-practiced in the numerical operations of matrices, but are only vaguely aware of the geometric intuitions underlying it all. But there's a fundamental difference between understanding linear algebra on a numeric level and understanding it on a geometric level. Each has its place, but roughly speaking, the geometric understanding is what lets you judge what tools to use to solve specific problems, feel why they work, and know how to interpret the results. And the numeric understanding is what lets you actually carry through the application of those tools. Now, if you learn linear algebra without getting a solid foundation in that geometric understanding, the problems can go unnoticed for a while until you've gone deeper into whatever field you happen to pursue, whether that's computer science, engineering, statistics, economics, or even math itself. Once you're in a class, or a job for that matter, that assumes fluency with linear algebra, the way that your professors or your coworkers apply that field could seem like utter magic. 
They'll very quickly know what the right tool to use is and what the answer roughly looks like, in a way that would seem like computational wizardry if you assume that they're actually crunching all the numbers in their head. Here, as an analogy, imagine that when you first learned about the sine function in trigonometry, you were shown this infinite polynomial. This, by the way, is how your calculator evaluates the sine function. For homework, you might be asked to practice computing approximations of the sine function by plugging in various numbers to the formula and cutting it off at a reasonable point. And in fairness, let's say you had a vague idea that this was supposed to be related to triangles, but exactly how had never really been clear and was just not the focus of the course. Later on, if you took a physics course, where sines and cosines are thrown around left and right, and people are able to tell pretty immediately how to apply them and roughly what the sine of a certain value will be, it would be pretty intimidating, wouldn't it? It would make it seem like the only people who are cut out for physics are those with computers for brains, and you would feel unduly slow or dumb for taking so long on each problem. It's not that different with linear algebra, and luckily, just as with trigonometry, there are a handful of intuitions, visual intuitions, underlying much of the subject. And unlike the trig example, the connection between the computation and these visual intuitions is typically pretty straightforward. And when you digest these and really understand the relationship between the geometry and the numbers, the details of the subject, as well as how it's used in practice, start to feel a lot more reasonable. In fairness, most professors do make an effort to convey that geometric understanding; the sine example is a little extreme. 
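The infinite polynomial referred to here is the Taylor series for sine, sin(x) = x − x³/3! + x⁵/5! − ⋯. A minimal sketch of the homework-style exercise of truncating it (the function name and term count are my own choices, not from the video):

```python
import math

def sine_approx(x, terms=8):
    """Approximate sin(x) by cutting off its Taylor series
    x - x^3/3! + x^5/5! - ... after `terms` terms."""
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

# For moderate x, a handful of terms already matches math.sin closely.
print(round(sine_approx(1.0), 6))  # ~0.841471
```

Even a small number of terms agrees with `math.sin` to many decimal places near zero, which is essentially how calculators evaluate it, as the transcript notes.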
But I do think that a lot of courses have students spending a disproportionate amount of time on the numerical side of things, especially given that in this day and age, we almost always get computers to handle that half, while in practice, humans worry about the conceptual half. So this brings me to the upcoming videos. The goal is to create a short, binge watchable series animating those intuitions from the basics of vectors up through the core topics that make up the essence of linear algebra. I'll put out one video per day for the next five days, then after that, put out a new chapter every one to two weeks. I think it should go without saying that you cannot learn a full subject with a short series of videos, and that's just not the goal here. But what you can do, especially with this subject, is lay down all the right intuitions, so the learning that you do moving forward is as productive and fruitful as it can be. I also hope this can be a resource for educators who are teaching courses that assume fluency with linear algebra, giving them a place to direct students that need a quick brush up. I'll do what I can to keep things well paced throughout, but it's hard to simultaneously account for different people's different backgrounds and levels of comfort, so I do encourage you to readily pause and ponder if you feel that it's necessary. Actually, I'd give that same advice for watching any math video, even if it doesn't feel too quick, since the thinking that you do on your own time is where all the learning really happens, don't you think? So with that as an introduction, I'll see you next video.
Q&A with Grant Sanderson (3blue1brown)
What is a Gröbner basis? If that is your intent for what this Q and A episode is going to be, as far as technicality and deep explanation is concerned, you're going to be grossly disappointed. Same goes for whoever asked about what the Fourier transform has to do with quantum computing. I can say at a high level it's because the Fourier transform gives you a unitary operation, and quantum computing is very fast when it comes to anything that can be expressed as a unitary matrix. But those words won't make sense if you don't already know what they mean, and this is not at all meant to be a video that's going to go into some deep math explanation. But when I do cover quantum computing, which I will at some point. What would you do professionally if it weren't for YouTube? Slash, what are you doing professionally? So, a lot of you might know I used to work for Khan Academy, and I think if I wasn't doing this, I would definitely seek out some other way of doing math outreach online. If the intent of the question is maybe more, what would I do that has nothing to do with math outreach? I spent a lot of time doing random software engineering things through college. Like, my summer internships were often spent at a tech company rather than doing something explicitly math related. But if I really turn on that parallel universe machine, I think going into data science was a very real possibility. At the end of one of the internships I was doing, they asked if I wanted to, instead of going back to college, just stick around, maybe have a full-time job, and see what unfolded there. And I seriously considered it. You know, it was pretty compelling. Ultimately, the love for pure math won out, so I did go back to school, as you're supposed to. But I kind of do wonder what would have been if I had instead gone down that professional data science path. How arbitrary do you think our mathematical perspective is, as humans on earth? 
If an alien civilization developed math from scratch, do you think we would see clear similarities in their development of the fields, like number theory, trigonometry, and calculus? So this is an interesting question, because it cuts right to the heart of is math invented or discovered. And I like the phrasing, where you kind of imagine an alien civilization coming and comparing your math to theirs. It's hard to speculate on this, right? Like, I have no idea. If some aliens came, we have no way of knowing whether their math would look completely different from ours. One thing you can be pretty sure of, and this might seem superficial, is that the notation would be entirely different, right? There's a lot of arbitrary choices in how we write things down. Newton's notation for calculus versus Leibniz's notation for calculus. You know, a lot of the really silly things we have, like which side of the variable the function goes on, writing out the letters s-i-n for sine and cosine; I had the whole triangle of power video about, you know, notations for radicals and exponentials. And on the one hand, that might not feel substantive, but I think it's really interesting to contemplate other ways where the notation shapes the way that we think about it, and shapes the axioms and theorems we even choose, more so than we give it credit for. So, a project I'm actively working on right now is about quaternions. And I was a little bit surprised to learn how up in the air the potential notations and conventions for teaching students about vectors were. Like, a lot of the actual notation and terminology we have for vectors, and cross products, and dot products, the way we think of them in 3D, ultimately stems from quaternions. You know, even the fact that we use i, j, and k as letters to represent the x, y, and z directions. 
And if Hamilton had had his way, we would still teach engineering students primarily about quaternions, and then things like the dot product and cross product would be viewed as subsets of what the quaternions do and what quaternion multiplication is. And I think there's a compelling case to be made that we would use that, if we could visualize four dimensions better. But the reason that quaternions never really won out as the notation du jour is because they're confusing, because no one really understood them. There's all sorts of hilarious quotes from Lord Kelvin and the like about how quaternions are just needlessly confuddling when you're trying to phrase some fact about the universe. Like, Maxwell's equations were originally written much more quaternionically than we teach them to students now, and arguably they're much more elegant that way. But it's confusing, because we can't visualize it. So I think if you had some alien civilization that came, but they had a very good spatial conception for four dimensions, they would look at our vector notation and think that it was not capturing the deeper realities of math. Arguably. Who knows? What do you think is the main thing that drives people away from math? It's always hard to answer on these kinds of things, but I really suspect that it's as soon as you wrap something in a certain kind of judgment: there's a notion of being correct or incorrect, or an implicit statement that there's a notion of being good at math. Some people are math people, some people aren't math people. As soon as you get someone identifying that they're not a math person, first, you know, insinuating that that even makes sense, and then insinuating that they fall into that, of course you're not going to like it. Of course your natural mind churnings aren't going to go in the direction of some puzzle, because you'd much rather think about things that you're good at and that make you feel happy. 
All of the latest stuff about growth mindsets, from Carol Dweck and Jo Boaler, is really behind that. You know, the idea that if you're trying to tell a student something about how they're doing with math, rather than framing it around, oh, you must be so smart, right? Framing it around, oh, you must have worked very hard, you must have put a lot of time into that. There's a lot of much less judgmental things that we have out there, like reading. Even though there are some notions of reading comprehension tests for students in school, and you're reading at an eighth grade level, people usually aren't like, oh, I'm not a reading person, right? Like, those words just make sense to some people, but for me, those letters, I don't know how they come together. When it comes to contest math, like the AMC, I think those can be really good for high schoolers as a bank of problems. I think they can be really bad for high schoolers as an insinuation that there's some, like, top tier of math folk, and they can do these problems really quickly. But if you give the same questions to the student, and rather than being forced to go through all of these in 75 minutes, you say, let's spend 30 minutes on just one of them, right? Really delving into it. They're really solid problems that kind of engage the spirit of problem solving, and, you know, removing that judgmental aspect, removing that time aspect, I think can help out a lot. A lot of people ask about certain things that I've made promises for but haven't necessarily delivered on. In a recent video, you know, I did one on divergence and curl, and I mentioned at the end an example of using complex numbers to model fluid flow, and a certain model for flow around a wing, and you might notice I have yet to actually put out a video on that, and I've certainly seen a number of commenters, you know, hammering on me for that fact. 
If there's ever a thing that I promise and then I don't make a video on it, it's probably because I spent a good amount of time trying to write a script for it that I just didn't feel was compelling for whatever reason. And I think maybe the granddaddy here is the probability series, for which at the moment I have five videos that I've made that are, you know, released to patrons. I just don't feel great about them, and I kind of want the stuff that I put out to you guys to be something I feel is, if not original, something that wouldn't be out there otherwise from other creators, and there's a lot of good probability material online. I will probably do something to release the material that I have, either just as it is, but on some second channel, with the acknowledgement, hey, this isn't the greatest work I think I've done, or trying to rework them and make them standalones. But as far as, you know, essence-of-blank content, I feel much clearer about how I would want to extend the linear algebra series, rather than spinning my wheels on certain scripts and animations that I ultimately don't think are going to deliver something to you guys that I would feel proud of. Do you have any questions? I'm just reading from some reddit ones here, but we could do something live. How much compromise, if any, do you have to give between what you can animate versus what your script is trying to convey? Usually, if I can't animate a thing, and it's a mathematical thing, not, like, a frivolous cartoonish type thing, I change the tool so that it can animate that thing, right? And then that might take more time. And it's possible that subconsciously that means I resist topics that I know would be more difficult to animate; I don't think that happens. I like to use that to encourage creation of new things, right? Like, for the divergence and curl video, I didn't have good fluid flow stuff, but it was fun to play around with that. 
For quaternions right now, I think there are a lot of 3D-related things that I wanted to sort of upgrade, because the previous way I was doing a lot of 3D animations was clunky and not as extensible as I wanted it to be, so usually that is a good excuse to just improve the graphics tool. We have somewhere a question on here, what sort of music do you listen to, which I mostly wanted to answer to, like, mention my renewed love for the Punch Brothers. I don't know if any of you know about them; they're actually super weird, they're like an avant-garde bluegrass band. It's just five geniuses who get together and put out phenomenal art, so can't complain about that. How do you compare making your videos to making videos for Khan Academy? So, very different processes, right? Like, at Khan Academy, you imagine sitting next to someone and tutoring them and just explaining it; you're writing everything by hand for the most part, and you do it live. On this channel, I obviously script things, and I put a lot of time into creating the visuals for it. Sometimes in a way that makes me feel, you know, at Khan Academy I could sit down and make like three videos in an afternoon, and here it's taking me like three weeks to do one video. Which of these actually carries more of an impact? I think there's a proper balance for both of them, and I think there's a lot of people out there who do the Khan style stuff, to include Khan Academy but also many others. The way I like to think about things is, what wouldn't happen if I wasn't doing it? But there is that little part of me that thinks maybe I should start some sort of second channel of the super cheap stuff, just, like, me and a notebook and a pencil, scrapping through some sort of explanation super quickly. Who makes the awesome music playing in your videos? Vince Rubinetti, link in the description, link in all of the descriptions, actually. He does really good work, so just go, you know, download some of the music, and leave him a little tip if you feel like it's 
something that you enjoy what is your favorite Palomano? Palomano? Palimano? this one I'll figure it out later and insert it on the screen all right folks thanks for watching stick around for whenever the next upload is it's gonna be on Quaternions and I hope you like it this is your cold stuff this is your wine does that probably mean I should be looking at the wide one what do I do a dramatic like camera number two
Lockdown math announcement
As many of you know, with the coronavirus outbreak still very much underway, there's a huge number of students who are left to learn remotely from home, whether that means doing distance classes over video conference, or trying to find resources like Khan Academy and Brilliant to learn online. So one thing that I wanted to do in the coming weeks, which is very different for me on this channel, is to do some live-streamed lectures specifically targeted at high school students. With each lecture, I want to cover something that's a standard high school topic that most high schoolers will be expected to learn, but at the same time to have some kind of intriguing angle on it that's a little bit different from what most people might have seen, just so that if you aren't a high school student and you're someone like me, there's still something interesting about it. For example, the very first lesson is going to be on a simpler version of the quadratic formula, and while I was putting together this lesson, I honestly felt a little bit mad that this isn't the way that I learned things when I was in high school, so if you can get that same feeling, I'm going to call that mission success. One thing I'm particularly excited about is a little piece of technology that two good friends of mine, who I used to work at Khan Academy with, have been working on, which I think should make the dynamic between the audience and the progression of the lecture feel a little bit more tight than it usually does in some kind of live stream situation. I don't want to say anything more; I would just say show up, be prepared to answer some questions, and to ask questions too. My goal is for it to feel as much like a real class as possible. Most of the dynamic is just going to be you and me talking through problems on a piece of paper, which, even though I love to visualize stuff and put out animations, and that's kind of what the whole channel is about. 
To be honest, I think just working through things on paper feels more like what actual math is to me, and what the process of finding new ideas and coming to terms with them yourself looks like. The tentative plan right now is to do every Friday and Tuesday at noon Pacific time, but if anything changes on that, you'll see the schedule on the banner of the channel. So tune in, I hope to see you there, and be prepared to do some math.

3Blue1Brown transcripts

Data

This dataset provides transcriptions of all videos of the amazing 3Blue1Brown.

Last update was on 09.02.2022.

Schema

 #   Column         Non-Null Count  Dtype 
---  ------         --------------  ----- 
 0   video_title    116 non-null    object
 1   transcription  116 non-null    object
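Given the two-column schema above, here is a minimal sketch (not from the dataset card itself) of how the rows might be consumed once loaded as plain Python records; the sample rows below are abbreviated stand-ins, not real data:

```python
# Each record mirrors the schema: two string ("object") columns,
# video_title and transcription, one row per video.
rows = [
    {"video_title": "Essence of linear algebra preview",
     "transcription": "Hey everyone, so I'm pretty excited..."},
    {"video_title": "Lockdown math announcement",
     "transcription": "As many of you know..."},
]

# Example query: titles of videos matching a keyword.
matches = [r["video_title"] for r in rows
           if "lockdown" in r["video_title"].lower()]
print(matches)  # ['Lockdown math announcement']
```

In practice the same filter works unchanged whether the 116 rows come from a pandas DataFrame's `to_dict("records")` or any other loader.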