Making sense of principal component analysis, eigenvectors & eigenvalues

1
Imagine a big family dinner where everybody starts asking you about PCA. First, you explain it to your great-grandmother; then to your grandmother; then to your mother; then to your spouse; finally, to your daughter (a mathematician). Each time the next person is less of a layman. Here is how the conversation might go.

Great-grandmother: I heard you are studying "Pee-See-Ay". I wonder what that is...

You: Ah, it's just a method of summarizing some data. Look, we have some wine bottles standing here on the table. We can describe each wine by its colour, how strong it is, how old it is, and so on. (Visualization originally found here.) We can compose a whole list of different characteristics of each wine in our cellar. But many of them will measure related properties and so will be redundant. If so, we should be able to summarize each wine with fewer characteristics! This is what PCA does.

Grandmother: This is interesting! So this PCA thing checks what characteristics are redundant and discards them?

You: Excellent question, granny! No, PCA is not selecting some characteristics and discarding the others. Instead, it constructs some new characteristics that turn out to summarize our list of wines well. Of course, these new characteristics are constructed using the old ones; for example, a new characteristic might be computed as wine age minus wine acidity level or some other combination (we call them linear combinations). In fact, PCA finds the best possible characteristics, the ones that summarize the list of wines as well as possible (among all conceivable linear combinations). This is why it is so useful.

Mother: Hmmm, this certainly sounds good, but I am not sure I understand. What do you actually mean when you say that these new PCA characteristics "summarize" the list of wines?

You: I guess I can give two different answers to this question. The first answer is that you are looking for some wine properties (characteristics) that strongly differ across wines. Indeed, imagine that you come up with a property that is the same for most of the wines - like the stillness of wine after being poured. This would not be very useful, would it? Wines are very different, but your new property makes them all look the same! This would certainly be a bad summary. Instead, PCA looks for properties that show as much variation across wines as possible. The second answer is that you look for the properties that would allow you to predict, or "reconstruct", the original wine characteristics. Again, imagine that you come up with a property that has no relation to the original characteristics - like the shape of a wine bottle; if you use only this new property, there is no way you could reconstruct the original ones! This, again, would be a bad summary. So PCA looks for properties that allow reconstructing the original characteristics as well as possible. Surprisingly, it turns out that these two aims are equivalent and so PCA can kill two birds with one stone.

Spouse: But darling, these two "goals" of PCA sound so different! Why would they be equivalent?

You: Hmmm. Perhaps I should make a little drawing (takes a napkin and starts scribbling). Let us pick two wine characteristics, perhaps wine darkness and alcohol content -- I don't know if they are correlated, but let's imagine that they are. Here is what a scatter plot of different wines could look like: Each dot in this "wine cloud" shows one particular wine. You see that the two properties ($x$ and $y$ on this figure) are correlated. A new property can be constructed by drawing a line through the centre of this wine cloud and projecting all points onto this line. This new property will be given by a linear combination $w_1 x + w_2 y$, where each line corresponds to some particular values of $w_1$ and $w_2$.

Now, look here very carefully -- here is what these projections look like for different lines (red dots are projections of the blue dots): As I said before, PCA will find the "best" line according to two different criteria of what is the "best". First, the variation of values along this line should be maximal. Pay attention to how the "spread" (we call it "variance") of the red dots changes while the line rotates; can you see when it reaches its maximum? Second, if we reconstruct the original two characteristics (position of a blue dot) from the new one (position of a red dot), the reconstruction error will be given by the length of the connecting red line. Observe how the length of these red lines changes while the line rotates; can you see when the total length reaches its minimum?

If you stare at this animation for some time, you will notice that "the maximum variance" and "the minimum error" are reached at the same time, namely when the line points to the magenta ticks I marked on both sides of the wine cloud. This line corresponds to the new wine property that will be constructed by PCA. By the way, PCA stands for "principal component analysis", and this new property is called the "first principal component". And instead of saying "property" or "characteristic", we usually say "feature" or "variable".

Daughter: Very nice, papa! I think I can see why the two goals yield the same result: it is essentially because of the Pythagorean theorem, isn't it? Anyway, I heard that PCA is somehow related to eigenvectors and eigenvalues; where are they in this picture?

You: Brilliant observation. Mathematically, the spread of the red dots is measured as the average squared distance from the centre of the wine cloud to each red dot; as you know, it is called the variance. On the other hand, the total reconstruction error is measured as the average squared length of the corresponding red lines. But as the angle between the red lines and the black line is always $90^\circ$, the sum of these two quantities is equal to the average squared distance between the centre of the wine cloud and each blue dot; this is precisely the Pythagorean theorem. Of course, this average distance does not depend on the orientation of the black line, so the higher the variance, the lower the error (because their sum is constant). This hand-wavy argument can be made precise (see here).

By the way, you can imagine that the black line is a solid rod, and each red line is a spring. The energy of each spring is proportional to its squared length (this is known in physics as Hooke's law), so the rod will orient itself so as to minimize the sum of these squared distances. I made a simulation of what it will look like in the presence of some viscous friction:

Regarding eigenvectors and eigenvalues. You know what a covariance matrix is; in my example it is a $2\times 2$ matrix that is given by $$\begin{pmatrix}1.07 & 0.63\\0.63 & 0.64\end{pmatrix}.$$ What this means is that the variance of the $x$ variable is $1.07$, the variance of the $y$ variable is $0.64$, and the covariance between them is $0.63$. As it is a square symmetric matrix, it can be diagonalized by choosing a new orthogonal coordinate system, given by its eigenvectors (incidentally, this is called the spectral theorem); the corresponding eigenvalues will then be located on the diagonal. In this new coordinate system, the covariance matrix is diagonal and looks like this: $$\begin{pmatrix}1.52 & 0\\0 & 0.19\end{pmatrix},$$ meaning that the correlation between the two new coordinates is now zero. It becomes clear that the variance of any projection will be given by a weighted average of the eigenvalues (I am only sketching the intuition here). Consequently, the maximum possible variance ($1.52$) will be achieved if we simply take the projection on the first coordinate axis. It follows that the direction of the first principal component is given by the first eigenvector of the covariance matrix. (More details here.)

You can see this on the rotating figure as well: there is a gray line there orthogonal to the black one; together, they form a rotating coordinate frame. Try to notice when the blue dots become uncorrelated in this rotating frame. The answer, again, is that it happens precisely when the black line points at the magenta ticks. Now I can tell you how I found them (the magenta ticks): they mark the direction of the first eigenvector of the covariance matrix, which in this case is equal to $(0.81, 0.58)$. Per popular request, I shared the Matlab code to produce the above animations.
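As a complement to the Matlab animations mentioned above (not reproduced here), the napkin argument can be checked numerically. The following is a minimal numpy sketch, not the author's code: it simulates a 2-D "wine cloud" with roughly the covariance matrix quoted above (sample size and seed are arbitrary), and verifies both that variance plus reconstruction error is constant over all candidate lines and that the maximum-variance direction agrees with the first eigenvector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a 2-D "wine cloud" with roughly the covariance matrix quoted above.
cov = np.array([[1.07, 0.63],
                [0.63, 0.64]])
X = rng.multivariate_normal(mean=[0, 0], cov=cov, size=5000)
X -= X.mean(axis=0)                        # centre the cloud

C = np.cov(X, rowvar=False)                # empirical covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)       # eigenvalues in ascending order
v1 = eigvecs[:, -1]                        # first eigenvector = first PC direction

total = (X ** 2).sum(axis=1).mean()        # mean squared distance to the centre
best_w, best_var = None, -np.inf
for angle in np.linspace(0, np.pi, 721):   # candidate lines through the centre
    w = np.array([np.cos(angle), np.sin(angle)])
    proj = X @ w                                              # the "red dots"
    var = (proj ** 2).mean()                                  # their spread
    err = ((X - np.outer(proj, w)) ** 2).sum(axis=1).mean()   # squared red lines
    assert np.isclose(var + err, total)    # Pythagoras: the sum never changes
    if var > best_var:
        best_w, best_var = w, var

print("first eigenvector      :", np.round(v1, 2))      # about (0.81, 0.58), up to sign
print("max-variance direction :", np.round(best_w, 2))  # the same line, up to sign
```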
2
The manuscript "A tutorial on Principal Components Analysis" by Lindsay I Smith really helped me grok PCA. I think it's still too complex for explaining to your grandmother, but it's not bad. You should skip the first few bits on calculating eigenvectors and eigenvalues, etc. Jump down to the example in chapter 3 and look at the graphs. I worked through some toy examples so I could understand PCA vs. OLS linear regression. I'll try to dig those up and post them as well.

edit: You didn't really ask about the difference between Ordinary Least Squares (OLS) and PCA, but since I dug up my notes I did a blog post about it. The very short version is that OLS of y ~ x minimizes error perpendicular to the independent axis like this (yellow lines are examples of two errors): If you were to regress x ~ y (as opposed to y ~ x in the first example) it would minimize error like this: and PCA effectively minimizes error orthogonal to the model itself, like so:

More importantly, as others have said, in a situation where you have a WHOLE BUNCH of independent variables, PCA helps you figure out which linear combinations of these variables matter the most. The examples above just help visualize what the first principal component looks like in a really simple case. In my blog post I have the R code for creating the above graphs and for calculating the first principal component. It might be worth playing with to build your intuition around PCA. I tend to not really own something until I write code that reproduces it.
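The blog post and its R code are not reproduced here, but the three fitting conventions are easy to compare on simulated data. The sketch below (hypothetical data, numpy only, not the original code) computes the slope of OLS for y ~ x, the slope implied by OLS for x ~ y, and the direction of the first principal component, which is the line whose errors are measured orthogonally to the line itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical correlated data; only the three fitting conventions matter here.
x = rng.normal(size=500)
y = 0.8 * x + rng.normal(scale=0.5, size=500)
x, y = x - x.mean(), y - y.mean()

# OLS of y ~ x minimizes vertical errors.
slope_y_on_x = (x @ y) / (x @ x)

# OLS of x ~ y minimizes horizontal errors; re-express it as a slope in the x-y plane.
slope_x_on_y = (x @ y) / (y @ y)
slope_x_on_y_line = 1.0 / slope_x_on_y

# PCA (total least squares) minimizes errors orthogonal to the line itself.
C = np.cov(np.stack([x, y]))
v1 = np.linalg.eigh(C)[1][:, -1]          # first principal direction
slope_pca = v1[1] / v1[0]

print(f"y ~ x slope        : {slope_y_on_x:.3f}")
print(f"x ~ y implied slope: {slope_x_on_y_line:.3f}")
print(f"PCA slope          : {slope_pca:.3f}")   # falls between the other two here
```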
3
Let's do (2) first. PCA fits an ellipsoid to the data. An ellipsoid is a multidimensional generalization of distorted spherical shapes like cigars, pancakes, and eggs. These are all neatly described by the directions and lengths of their principal (semi-)axes, such as the axis of the cigar or egg or the plane of the pancake. No matter how the ellipsoid is turned, the eigenvectors point in those principal directions and the eigenvalues give you the lengths. The smallest eigenvalues correspond to the thinnest directions having the least variation, so ignoring them (which collapses them flat) loses relatively little information: that's PCA.

(1) Apart from simplification (above), we have needs for pithy description, visualization, and insight. Being able to reduce dimensions is a good thing: it makes it easier to describe the data and, if we're lucky enough to reduce them to three or fewer, lets us draw a picture. Sometimes we can even find useful ways to interpret the combinations of data represented by the coordinates in the picture, which can afford insight into the joint behavior of the variables.

The figure shows some clouds of $200$ points each, along with ellipsoids containing 50% of each cloud and axes aligned with the principal directions. In the first row the clouds have essentially one principal component, comprising 95% of all the variance: these are the cigar shapes. In the second row the clouds have essentially two principal components, one about twice the size of the other, together comprising 95% of all the variance: these are the pancake shapes. In the third row all three principal components are sizable: these are the egg shapes.

Any 3D point cloud that is "coherent" in the sense of not exhibiting clusters or tendrils or outliers will look like one of these. Any 3D point cloud at all--provided not all the points are coincident--can be described by one of these figures as an initial point of departure for identifying further clustering or patterning. The intuition you develop from contemplating such configurations can be applied to higher dimensions, even though it is difficult or impossible to visualize those dimensions.
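The original figure is not reproduced here, but the cigar/pancake/egg taxonomy is easy to recreate. The sketch below uses made-up axis variances (not the settings behind the figure above): it draws three 3-D clouds and reports the fraction of variance carried by each principal component, which is what distinguishes the three shapes.

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up variances along the three principal axes of each cloud.
shapes = {
    "cigar   (one big axis)":    [9.0, 0.3, 0.2],
    "pancake (two big axes)":    [6.0, 3.0, 0.3],
    "egg     (three big axes)":  [4.0, 3.0, 2.0],
}

for name, axis_vars in shapes.items():
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))               # random rotation
    X = rng.normal(size=(200, 3)) * np.sqrt(axis_vars) @ Q.T   # 200-point cloud

    eigvals = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]  # descending
    print(f"{name}: fraction of variance per component =",
          np.round(eigvals / eigvals.sum(), 2))
```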
4
Hmm, here goes for a completely non-mathematical take on PCA... Imagine you have just opened a cider shop. You have 50 varieties of cider and you want to work out how to allocate them to shelves, so that similar-tasting ciders are put on the same shelf. There are lots of different tastes and textures in cider - sweetness, tartness, bitterness, yeastiness, fruitiness, clarity, fizziness, etc. So what you need to do to put the bottles into categories is answer two questions:

1) What qualities are most important for identifying groups of ciders? e.g. does classifying based on sweetness make it easier to cluster your ciders into similar-tasting groups than classifying based on fruitiness?

2) Can we reduce our list of variables by combining some of them? e.g. is there actually a variable that is some combination of "yeastiness and clarity and fizziness" and which makes a really good scale for classifying varieties?

This is essentially what PCA does. Principal components are variables that usefully explain variation in a data set - in this case, that usefully differentiate between groups. Each principal component is one of your original explanatory variables, or a combination of some of your original explanatory variables.
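To make the second question concrete in code: the sketch below builds a small, entirely made-up table of tasting scores (the feature names, numbers, and "style" factor are invented for illustration) and lets PCA propose the combined scale itself; the weights of the first component show which characteristics get combined.

```python
import numpy as np

rng = np.random.default_rng(3)

# Entirely made-up tasting scores for 50 ciders on six characteristics.
features = ["sweetness", "tartness", "bitterness", "yeastiness", "clarity", "fizziness"]
n = 50
style = rng.normal(size=n)                        # an unobserved "style" of cider
X = np.column_stack([
    style + rng.normal(scale=0.4, size=n),        # sweetness
    -style + rng.normal(scale=0.4, size=n),       # tartness
    rng.normal(size=n),                           # bitterness (unrelated to style)
    0.8 * style + rng.normal(scale=0.4, size=n),  # yeastiness
    0.8 * style + rng.normal(scale=0.4, size=n),  # clarity
    0.8 * style + rng.normal(scale=0.4, size=n),  # fizziness
])

# PCA on standardized variables (i.e. on the correlation matrix).
Z = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
pc1 = eigvecs[:, -1]                              # weights of the first component

for f, w in zip(features, pc1):
    print(f"{f:>10s}: {w:+.2f}")                  # the combined "scale" PCA proposes
print("share of variance explained by PC1:", round(eigvals[-1] / eigvals.sum(), 2))
```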
5
I'd answer in "layman's terms" by saying that PCA aims to fit straight lines to the data points (everyone knows what a straight line is). We call these straight lines "principal components". There are as many principal components as there are variables. The first principal component is the best straight line you can fit to the data. The second principal component is the best straight line you can fit to the errors from the first principal component. The third principal component is the best straight line you can fit to the errors from the first and second principal components, etc., etc.

If someone asks what you mean by "best" or "errors", then this tells you they are not a "layman", so you can go into a bit more technical detail, such as perpendicular errors, not knowing whether the error is in the x- or y-direction, more than 2 or 3 dimensions, etc. Further, if you avoid making reference to OLS regression (which the "layman" probably won't understand either), the explanation is easier.

The eigenvectors and eigenvalues are not needed concepts per se; rather, they happen to be mathematical concepts that already existed. When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix.
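This "fit a line, then fit a line to what is left over" description can be checked directly. The sketch below (arbitrary simulated data, numpy only) extracts the first principal direction, removes that component from the data, and confirms that the best line through the residuals is the second principal direction of the original data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Arbitrary 3-variable data set.
X = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 3))
X -= X.mean(axis=0)

def best_line(data):
    """Direction of the best-fitting straight line through the centred data
    (the first principal component), as a unit vector."""
    _, vecs = np.linalg.eigh(np.cov(data, rowvar=False))
    return vecs[:, -1]

v1 = best_line(X)
residuals = X - np.outer(X @ v1, v1)     # the errors left over after the first line
v2_from_residuals = best_line(residuals)

# Compare with the second eigenvector of the original covariance matrix.
_, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
v2_direct = vecs[:, -2]

print(np.round(v2_from_residuals, 3))
print(np.round(v2_direct, 3))            # same direction, possibly opposite in sign
```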
6
I can give you my own explanation/proof of PCA, which I think is really simple and elegant, and doesn't require anything except basic knowledge of linear algebra. It came out pretty lengthy, because I wanted to write in simple accessible language.

Suppose we have $M$ samples of data from an $n$-dimensional space. Now we want to project this data onto a few lines in the $n$-dimensional space, in a way that retains as much variance as possible (that means the variance of the projected data should be as large a share of the variance of the original data as possible).

Now, let's observe that if we translate (move) all the points by some vector $\beta$, the variance will remain the same, since moving all points by $\beta$ will move their arithmetic mean by $\beta$ as well, and the variance is proportional to $\sum_{i=1}^M \|x_i - \mu\|^2$. Hence we translate all the points by $-\mu$, so that their arithmetic mean becomes $0$, for computational comfort. Let's denote the translated points as $x_i' = x_i - \mu$. Let's also observe that the variance can now be expressed simply as $\sum_{i=1}^M \|x_i'\|^2$.

Now the choice of the line. We can describe any line as the set of points that satisfy the equation $x = \alpha v + w$, for some vectors $v, w$. Note that if we move the line by some vector $\gamma$ orthogonal to $v$, then all the projections onto the line will also be moved by $\gamma$; hence the mean of the projections will be moved by $\gamma$, and hence the variance of the projections will remain unchanged. That means we can move the line parallel to itself without changing the variance of the projections onto it. Again for convenience let's limit ourselves to lines passing through the origin (that is, lines described by $x = \alpha v$).

Alright, now suppose we have a vector $v$ that describes the direction of a line that is a possible candidate for the line we are searching for. We need to calculate the variance of the projections onto the line $\alpha v$. What we will need are the projection points and their mean. From linear algebra we know that in this simple case the (signed) length of the projection of $x_i'$ onto $v$ is $\langle x_i', v\rangle/\|v\|_2$. From now on let's limit ourselves to unit vectors $v$. That means we can write the length of the projection of point $x_i'$ onto $v$ simply as $\langle x_i', v\rangle$.

In some of the previous answers someone said that PCA minimizes the sum of squares of distances from the chosen line. We can now see it's true, because the sum of squares of the projections plus the sum of squared distances from the chosen line is equal to the sum of squared distances from the point $0$. By maximizing the sum of squares of the projections, we minimize the sum of squared distances, and vice versa. But this was just a thoughtful digression; back to the proof now.

As for the mean of the projections, observe that $v$ is part of some orthogonal basis of our space, and that if we project our data points onto every vector of that basis, their sum will cancel out (because projecting onto the vectors of the basis is just writing the data points in this new orthogonal basis, and the data points sum to zero). So the sum of all the projections onto the vector $v$ (let's call the sum $S_v$) plus the sum of the projections onto the other vectors of the basis (let's call it $S_o$) equals the sum of the data points themselves, which is $0$. But $S_v$ is orthogonal to $S_o$! That means $S_o = S_v = 0$. So the mean of our projections is $0$.

Well, that's convenient, because it means the variance is just the sum of squares of the lengths of the projections, or in symbols $$\sum_{i=1}^M (x_i' \cdot v)^2 = \sum_{i=1}^M v^T x_i'^T x_i' v = v^T \left(\sum_{i=1}^M x_i'^T x_i'\right) v.$$ Well, well, suddenly the covariance matrix popped out. Let's denote it simply by $X$. It means we are now looking for a unit vector $v$ that maximizes $v^T X v$, where $X$ is a positive semi-definite matrix.

Now, let's take the eigenvectors and eigenvalues of the matrix $X$, and denote them by $e_1, e_2, \dots, e_n$ and $\lambda_1, \dots, \lambda_n$ respectively, such that $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_n$. If the eigenvalues are all distinct, the eigenvectors form an orthonormal basis; if some of them coincide, we choose the eigenvectors so that they still form an orthonormal basis.

Now let's calculate $v^T X v$ for an eigenvector $e_i$. We have $$e_i^T X e_i = e_i^T (\lambda_i e_i) = \lambda_i (\|e_i\|_2)^2 = \lambda_i.$$ Pretty good: this gives us $\lambda_1$ for $e_1$. Now let's take an arbitrary unit vector $v$. Since the eigenvectors form an orthonormal basis, we can write $v = \sum_{i=1}^n e_i \langle v, e_i \rangle$, and we have $\sum_{i=1}^n \langle v, e_i \rangle^2 = 1$. Let's denote $\beta_i = \langle v, e_i \rangle$. Now let's compute $v^T X v$. We rewrite $v$ as a linear combination of the $e_i$, and get: $$\left(\sum_{i=1}^n \beta_i e_i\right)^T X \left(\sum_{i=1}^n \beta_i e_i\right) = \left(\sum_{i=1}^n \beta_i e_i\right) \cdot \left(\sum_{i=1}^n \lambda_i \beta_i e_i\right) = \sum_{i=1}^n \lambda_i \beta_i^2 (\|e_i\|_2)^2.$$ The last equality comes from the fact that the eigenvectors were chosen to be pairwise orthogonal, so their dot products are zero. Now, because all eigenvectors are also of unit length, we can write $v^T X v = \sum_{i=1}^n \lambda_i \beta_i^2$, where the $\beta_i^2$ are all non-negative and sum to $1$. That means that the variance of the projection is a weighted mean of the eigenvalues. It is therefore never more than the biggest eigenvalue, and it equals the biggest eigenvalue when $v = e_1$, which is why $e_1$ should be our choice of the first PCA vector.

Now suppose we want another vector. We should choose it from the space orthogonal to the already chosen one, that is, the subspace $\mathrm{lin}(e_2, e_3, \dots, e_n)$. By the same argument we arrive at the conclusion that the best vector to project onto is $e_2$. And so on, and so on... By the way, it should now be clear why the variance retained can be expressed by $\sum_{i=1}^k \lambda_i / \sum_{i=1}^n \lambda_i$.

We should also justify the greedy choice of vectors. When we want to choose $k$ vectors to project onto, it might not be the best idea to first choose the best vector, then the best from what remains, and so on. I'd like to argue that in this case it is justified and makes no difference. Let's denote the $k$ vectors we wish to project onto by $v_1, \dots, v_k$, and let's assume they are pairwise orthogonal. As we already know, the total variance of the projections onto those vectors can be expressed by $$\sum_{j=1}^k \sum_{i=1}^n \lambda_i \beta_{ij}^2 = \sum_{i=1}^n \lambda_i \gamma_i,$$ where $\beta_{ij} = \langle v_j, e_i \rangle$ and $\gamma_i = \sum_{j=1}^k \beta_{ij}^2$.

Now, let's write $e_i$ in some orthonormal basis that includes $v_1, \dots, v_k$. Let's denote the rest of that basis by $u_1, \dots, u_{n-k}$. We can see that $e_i = \sum_{j=1}^k \beta_{ij} v_j + \sum_{j=1}^{n-k} \theta_j u_j$, where $\theta_j = \langle e_i, u_j \rangle$. Because $\|e_i\|_2 = 1$, we have $\sum_{j=1}^k \beta_{ij}^2 + \sum_{j=1}^{n-k} \theta_j^2 = 1$, and hence $\gamma_i \leq 1$ for all $i$. Moreover, since each $v_j$ is a unit vector, $\sum_{i=1}^n \beta_{ij}^2 = 1$ for every $j$, so $\sum_{i=1}^n \gamma_i = k$. Now we have a case similar to the one-vector case: the total variance of the projections is $\sum_{i=1}^n \lambda_i \gamma_i$ with $\gamma_i \leq 1$ and $\sum_{i=1}^n \gamma_i = k$. This is yet another weighted combination of the eigenvalues, and it is certainly no more than $\sum_{i=1}^k \lambda_i$ (put as much weight as allowed on the largest eigenvalues), which is exactly what we get by projecting onto the $k$ eigenvectors corresponding to the biggest eigenvalues.
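If it helps, the key identity in this proof, that $v^T X v$ is a weighted mean of the eigenvalues with weights $\beta_i^2$ summing to one, can be checked numerically. The sketch below uses arbitrary simulated data; the matrix is called C in the code but plays the role of the matrix denoted $X$ above.

```python
import numpy as np

rng = np.random.default_rng(5)

# Arbitrary centred data: M samples in n dimensions.
n, M = 4, 1000
data = rng.normal(size=(M, n)) @ rng.normal(size=(n, n))
data -= data.mean(axis=0)

C = data.T @ data / M                    # plays the role of the matrix X in the proof
eigvals, eigvecs = np.linalg.eigh(C)     # eigenvalues in ascending order

# Variance of the projection onto a random unit vector v ...
v = rng.normal(size=n)
v /= np.linalg.norm(v)
proj_var = v @ C @ v

# ... is the weighted mean of the eigenvalues with weights beta_i^2 = <v, e_i>^2.
beta_sq = (eigvecs.T @ v) ** 2
print(np.isclose(proj_var, beta_sq @ eigvals))   # True
print(np.isclose(beta_sq.sum(), 1.0))            # the weights sum to one
print(bool(proj_var <= eigvals[-1] + 1e-12))     # never exceeds the top eigenvalue
```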
7
Alright, I'll give this a try. A few months back I dug through a good amount of literature to find an intuitive explanation I could explain to a non-statistician. I found the derivations that use Lagrange multipliers the most intuitive.

Let's say we have high-dimensional data - say 30 measurements made on an insect. The bugs have different genotypes and slightly different physical features in some of these dimensions, but with such high-dimensional data it's hard to tell which insects belong to which group. PCA is a technique to reduce dimension by taking linear combinations of the original variables such that:

- each linear combination explains the most variance in the data it can;
- each linear combination is uncorrelated with the others.

Or, in mathematical terms, for $Y_j = a_j' x$ (the linear combination for the $j$th component):

- for $k > j$, $V(Y_k) < V(Y_j)$ (the first components explain more variation);
- $a_k' a_j = 0$ (orthogonality).

Finding linear combinations that satisfy these constraints leads us to eigenvalues. Why? I recommend checking out the book An Introduction to Multivariate Data Analysis for the full derivation (p. 50), but the basic idea is successive optimization problems (maximizing variance), constrained such that $a'a = 1$ for the coefficients $a$ (to prevent the case when the variance could be infinite) and constrained to make sure the coefficients are orthogonal. This leads to optimization with Lagrange multipliers, which in turn reveals why eigenvalues are used (a small numerical check of this appears below). I am too lazy to type it out (sorry!) but this PDF goes through the proof pretty well from this point.

I would never try to explain this to my grandmother, but if I had to talk generally about dimension reduction techniques, I'd point to this trivial projection example (not PCA). Suppose you have a Calder mobile that is very complex. Some points in 3-d space are close to each other, others aren't. If we hang this mobile from the ceiling and shine light on it from one angle, we get a projection onto a lower-dimensional plane (a 2-d wall). Now, if this mobile is mainly wide in one direction but skinny in the other direction, we can rotate it to get projections that differ in usefulness. Intuitively, a skinny shape in one dimension projected on a wall is less useful - all the shadows overlap and don't give us much information. However, if we rotate it so the light shines on the wide side, we get a better picture of the reduced-dimension data - points are more spread out. This is often what we want. I think my grandmother could understand that :-)
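Here is the promised numerical check of the Lagrange-multiplier condition (a simulated stand-in for the 30 insect measurements, numpy only, not taken from the book or the PDF). Setting the gradient of $a'\Sigma a - \lambda(a'a - 1)$ to zero gives $\Sigma a = \lambda a$, so the first principal direction satisfies the eigenvalue equation and the multiplier equals the variance it achieves.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated stand-in for 30 measurements on 200 insects.
B = rng.normal(size=(200, 30))
B -= B.mean(axis=0)
Sigma = np.cov(B, rowvar=False)

# The Lagrangian of "maximize a' Sigma a subject to a'a = 1" is
# a' Sigma a - lam * (a'a - 1); its gradient vanishes when Sigma a = lam a,
# i.e. at an eigenvector, and the multiplier lam is the variance achieved.
eigvals, eigvecs = np.linalg.eigh(Sigma)
a1, lam1 = eigvecs[:, -1], eigvals[-1]

print(np.allclose(Sigma @ a1, lam1 * a1))   # stationarity: Sigma a = lam a
print(np.isclose(a1 @ a1, 1.0))             # the constraint a'a = 1 holds
print(np.isclose(a1 @ Sigma @ a1, lam1))    # the multiplier equals the variance

# Sanity check: no other unit vector gives a larger variance.
trials = rng.normal(size=(10_000, 30))
trials /= np.linalg.norm(trials, axis=1, keepdims=True)
print(bool(np.max(np.einsum("ij,jk,ik->i", trials, Sigma, trials)) <= lam1 + 1e-9))
```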
8
Trying to be non-technical... Imagine you have multivariate data, a multidimensional cloud of points. When you compute the covariance matrix of those points you actually (a) center the cloud, i.e. put the origin at the multidimensional mean, so that the coordinate axes now cross in the centre of the cloud, and (b) encode the information about the shape of the cloud and how it is oriented in space by means of the variance-covariance entries. So most of the important information about the shape of the data as a whole is stored in the covariance matrix.

Then you do an eigen-decomposition of that matrix and obtain the list of eigenvalues and the corresponding eigenvectors. Now, the 1st principal component is the new, latent variable which can be displayed as the axis going through the origin and oriented along the direction of the maximal variance (thickness) of the cloud. The variance along this axis, i.e. the variance of the coordinates of all points on it, is the first eigenvalue, and the orientation of the axis in space referenced to the original axes (the variables) is defined by the 1st eigenvector: its entries are the cosines between it and those original axes. The aforementioned coordinates of the data points on the 1st component are the 1st principal component values, or component scores; they are computed as the product of the (centered) data matrix and the eigenvector. "After" the 1st pr. component has been measured it is, so to say, "removed" from the cloud together with all the variance it accounted for, and the dimensionality of the cloud drops by one. Next, everything is repeated with the second eigenvalue and the second eigenvector - the 2nd pr. component is recorded, and then "removed". Etc.

So, once again: eigenvectors are direction cosines for principal components, while eigenvalues are the magnitude (the variance) of the principal components. The sum of all eigenvalues is equal to the sum of the variances on the diagonal of the variance-covariance matrix. If you transfer the "magnitudinal" information stored in the eigenvalues over to the eigenvectors, to add it to the "orientational" information stored therein, you get what is called principal component loadings; these loadings - because they carry both types of information - are the covariances between the original variables and the principal components.

Later P.S. I want especially to stress here the terminological difference between eigenvectors and loadings. Many people and some packages (including some in R) flippantly use the two terms interchangeably. That is bad practice because the objects and their meanings are different. Eigenvectors are the direction cosines, the angle of the orthogonal "rotation" which PCA amounts to. Loadings are eigenvectors inoculated with the information about the variability or magnitude of the rotated data. The loadings are the association coefficients between the components and the variables, and they are directly comparable with the association coefficients computed between the variables - the covariances, correlations or other scalar products on which you base your PCA. Eigenvectors and loadings are similar in that both serve as regression coefficients in predicting the variables by the components (not vice versa!$^1$). Eigenvectors are the coefficients to predict variables by raw component scores. Loadings are the coefficients to predict variables by scaled (normalized) component scores (no wonder: loadings have absorbed the information on the variability; consequently, the components used must be deprived of it). One more reason not to mix up eigenvectors and loadings is that some dimensionality reduction techniques other than PCA - such as some forms of factor analysis - compute loadings directly, bypassing eigenvectors. Eigenvectors are the product of eigen-decomposition or singular-value decomposition; some forms of factor analysis do not use these decompositions and arrive at loadings another way. Finally, it is loadings, not eigenvectors, by which you interpret the components or factors (if you need to interpret them). A loading is about the contribution of a component to a variable: in PCA (or factor analysis) the component/factor loads itself onto the variable, not vice versa. In comprehensive PCA results one should report both eigenvectors and loadings, as shown e.g. here or here. See also about loadings vs eigenvectors.

$^1$ Since the eigenvector matrix in PCA is orthonormal and its inverse is its transpose, we may say that those same eigenvectors are also the coefficients to back-predict the components by the variables. It is not so for loadings, though.
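To make the eigenvector / score / loading distinction above concrete, here is a minimal R sketch; the data set (R's built-in iris measurements) and the variable names are arbitrary stand-ins of mine, not part of the answer itself.

```r
X   <- scale(iris[, 1:4], center = TRUE, scale = FALSE)  # (a) center the cloud
C   <- cov(X)                                            # (b) covariance matrix
eig <- eigen(C)
V   <- eig$vectors                  # eigenvectors: direction cosines of the PCs
lam <- eig$values                   # eigenvalues: variances along the PCs

scores   <- X %*% V                 # component scores
loadings <- V %*% diag(sqrt(lam))   # eigenvectors carrying the magnitude info

# Loadings equal the covariances between the variables and the standardized scores:
all.equal(unname(cov(X, scale(scores))), unname(loadings))   # TRUE
# The eigenvalues add up to the total variance on the diagonal of C:
all.equal(sum(lam), sum(diag(C)))                            # TRUE
```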
9
Making sense of principal component analysis, eigenvectors & eigenvalues
It's easiest to do the maths in 2-D. Every matrix corresponds to a linear transformation. Linear transformations can be visualised by taking a memorable figure on the plane and seeing how that figure is distorted by the linear transform (pic: Flanigan & Kazdan).

Eigenvectors are the stay-the-same vectors. They point in the same direction after the transform as they used to. (Blue stayed the same, so that direction is an eigenvector of $\tt{shear}$.) Eigenvalues are how much the stay-the-same vectors grow or shrink. (Blue stayed the same size, so the eigenvalue would be $\times 1$.)

PCA rotates your axes to "line up" better with your data (source: weigend.com). PCA uses the eigenvectors of the covariance matrix to figure out how you should rotate the data. Because rotation is a kind of linear transformation, your new dimensions will be sums of the old ones, like $\langle 1 \rangle = 23\% \cdot [1] + 46\% \cdot [2] + 39\% \cdot [3]$.

The reason people who work with real data are interested in eigenvectors and linear transformations is that, in different contexts, "linear" ($f(a\cdot x+b\cdot y)=a\cdot f(x)+b \cdot f(y)$) can cover really interesting stuff. For example, think about what that property means if $+$ and $\cdot$ are given new meanings, or if $a$ and $b$ come from some interesting field, or $x$ and $y$ from some interesting space. PCA itself is another example, the one most familiar to statisticians. Some of the other answers like Freya's give real-world applications of PCA.

$\dagger$ I find it totally surprising that something as simple as "rotation" could do so many things in different areas, like lining up products for a recommender system $\overset{\text{similar how?}}{\longleftarrow\!\!\!-\!\!-\!\!-\!\!-\!\!-\!\!\!\longrightarrow}$ explaining geopolitical conflict. But maybe it's not so surprising if you think about physics, where choosing a better basis (e.g. making the $\mathrm{x}$ axis the direction of motion rather than $42.8\% [\mathrm{x}] \oplus 57.2\% [\mathrm{y}]$) will change inscrutable equations into simple ones.
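If a concrete check of the "stay-the-same" idea helps, here is a tiny R sketch; the shear matrix is just an assumed example of mine, not something from the answer above.

```r
shear <- matrix(c(1, 1,
                  0, 1), nrow = 2, byrow = TRUE)  # maps (x, y) to (x + y, y)

shear %*% c(1, 0)    # (1, 0): the horizontal direction is unchanged -> an eigenvector
shear %*% c(0, 1)    # (1, 1): any other direction gets tilted
eigen(shear)$values  # both 1: the fixed direction is neither stretched nor shrunk
```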
10
Making sense of principal component analysis, eigenvectors & eigenvalues
After the excellent post by JD Long in this thread, I looked for a simple example, and the R code necessary to produce the PCA and then go back to the original data. It gave me some first-hand geometric intuition, and I want to share what I got. The dataset and code can be directly copied and pasted into R from Github.

I used a data set that I found online on semiconductors here, and I trimmed it to just two dimensions - "atomic number" and "melting point" - to facilitate plotting. As a caveat, the idea is purely illustrative of the computational process: PCA is used to reduce more than two variables to a few derived principal components, or to identify collinearity in the case of multiple features. So it wouldn't find much application in the case of two variables, nor would there be a need to calculate eigenvectors of correlation matrices, as pointed out by @amoeba. Further, I truncated the observations from 44 to 15 to ease the task of tracking individual points. The ultimate result was a skeleton data frame (dat1):

```
compounds  atomic.no  melting.point
AIN        10          498.0
AIP        14          625.0
AIAs       23         1011.5
...        ...            ...
```

The "compounds" column indicates the chemical constitution of the semiconductor, and plays the role of row name. This can be reproduced as follows (ready to copy and paste on the R console):

```r
# install.packages('gsheet')
library(gsheet)
dat <- read.csv(url("https://raw.githubusercontent.com/RInterested/DATASETS/gh-pages/semiconductors.csv"))
colnames(dat)[2] <- "atomic.no"
dat1 <- subset(dat[1:15, 1:3])
row.names(dat1) <- dat1$compounds
dat1 <- dat1[, -1]
```

The data were then scaled:

```r
X <- apply(dat1, 2, function(x) (x - mean(x)) / sd(x))
# This centers data points around the mean and standardizes by dividing by the SD.
# It is equivalent to `X <- scale(dat1, center = T, scale = T)`
```

The linear algebra steps followed:

```r
C <- cov(X)    # Covariance matrix of the centered, scaled data
```

$\begin{bmatrix} &\text{at_no}&\text{melt_p}\\ \text{at_no}&1&0.296\\ \text{melt_p}&0.296&1 \end{bmatrix}$

The correlation function cor(dat1) gives the same output on the non-scaled data as the function cov(X) on the scaled data.

```r
lambda <- eigen(C)$values                    # Eigenvalues
lambda_matrix <- diag(2) * eigen(C)$values   # Eigenvalue matrix
```

$\begin{bmatrix} &\color{purple}{\lambda_{\text{PC1}}}&\color{orange}{\lambda_{\text{PC2}}}\\ &1.296422& 0\\ &0&0.7035783 \end{bmatrix}$

```r
e_vectors <- eigen(C)$vectors                # Eigenvectors
```

$\frac{1}{\sqrt{2}}\begin{bmatrix} &\color{purple}{\text{PC1}}&\color{orange}{\text{PC2}}\\ &1&\,\,\,\,\,1\\ &1&-1 \end{bmatrix}$

Since the first eigenvector initially returns as $\sim \small [-0.7,-0.7]$ we choose to change it to $\small [0.7, 0.7]$ to make it consistent with built-in formulas through:

```r
e_vectors[, 1] <- -e_vectors[, 1]
colnames(e_vectors) <- c("PC1", "PC2")
```

The resultant eigenvalues were $\small 1.2964217$ and $\small 0.7035783$. Under less minimalistic conditions, this result would have helped decide which eigenvectors to include (largest eigenvalues). For instance, the relative contribution of the first eigenvalue is $\small 64.8\%$ (computed as eigen(C)$values[1] / sum(eigen(C)$values) * 100), meaning that it accounts for $\sim\small 65\%$ of the variability in the data. The variability in the direction of the second eigenvector is $35.2\%$. This is typically shown on a scree plot depicting the value of the eigenvalues.

We'll include both eigenvectors given the small size of this toy data set example, understanding that excluding one of the eigenvectors would result in dimensionality reduction - the idea behind PCA.

The score matrix was determined as the matrix multiplication of the scaled data (X) by the matrix of eigenvectors (or "rotations"):

```r
score_matrix <- X %*% e_vectors
# Identical to the often found operation: t(t(e_vectors) %*% t(X))
```

The concept entails a linear combination of each entry (row / subject / observation / semiconductor in this case) of the centered (and in this case scaled) data, weighted by the rows of each eigenvector, so that in each of the final columns of the score matrix we'll find a contribution from each variable (column) of the data (the entire X), BUT only the corresponding eigenvector will have taken part in the computation (i.e. the first eigenvector $[0.7, 0.7]^{T}$ will contribute to $\text{PC}\,1$ (Principal Component 1) and $[0.7, -0.7]^{T}$ to $\text{PC}\,2$). Therefore each eigenvector will influence each variable differently, and this will be reflected in the "loadings" of the PCA. In our case, the negative sign in the second component of the second eigenvector $[0.7, - 0.7]$ will change the sign of the melting point values in the linear combinations that produce PC2, whereas the effect of the first eigenvector will be consistently positive.

The eigenvectors are scaled to $1$:

```
> apply(e_vectors, 2, function(x) sum(x^2))
PC1 PC2
  1   1
```

whereas the loadings are the eigenvectors scaled by the eigenvalues (despite the confusing terminology in the built-in R functions displayed below). Consequently, the loadings can be calculated as:

```
> e_vectors %*% lambda_matrix
          [,1]      [,2]
[1,] 0.9167086  0.497505
[2,] 0.9167086 -0.497505

> prcomp(X)$rotation %*% diag(princomp(covmat = C)$sd^2)
                   [,1]      [,2]
atomic.no     0.9167086  0.497505
melting.point 0.9167086 -0.497505
```

It is interesting to note that the rotated data cloud (the score plot) will have variance along each component (PC) equal to the eigenvalues:

```
> apply(score_matrix, 2, function(x) var(x))
      PC1       PC2
1.2964217 0.7035783

> lambda
[1] 1.2964217 0.7035783
```

Utilizing the built-in functions, the results can be replicated:

```r
# For the SCORE MATRIX:
prcomp(X)$x
# or...
princomp(X)$scores    # The signs of the PC 1 column will be reversed.

# and for the EIGENVECTOR MATRIX:
prcomp(X)$rotation
# or...
princomp(X)$loadings

# and for the EIGENVALUES:
prcomp(X)$sdev^2
# or...
princomp(covmat = C)$sd^2
```

Alternatively, the singular value decomposition ($\text{U}\Sigma \text{V}^\text{T}$) method can be applied to manually calculate PCA; in fact, this is the method used in prcomp(). The steps can be spelled out as:

```r
svd_scaled_dat <- svd(scale(dat1))
eigen_vectors  <- svd_scaled_dat$v
eigen_values   <- (svd_scaled_dat$d / sqrt(nrow(dat1) - 1))^2
scores         <- scale(dat1) %*% eigen_vectors
```

The result is shown below: first, the distances from the individual points to the first eigenvector, and on a second plot, the orthogonal distances to the second eigenvector. If instead we plotted the values of the score matrix (PC1 and PC2) - no longer "melting.point" and "atomic.no", but really a change of basis of the point coordinates with the eigenvectors as basis - these distances would be preserved, but would naturally become perpendicular to the xy axes.

The trick was now to recover the original data. The points had been transformed through a simple matrix multiplication by the eigenvectors. Now the data was rotated back by multiplying by the inverse of the matrix of eigenvectors, with a resultant marked change in the location of the data points. For instance, notice the change in the pink dot "GaN" in the left upper quadrant (black circle in the left plot, below), returning to its initial position in the left lower quadrant (black circle in the right plot, below). Now we finally had the original data restored in this "de-rotated" matrix.

Beyond the change of coordinates in the rotation of the data in PCA, the results must be interpreted, and this process tends to involve a biplot, on which the data points are plotted with respect to the new eigenvector coordinates, and the original variables are superimposed as vectors. It is interesting to note the equivalence in the position of the points between the plots in the second row of rotation graphs above ("Scores with xy Axis = Eigenvectors") (to the left in the plots that follow) and the biplot (to the right). The superimposition of the original variables as red arrows offers a path to the interpretation of PC1 as a vector in the direction of (positively correlated with) both atomic no and melting point, and of PC2 as a component along increasing values of atomic no but negatively correlated with melting point, consistent with the values of the eigenvectors:

```
> PCA <- prcomp(dat1, center = T, scale = T)
> PCA$rotation
                    PC1        PC2
atomic.no     0.7071068  0.7071068
melting.point 0.7071068 -0.7071068
```

As a final point, it is legitimate to wonder if, at the end of the day, we are simply doing ordinary least squares in a different way, using the eigenvectors to define hyperplanes through data clouds, because of the obvious similarities. To begin with, the objective in the two methods is different: PCA is meant to reduce dimensionality to understand the main drivers in the variability of datasets, whereas OLS is intended to extract the relationship between a "dependent" variable and one or multiple explanatory variables. In the case of a single explanatory variable, as in the toy example in this post, we can also superimpose the OLS regression line on the data cloud to note how OLS reduces the sum of vertical squared distances from the fitted line to the points, as opposed to the orthogonal distances to the eigenvector in question.

In OLS the squared residuals are the hypotenuses of the perpendiculars from the points to the OLS line, and hence result in a higher sum of squared residuals (12.77) than the sum of the squared perpendicular segments from the points to the OLS line (11.74). The latter is what PCA is optimized for: (Wikipedia) "PCA quantifies data representation as the aggregate of the L2-norm of the data point projections into the subspace, or equivalently the aggregate Euclidean distance of the original points from their subspace-projected representations." This subspace has the orthogonal eigenvectors of the covariance matrix as a basis. The proof of this statement can be found here, together with the pertinent credit to Marc Deisenroth. Naturally, the fact that the dataset has been scaled and centered at zero reduces the intercept of the OLS to zero, and the slope to the correlation between the variables, 0.2964.

This interactive tutorial by Victor Powell gives immediate feedback as to the changes in the eigenvectors as the data cloud is modified.

All the code related to this post can be found here.
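Since the back-rotation ("de-rotation") step above is only described in words, here is a minimal sketch of it; this is not the author's exact code, and it assumes the X, e_vectors, score_matrix and dat1 objects defined earlier in the answer.

```r
# e_vectors is orthonormal, so its inverse is simply its transpose:
X_restored <- score_matrix %*% t(e_vectors)      # back to the scaled data
all.equal(unname(X_restored), unname(X))         # TRUE

# Undoing the scaling and centering recovers dat1 itself:
dat1_restored <- sweep(sweep(X_restored, 2, apply(dat1, 2, sd), "*"),
                       2, colMeans(dat1), "+")
all.equal(unname(dat1_restored), unname(as.matrix(dat1)))   # TRUE
```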
11
Making sense of principal component analysis, eigenvectors & eigenvalues
OK, a totally non-math answer: If you have a bunch of variables on a bunch of subjects and you want to reduce it to a smaller number of variables on those same subjects, while losing as little information as possible, then PCA is one tool to do this. It differs from factor analysis, although they often give similar results, in that FA tries to recover a small number of latent variables from a larger number of observed variables that are believed to be related to the latent variables.
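For readers who want to see the two tools next to each other in code, here is a small hedged sketch; the data set (R's built-in mtcars) and the choice of two factors are arbitrary illustrations of mine, not part of the answer above.

```r
# PCA: new composite variables (components) that keep as much variance as possible
pca <- prcomp(mtcars, scale. = TRUE)
summary(pca)$importance[, 1:3]   # variance retained by the first few components

# Factor analysis: a few latent variables assumed to drive the observed ones
fa <- factanal(mtcars, factors = 2)
fa$loadings                      # how the observed variables relate to the latent factors
```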
12
Making sense of principal component analysis, eigenvectors & eigenvalues
From someone who has used PCA a lot (and tried to explain it to a few people as well), here's an example from my own field of neuroscience. When we're recording from a person's scalp we do it with 64 electrodes. So, in effect, we have 64 numbers in a list that represent the voltage given off by the scalp. Now, since we record with microsecond precision, a 1-hour experiment (often they are 4 hours) gives us 1e6 samples per second * 3,600 seconds = 3,600,000,000 time points at which a voltage was recorded at each electrode, so that we now have a 3,600,000,000 x 64 matrix. Since a major assumption of PCA is that your variables are correlated, it is a great technique to reduce this ridiculous amount of data to an amount that is tractable. As has been said numerous times already, the eigenvalues represent the amount of variance explained by the principal components, each component being a weighted combination of the original variables (the electrode columns). So now we can say, "Oh, well this combination of electrodes, at these time points, is what we should focus on for further analysis, because that is where the most change is happening". Hope this helps. Loving those regression plots!
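As a toy version of this workflow - purely illustrative, with simulated numbers standing in for real EEG and a much smaller matrix - one might do something like:

```r
set.seed(1)
n_time <- 1000; n_elec <- 64
# Simulate correlated "electrodes": a few shared underlying signals plus noise
signals <- matrix(rnorm(n_time * 3), n_time, 3)
mixing  <- matrix(rnorm(3 * n_elec), 3, n_elec)
eeg     <- signals %*% mixing + matrix(rnorm(n_time * n_elec, sd = 0.5), n_time, n_elec)

pca <- prcomp(eeg, center = TRUE, scale. = FALSE)
# Variance explained by the first few components; with 3 underlying signals,
# most of it should be concentrated there:
head(summary(pca)$importance["Proportion of Variance", ], 5)
```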
13
Making sense of principal component analysis, eigenvectors & eigenvalues
I might be a bad person to answer this because I'm the proverbial grandmother who has had the concept explained to me and not much more, but here goes: Suppose you have a population. A large portion of the population is dropping dead of heart attacks. You are trying to figure out what causes the heart attacks. You have two pieces of data: height and weight. Now, it's clear that there's SOME relationship between weight and heart attacks, but the correlation isn't really strong. There are some heavy people who have a lot of heart attacks, but some don't. Now, you do a PCA, and it tells you that weight divided by height ('body mass') is a much more likely predictor of heart attacks than either weight or height alone, because, lo and behold, the "reality" is that it's body mass that causes the heart attacks. Essentially, you do PCA because you are measuring a bunch of things and you don't really know if those are really the principal components or if there's some deeper underlying component that you didn't measure. [Please feel free to edit this if it's completely off base. I really don't understand the concept any more deeply than this].
14
Making sense of principal component analysis, eigenvectors & eigenvalues
This answer gives an intuitive and non-mathematical interpretation: The PCA will give you a set of orthogonal vectors within a high-dimensional point cloud. The order of the vectors is determined by the information conveyed after projecting all points onto the vectors. In other words: the first principal component vector will tell you the most about the point cloud after projecting all points onto it. This is an intuitive interpretation, of course. Look at this ellipsoid (follow the link for a 3D model): If you had to choose one vector, forming a one-dimensional sub-space onto which the points of the ellipsoid would be projected, which one would you choose because it conveys the most information about the original set in 3 dimensions? I guess the red one, along the longest axis. And this is actually the calculated 1st principal component! Which one next? I would choose the blue one, along the next longest axis. Typically you want to project a set of points from a high-dimensional space onto a two-dimensional plane or into a three-dimensional space. http://www.joyofdata.de/blog/illustration-of-principal-component-analysis-pca/
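A quick numerical version of this thought experiment, with an assumed simulated ellipsoid-shaped cloud (the numbers are mine, not from the answer above):

```r
set.seed(42)
# An elongated 3-D cloud: long axis sd = 5, medium sd = 2, short sd = 0.5
cloud <- cbind(rnorm(500, sd = 5), rnorm(500, sd = 2), rnorm(500, sd = 0.5))
pca <- prcomp(cloud)

# Variance of the points after projecting onto each principal axis: projecting
# onto the first axis (the "red", longest one) preserves the most information.
apply(pca$x, 2, var)
```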
15
Making sense of principal component analysis, eigenvectors & eigenvalues
Here's one for Grandma: In our town there are streets going north and south, some going east and west, and even some going northwest and southeast, some NE to SW. One day a guy measures all the traffic on all the streets. He finds that the most traffic is going diagonally, from northwest to southeast; the second biggest amount is perpendicular to this, going northeast to southwest; and all the rest is fairly small. So he draws a big square and puts a big line left to right and says that is the NW-to-SE direction, then draws another line vertically up and down through the middle. He says that's the second most crowded direction for traffic (NE to SW). The rest is small, so it can be ignored. The left-right line is the first eigenvector and the up-down line is the second eigenvector. The total number of cars going left and right is the first eigenvalue and the number going up and down is the second eigenvalue.
16
Making sense of principal component analysis, eigenvectors & eigenvalues
Although there are many examples given to provide an intuitive understanding of PCA, that fact can almost make it more difficult to grasp at the outset - at least it was for me. "What is the one thing about PCA that all these different examples from different disciplines have in common?" What helped me understand intuitively were a couple of math parallels, since it's apparent the maths is the easy part for you, although this doesn't help explain it to your grandmother...

Think of a matrix factorization problem, trying to get $$|| XB - Y || \approx 0.$$ Or in English: break down your data $Y$ into two other matrices which will somehow shed light on the data. If those two matrices work well, then the error between their product and $Y$ shouldn't be too large. PCA gives you a useful factorization of $Y$, for all the reasons other people have said. It breaks the matrix of data you have, $Y$, down into two other useful matrices. Writing the singular value decomposition as $Y = USV^\mathrm{T}$, $X$ would be a matrix whose columns are the first $k$ PCs you kept, and $B$ is a matrix giving you a recipe to reconstruct the columns of matrix $Y$ using the columns of $X$: $B$ is the first $k$ rows of $S$ times all of $V^\mathrm{T}$. The values on the diagonal of $S$ basically weight which PCs are most important. That is how the math explicitly tells you which PCs are the most important: they are each weighted by the singular values on the diagonal of $S$ (whose squares are proportional to the eigenvalues). Then, the matrix $V^\mathrm{T}$ tells you how the PCs combine.

I think people gave many intuitive examples, so I just wanted to share that. Seeing it helped me understand how it works. There is a world of interesting algorithms and methods which do similar things to PCA. Sparse coding is a subfield of machine learning which is all about factoring a matrix $A$ into two other useful and interesting ones that reflect patterns in $A$.
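Here is a small hedged sketch of this factorization view; the data are simulated and $k = 2$ is just an illustrative choice, with the notation matching the $Y = USV^\mathrm{T}$ decomposition above.

```r
set.seed(7)
# Simulated data with some shared structure across its 6 columns
Y <- scale(matrix(rnorm(100 * 6), 100, 6) %*% matrix(rnorm(36), 6, 6), scale = FALSE)

s <- svd(Y)                                   # Y = U S V^T
k <- 2                                        # keep the first k principal components

X_pcs <- s$u[, 1:k, drop = FALSE]             # columns: the first k PCs (left singular vectors)
B     <- diag(s$d[1:k], k) %*% t(s$v[, 1:k])  # first k rows of S times V^T: the "recipe"

# Relative error of the rank-k reconstruction; small when the first k PCs dominate
norm(X_pcs %*% B - Y, type = "F") / norm(Y, type = "F")
```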
17
Making sense of principal component analysis, eigenvectors & eigenvalues
I'll give a non-mathy response and a more detailed birds-eye view of the motivation-through-math in the second part.

Non-Mathy: The non-math explanation is that PCA helps with high-dimensional data by letting you see in which directions your data has the most variance. These directions are the principal components. Once you have this information you can then, in some cases, decide to use the principal components as the meaningful variables themselves, and vastly reduce the dimensionality of your data by only keeping the principal components with the most variance (explanatory power). For example, suppose you give out a political polling questionnaire with 30 questions, each of which can be given a response of 1 (strongly disagree) through 5 (strongly agree). You get tons of responses, and now you have 30-dimensional data and you can't make heads or tails of it. Then in desperation you think to run PCA and discover that 90% of your variance comes from one direction, and that direction does not correspond to any of your axes. After further inspection of the data you then conclude that this new hybrid axis corresponds to the political left-right spectrum, i.e. the democrat/republican spectrum, and go on to look at the more subtle aspects of the data.

Mathy: It sometimes helps to zoom out and look at the mathematical motivation to shed some light on the meaning. There is a special family of matrices which can be transformed into diagonal matrices simply by changing your coordinate axes. Naturally, they are called the diagonalizable matrices, and elegantly enough, the new coordinate axes that are needed to do this are indeed the eigenvectors. As it turns out, the covariance matrix is symmetric and will always be diagonalizable! In this case the eigenvectors are called the principal components, and when you write out the covariance matrix in eigenvector coordinates, the diagonal entries (the only ones left) correspond to the variance in the direction of your eigenvectors. This allows us to know which directions have the most variance. Moreover, since the covariance matrix is diagonal in these coordinates, you have cleverly eliminated all correlation between your variables.

As is common in practical applications, we assume that our variables are normally distributed, and so it's quite natural to try and change our coordinates to see the simplest picture. By knowing your principal components and their respective eigenvalues (variances) you'll be able to reduce the dimensionality of your data if needed and also have a quick general summary of where the variation in your data lies. But at the end of the day, the root of all this desirability comes from the fact that diagonal matrices are way easier to deal with in comparison to their messier, more general cousins.
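To see the diagonalization claim concretely, here is a short sketch; the data set (R's built-in mtcars) is an arbitrary choice for illustration.

```r
C <- cov(scale(mtcars))   # a symmetric covariance (here: correlation) matrix
V <- eigen(C)$vectors     # the new coordinate axes: eigenvectors / principal components

# Rewriting C in eigenvector coordinates makes it diagonal; the diagonal entries
# are the variances along the principal components (the eigenvalues).
D <- t(V) %*% C %*% V
round(D[1:3, 1:3], 10)                 # off-diagonal entries vanish (up to rounding)
all.equal(diag(D), eigen(C)$values)    # TRUE
```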
18
Making sense of principal component analysis, eigenvectors & eigenvalues
Here is a math answer: the first principal component is the longest dimension of the data. Look at it and ask: where is the data widest? That's the first component. The next component is the one perpendicular to it. So a cigar of data has a length and a width. It makes sense for anything that is sort of oblong.
19
Making sense of principal component analysis, eigenvectors & eigenvalues
The way I understand principal components is this: data with multiple variables (height, weight, age, temperature, wavelength, percent survival, etc.) can be pictured as points in a multi-dimensional space. Now if you want to somehow make sense of this data, you might want to know which 2D planes (cross-sections) of it contain the most information for a given suite of variables. These 2D planes are spanned by the principal components, each of which contains a proportion of every variable. Think of principal components as variables themselves, with composite characteristics from the original variables (such a new variable could be described as being part weight, part height, part age, etc.). When you plot one principal component (X) against another (Y), you are building a 2D map that geometrically describes the correlations between the original variables. Now the useful part: since each subject (observation) being compared has a value for each variable, the subjects (observations) also land somewhere on this X-Y map. Their location is based on the relative contributions of each underlying variable (i.e., one observation may be heavily affected by age and temperature, while another may be more affected by height and weight). This map graphically shows us the similarities and differences between subjects and explains these similarities/differences in terms of which variables characterize them the most.
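As an illustrative sketch (not part of the original answer), here is how such an X-Y map could be built with scikit-learn: the transformed scores place each observation on the map, and the loadings (components_) show how much of each original variable goes into each axis. The variable names and simulated data are my own, chosen only for the example.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 200
age = rng.uniform(20, 80, n)
height = rng.normal(170, 10, n)
weight = 0.9 * height - 80 + rng.normal(0, 5, n)       # correlated with height
temperature = 36.5 + 0.01 * age + rng.normal(0, 0.2, n)
X = np.column_stack([age, height, weight, temperature])
names = ["age", "height", "weight", "temperature"]

Xz = StandardScaler().fit_transform(X)                 # put variables on one scale
pca = PCA(n_components=2).fit(Xz)
scores = pca.transform(Xz)                             # (n, 2): each subject's X-Y position

print("explained variance ratio:", pca.explained_variance_ratio_)
for i, comp in enumerate(pca.components_):
    mix = ", ".join(f"{w:+.2f}*{v}" for w, v in zip(comp, names))
    print(f"PC{i + 1} = {mix}")                        # loadings: the "composite" recipe
print("first subject's map coordinates:", scores[0])
```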
Making sense of principal component analysis, eigenvectors & eigenvalues
I view PCA as a geometric tool. If you are given a bunch of points in 3-space which are pretty much all on a straight line, and you want to figure out the equation of that line, you get it via PCA (take the first component). If you have a bunch of points in 3-space which are mostly planar, and want to discover the equation of that plane, do it via PCA (take the least significant component vector and that should be normal to the plane).
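A small NumPy sketch of this geometric use (my own illustration, not the answerer's code): for nearly planar points in 3-space, the least significant component is approximately the plane's normal, while the first component of nearly collinear points gives the line's direction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Points scattered around the plane x + 2y - z = 0 (normal ~ [1, 2, -1]).
u, v = rng.normal(size=(2, 300))
plane_pts = np.column_stack([u, v, u + 2 * v]) + 0.01 * rng.normal(size=(300, 3))

centered = plane_pts - plane_pts.mean(axis=0)
# Right singular vectors of the centred data = principal directions.
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
normal = Vt[-1]                        # least significant component ~ plane normal
print("estimated normal:", normal / normal[0])            # compare with [1, 2, -1]

# Points scattered around a line with direction [3, 1, 2].
t = rng.normal(size=300)
line_pts = np.outer(t, [3.0, 1.0, 2.0]) + 0.01 * rng.normal(size=(300, 3))
_, _, Vt_line = np.linalg.svd(line_pts - line_pts.mean(axis=0), full_matrices=False)
direction = Vt_line[0]                 # first component ~ line direction
print("estimated direction:", direction / direction[1])   # compare with [3, 1, 2]
```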
Making sense of principal component analysis, eigenvectors & eigenvalues
Why eigenvalues/eigenvectors? When doing PCA, you want to compute an orthogonal basis by maximizing the projected variance along each basis vector. Having computed the previous basis vectors, you want the next one to be:
- orthogonal to the previous ones,
- of norm 1,
- maximizing the projected variance, i.e. with maximal covariance norm.
This is a constrained optimization problem, and the Lagrange multipliers (see the Wikipedia page for the geometric intuition) tell you that the gradients of the objective (projected variance) and of the constraint (unit norm) should be "parallel" at the optimum. This is the same as saying that the next basis vector should be an eigenvector of the covariance matrix. The best choice at each step is to pick the eigenvector with the largest eigenvalue among the remaining ones.
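To make the "maximize projected variance under a unit-norm constraint" view concrete, here is a small sketch (my own, not the answerer's) that finds the principal directions one at a time by power iteration on the covariance matrix, deflating after each direction; it assumes nothing beyond NumPy.

```python
import numpy as np

def leading_eigenvector(C, n_iter=1000):
    """Power iteration: repeatedly apply C and renormalize to unit norm."""
    v = np.random.default_rng(0).normal(size=C.shape[0])
    for _ in range(n_iter):
        v = C @ v
        v /= np.linalg.norm(v)           # enforce the unit-norm constraint
    return v

def pca_directions(X, k):
    """First k principal directions via sequential variance maximization."""
    Xc = X - X.mean(axis=0)
    C = np.cov(Xc, rowvar=False)
    directions = []
    for _ in range(k):
        v = leading_eigenvector(C)
        directions.append(v)
        lam = v @ C @ v                  # projected variance along v (the eigenvalue)
        C = C - lam * np.outer(v, v)     # deflate: remove this direction
    return np.array(directions)

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 1.0], [0.0, 0.5]])
W = pca_directions(X, 2)
print(W)                                 # rows ~ eigenvectors of the covariance matrix
print(np.round(W @ W.T, 6))              # ~ identity: the directions are orthonormal
```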
Making sense of principal component analysis, eigenvectors & eigenvalues
Some time back I tried to understand this PCA algorithm, and I wanted to make a note about eigenvectors and eigenvalues. The document I read stated that the purpose of EVs is to convert a large-sized model into a very small-sized model. For example, instead of first constructing the full-sized bridge and then carrying out experiments and tests on it, it is possible to use EVs to create a very small bridge where all the factors/quantities are reduced by the same margin, and the actual results of tests and stress-related tests carried out on it can be scaled up appropriately as needed for the original model. In a way, EVs help to create abstracts of the original. To me, this explanation had profound meaning for what I was trying to do! Hope it helps you too!
Making sense of principal component analysis, eigenvectors & eigenvalues
Imagine grandma has just taken her first photos and movies on the digital camera you gave her for Christmas. Unfortunately she drops her right hand as she pushes down on the button for photos, and she shakes quite a bit during the movies too. She notices that the people, trees, fences, buildings, doorways, furniture, etc. aren't straight up and down, aren't vertical, and that the floor, the ground, the sea, the horizon aren't quite horizontal, and the movies are rather shaky as well. She asks if you can help her fix them, all 3000 holiday photos and about 100 videos at home and at the beach (she's Australian), opening presents, walking in the country. She's got this photo software that allows you to do that, she says. You tell her that that would take days, and won't work on the videos anyway, but you know techniques called PCA and ICA that might help. You explain that your research actually involves just this kind of rotation of data into the natural dimensions, that these techniques find the most important directions in the data (the photo in this case) and rotate so that the most important one is horizontal and the second one is vertical (and it can even go on for more dimensions we can't imagine very well, although time is also a dimension in the movies).
--
Technical aside. In fact, you could probably earn your PhD doing this for her, and there is an important paper by Bell and Sejnowski (1997) about independent components of images corresponding to edges. To relate this to PCA: ICA uses PCA or SVD as a first step to reduce the dimensionality and obtain initial approximations, but then improves on them in a way that takes into account not only second-order error (SSE), as PCA does, but higher-order errors - if it's true ICA, all higher orders, although many algorithms confine themselves to 3rd or 4th order. The low-order PCA components do tend to be influenced strongly by the horizontals and verticals. Dealing with camera motion for the movies can also make use of PCA/ICA. Both for the 2D photos and the 2½D movies you need a couple of representational tricks to achieve this. Another application you could explain to grandma is eigenfaces - higher-order eigenvectors can approximate the '7 basic emotions' (the average face for each of them and the 'scaled rotation' or linear combination to do that averaging), but often we find components that are sex- and race-related, and some might distinguish individuals or individual features (glasses, beard, etc.). This is what happens if you have few photos of any one individual and many emotions/expressions, but you get a different bias if you have many faces with neutral expressions. Using ICA instead of PCA doesn't really seem to help much for basic emotions, but Bartlett and Sejnowski (1997) showed it found useful features for face recognition.
Making sense of principal component analysis, eigenvectors & eigenvalues
Basically, PCA finds new variables which are linear combinations of the original variables, such that in the new space the data has fewer dimensions. Think of a data set consisting of points in 3 dimensions on the surface of a flat plate held up at an angle. In the original x, y, z axes you need 3 dimensions to represent the data, but with the right linear transformation you only need 2. Basically what @Joel said, but only linear combinations of the input variables.
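A quick sketch of the flat-plate example (mine, not the answerer's): three-dimensional points lying on a tilted plane need only two principal components, which shows up as a near-zero explained-variance share for the third component.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
u, v = rng.normal(size=(2, 1000))
# A flat plate held at an angle: z is a fixed linear combination of x and y.
X = np.column_stack([u, v, 0.5 * u - 0.8 * v]) + 0.001 * rng.normal(size=(1000, 3))

pca = PCA(n_components=3).fit(X)
print(np.round(pca.explained_variance_ratio_, 4))
# The third ratio is essentially zero: two components capture almost all the
# variance, so the data can be represented with 2 coordinates after the rotation.
```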
Making sense of principal component analysis, eigenvectors & eigenvalues
I think that everyone starts explaining PCA from the wrong end: from eigenvectors. My answer starts at the right place: the coordinate system. Eigenvectors, and the eigenproblem in general, are the mathematical tool used to address the real issue at hand, which is a wrong coordinate system. I'll explain. Let's start with a line. What is a line? It's a one-dimensional object, so you need only one dimension to move from one point to another. On a plane, though, you attach two coordinates to every point of a line. That is because, with respect to the line itself, the coordinate system is chosen arbitrarily. Now picture the same line drawn in a different, rotated coordinate system. Does it look like a different object than the previous line? It may, if you keep looking at the coordinates. However, if you forget about the coordinate system and just look at it as a geometrical object in space, then the two lines are identical! The coordinate system, I would argue, does not reflect the inner one-dimensional nature of the line. If I always put the origin of my Cartesian coordinate system on the line, and turned it so that its x-axis lay along the line, then I would not need the y-axis anymore! All my points are on one axis, because a line is a one-dimensional object. That's where PCA explanations should start. The eigenproblem is the tool that does the rotation which I described, and de-meaning of the variables puts the origin onto the line. PCA helps reveal the true dimensions of the data, so long as the relationships between the variables are linear.
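Here is a tiny sketch (mine) of that "de-mean, then rotate" idea: for points that lie on a line in the plane, subtracting the mean and rotating onto the first principal direction leaves a second coordinate that is essentially zero, so the y-axis is no longer needed.

```python
import numpy as np

rng = np.random.default_rng(5)
t = rng.normal(size=200)
# Points on the line y = 2x + 3, expressed in an arbitrary coordinate system.
pts = np.column_stack([t, 2 * t + 3])

centered = pts - pts.mean(axis=0)              # de-meaning: origin moves onto the line
eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
rotated = centered @ eigvecs[:, ::-1]          # rotate: first axis along the line

print(np.round(rotated[:5], 6))
# The second column is ~0 for every point: in the natural coordinate system
# the "2-D" data is really one-dimensional.
```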
Making sense of principal component analysis, eigenvectors & eigenvalues
Remember that an eigenvector is a vector whose image under the transformation is parallel to the vector itself. Thus an eigenvector with a large eigenvalue means that the eigenvector has a high degree of 'parallelity' to the data, meaning that you can represent the data with this vector only and expect a low error in the new representation. If you pick additional eigenvectors with lower eigenvalues, you will be able to represent more details of the data, because you'll be representing other 'parallelities' - which are not as prominent as the first one because of their lower eigenvalues.
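The "low error with few eigenvectors" claim can be checked with a short sketch (my illustration, using scikit-learn): keep the top k components, reconstruct the data from them, and watch the reconstruction error shrink as k grows.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
# 5-D data that is mostly driven by 2 latent factors plus small noise.
factors = rng.normal(size=(300, 2))
loadings = rng.normal(size=(2, 5))
X = factors @ loadings + 0.05 * rng.normal(size=(300, 5))

for k in range(1, 6):
    pca = PCA(n_components=k).fit(X)
    X_hat = pca.inverse_transform(pca.transform(X))   # reconstruct from k components
    err = np.mean((X - X_hat) ** 2)
    print(f"k={k}: mean squared reconstruction error = {err:.5f}")
# The error drops sharply up to k=2 (the true latent dimension) and is tiny afterwards.
```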
Making sense of principal component analysis, eigenvectors & eigenvalues
PCA is basically a projection of a higher-dimensional space onto a lower-dimensional space while preserving as much information as possible. I wrote a blog post where I explain PCA via the projection of a 3D teapot onto a 2D plane while preserving as much information as possible. Details and the full R code can be found in the post: http://blog.ephorie.de/intuition-for-principal-component-analysis-pca
How to choose the number of hidden layers and nodes in a feedforward neural network?
I realize this question has been answered, but I don't think the extant answer really engages the question beyond pointing to a link generally related to the question's subject matter. In particular, the link describes one technique for programmatic network configuration, but that is not "[a] standard and accepted method" for network configuration.

By following a small set of clear rules, one can programmatically set a competent network architecture (i.e., the number and type of neuronal layers and the number of neurons comprising each layer). Following this schema will give you a competent architecture but probably not an optimal one. Once this network is initialized, you can iteratively tune the configuration during training using a number of ancillary algorithms; one family of these works by pruning nodes based on (small) values of the weight vector after a certain number of training epochs - in other words, eliminating unnecessary/redundant nodes (more on this below).

Every NN has three types of layers: input, hidden, and output. Creating the NN architecture therefore means coming up with values for the number of layers of each type and the number of nodes in each of these layers.

The Input Layer
Simple - every NN has exactly one of them, no exceptions that I'm aware of. The number of neurons comprising this layer is completely and uniquely determined once you know the shape of your training data: it is equal to the number of features (columns) in your data. Some NN configurations add one additional node for a bias term.

The Output Layer
Like the input layer, every NN has exactly one output layer. Determining its size (number of neurons) is simple; it is completely determined by the chosen model configuration. Is your NN going to run in Machine Mode or Regression Mode (the ML convention of using a term that is also used in statistics but assigning a different meaning to it is very confusing)? Machine mode returns a class label (e.g., "Premium Account"/"Basic Account"); regression mode returns a value (e.g., price). If the NN is a regressor, then the output layer has a single node. If the NN is a classifier, then it also has a single node, unless softmax is used, in which case the output layer has one node per class label in your model.

The Hidden Layers
Those few rules set the number of layers and size (neurons/layer) for both the input and output layers. That leaves the hidden layers. How many hidden layers? Well, if your data is linearly separable (which you often know by the time you begin coding a NN), then you don't need any hidden layers at all. Of course, you don't need an NN to resolve your data either, but it will still do the job. Beyond that, as you probably know, there's a mountain of commentary on the question of hidden layer configuration in NNs (see the insanely thorough and insightful NN FAQ for an excellent summary of that commentary). One issue within this subject on which there is a consensus is the performance difference from adding additional hidden layers: the situations in which performance improves with a second (or third, etc.) hidden layer are very few. One hidden layer is sufficient for the large majority of problems.

So what about the size of the hidden layer(s) - how many neurons? There are some empirically derived rules of thumb; of these, the most commonly relied upon is "the optimal size of the hidden layer is usually between the size of the input and size of the output layers". Jeff Heaton, the author of Introduction to Neural Networks in Java, offers a few more. In sum, for most problems, one could probably get decent performance (even without a second optimization step) by setting the hidden layer configuration using just two rules: (i) the number of hidden layers equals one; and (ii) the number of neurons in that layer is the mean of the neurons in the input and output layers.

Optimization of the Network Configuration
Pruning describes a set of techniques to trim network size (by nodes, not layers) to improve computational performance and sometimes resolution performance. The gist of these techniques is removing nodes from the network during training by identifying those nodes which, if removed from the network, would not noticeably affect network performance (i.e., resolution of the data). (Even without using a formal pruning technique, you can get a rough idea of which nodes are not important by looking at your weight matrix after training; look for weights very close to zero - it's the nodes on either end of those weights that are often removed during pruning.) Obviously, if you use a pruning algorithm during training, then begin with a network configuration that is more likely to have excess (i.e., 'prunable') nodes - in other words, when deciding on network architecture, err on the side of more neurons if you add a pruning step. Put another way, by applying a pruning algorithm to your network during training, you can approach an optimal network configuration; whether you can get there in a single "up-front" step (such as a genetic-algorithm-based procedure), I don't know, though I do know that for now this two-step optimization is more common.
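As a rough sketch of those starting rules (my own code, not from the answer): given the shape of the training data and the task, the function below proposes an initial architecture with one hidden layer sized as the mean of the input and output layers. The function name and defaults are illustrative only.

```python
def suggest_architecture(n_features, n_classes=None):
    """Starting-point architecture per the rules of thumb above.

    n_features: number of columns in the training data (input layer size).
    n_classes:  None for regression; for classification, the number of labels
                (output layer = 1 node, or one node per class if softmax is used).
    """
    n_in = n_features
    if n_classes is None or n_classes == 2:
        n_out = 1                          # regression, or binary classification without softmax
    else:
        n_out = n_classes                  # softmax output: one node per class
    n_hidden = round((n_in + n_out) / 2)   # rule (ii): mean of input and output sizes
    return [n_in, n_hidden, n_out]         # rule (i): exactly one hidden layer

# Example: 30 features, 3-class softmax output -> [30, 16, 3]
print(suggest_architecture(30, n_classes=3))
```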
How to choose the number of hidden layers and nodes in a feedforward neural network?
@doug's answer has worked for me. There's one additional rule of thumb that helps for supervised learning problems. You can usually prevent over-fitting if you keep your number of neurons below:
$$N_h = \frac{N_s}{\alpha (N_i + N_o)}$$
where $N_i$ is the number of input neurons, $N_o$ the number of output neurons, $N_s$ the number of samples in the training data set, and $\alpha$ an arbitrary scaling factor, usually 2-10. Others recommend setting $\alpha$ to a value between 5 and 10, but I find a value of 2 will often work without overfitting. You can think of $\alpha$ as the effective branching factor or number of nonzero weights for each neuron. Dropout layers will bring the "effective" branching factor way down from the actual mean branching factor of your network. As explained in this excellent NN Design text, you want to limit the number of free parameters in your model (its degree, or number of nonzero weights) to a small portion of the degrees of freedom in your data. The degrees of freedom in your data is the number of samples times the degrees of freedom (dimensions) in each sample, or $N_s (N_i + N_o)$ (assuming they're all independent). So $\alpha$ is a way to indicate how general you want your model to be, or how much you want to prevent overfitting. For an automated procedure you'd start with an $\alpha$ of 2 (twice as many degrees of freedom in your training data as in your model) and work your way up to 10 if the error (loss) on your training dataset is significantly smaller than on your test dataset.
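A tiny sketch of that rule (my code, with made-up example numbers): compute the suggested ceiling on hidden neurons for a given data set and a chosen $\alpha$.

```python
def max_hidden_neurons(n_samples, n_inputs, n_outputs, alpha=2):
    """Upper bound on hidden neurons: N_h = N_s / (alpha * (N_i + N_o))."""
    return n_samples / (alpha * (n_inputs + n_outputs))

# Example: 10,000 training samples, 30 inputs, 1 output, alpha = 2
print(max_hidden_neurons(10_000, 30, 1, alpha=2))   # ~161 neurons at most
```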
How to choose the number of hidden layers and nodes in a feedforward neural network?
From Introduction to Neural Networks for Java (second edition) by Jeff Heaton - preview freely available at Google Books and previously at the author's website:

The Number of Hidden Layers
There are really two decisions that must be made regarding the hidden layers: how many hidden layers to actually have in the neural network and how many neurons will be in each of these layers. We will first examine how to determine the number of hidden layers to use with the neural network. Problems that require two hidden layers are rarely encountered. However, neural networks with two hidden layers can represent functions with any kind of shape. There is currently no theoretical reason to use neural networks with any more than two hidden layers. In fact, for many practical problems, there is no reason to use any more than one hidden layer. Table 5.1 summarizes the capabilities of neural network architectures with various hidden layers.

Table 5.1: Determining the Number of Hidden Layers
Number of Hidden Layers | Result
0 | Only capable of representing linearly separable functions or decisions.
1 | Can approximate any function that contains a continuous mapping from one finite space to another.
2 | Can represent an arbitrary decision boundary to arbitrary accuracy with rational activation functions and can approximate any smooth mapping to any accuracy.

Deciding the number of hidden neuron layers is only a small part of the problem. You must also determine how many neurons will be in each of these hidden layers. This process is covered in the next section.

The Number of Neurons in the Hidden Layers
Deciding the number of neurons in the hidden layers is a very important part of deciding your overall neural network architecture. Though these layers do not directly interact with the external environment, they have a tremendous influence on the final output. Both the number of hidden layers and the number of neurons in each of these hidden layers must be carefully considered. Using too few neurons in the hidden layers will result in something called underfitting. Underfitting occurs when there are too few neurons in the hidden layers to adequately detect the signals in a complicated data set. Using too many neurons in the hidden layers can result in several problems. First, too many neurons in the hidden layers may result in overfitting. Overfitting occurs when the neural network has so much information processing capacity that the limited amount of information contained in the training set is not enough to train all of the neurons in the hidden layers. A second problem can occur even when the training data is sufficient: an inordinately large number of neurons in the hidden layers can increase the time it takes to train the network, to the point that it is impossible to adequately train the neural network. Obviously, some compromise must be reached between too many and too few neurons in the hidden layers. There are many rule-of-thumb methods for determining the correct number of neurons to use in the hidden layers, such as the following:
- The number of hidden neurons should be between the size of the input layer and the size of the output layer.
- The number of hidden neurons should be 2/3 the size of the input layer, plus the size of the output layer.
- The number of hidden neurons should be less than twice the size of the input layer.
These three rules provide a starting point for you to consider. Ultimately, the selection of an architecture for your neural network will come down to trial and error. But what exactly is meant by trial and error? You do not want to start throwing random numbers of layers and neurons at your network; to do so would be very time consuming. Chapter 8, "Pruning a Neural Network", will explore various ways to determine an optimal structure for a neural network.

I also like the following snippet from an answer I found at researchgate.net, which conveys a lot in just a few words:

Steffen B Petersen · Aalborg University: [...] In order to secure the ability of the network to generalize, the number of nodes has to be kept as low as possible. If you have a large excess of nodes, your network becomes a memory bank that can recall the training set to perfection, but does not perform well on samples that were not part of the training set.
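For concreteness, a small sketch (mine, not Heaton's) that turns the three quoted rules of thumb into numbers for a given input/output size; the function name and example values are my own.

```python
def hidden_size_rules_of_thumb(n_in, n_out):
    """Candidate hidden-layer sizes from the three rules quoted above."""
    return {
        "between input and output sizes": (min(n_in, n_out), max(n_in, n_out)),
        "2/3 of input size plus output size": round(2 * n_in / 3 + n_out),
        "less than twice the input size": 2 * n_in - 1,
    }

# Example: 30 inputs, 3 outputs
for rule, value in hidden_size_rules_of_thumb(30, 3).items():
    print(f"{rule}: {value}")
```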
How to choose the number of hidden layers and nodes in a feedforward neural network?
I am working on an empirical study of this at the moment (approaching a processor-century of simulations on our HPC facility!). My advice would be to use a "large" network and regularisation. If you use regularisation, then the network architecture becomes less important (provided it is large enough to represent the underlying function we want to capture), but you do need to tune the regularisation parameter properly. One of the problems with architecture selection is that it is a discrete, rather than continuous, control on the complexity of the model, and therefore can be a bit of a blunt instrument, especially when the ideal complexity is low. However, this is all subject to the "no free lunch" theorems: while regularisation is effective in most cases, there will always be cases where architecture selection works better, and the only way to find out whether that is true of the problem at hand is to try both approaches and cross-validate. If I were to build an automated neural network builder, I would use Radford Neal's Hybrid Monte Carlo (HMC) sampling-based Bayesian approach, and use a large network and integrate over the weights rather than optimise the weights of a single network. However, that is computationally expensive and a bit of a "black art", but the results Prof. Neal achieves suggest it is worth it!
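A minimal sketch of the "large network plus tuned regularisation" recipe (my own illustration using scikit-learn, not the answerer's setup): fix a generously sized hidden layer and cross-validate only the L2 penalty; the data set and search grid are arbitrary choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Deliberately "large" hidden layer; only the regularisation strength is tuned.
pipe = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(200,), max_iter=2000, random_state=0),
)
grid = GridSearchCV(
    pipe,
    param_grid={"mlpclassifier__alpha": np.logspace(-5, 1, 7)},  # L2 penalty
    cv=5,
)
grid.fit(X, y)
print("best alpha:", grid.best_params_, "cv accuracy:", round(grid.best_score_, 3))
```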
How to choose the number of hidden layers and nodes in a feedforward neural network?
Number of hidden nodes: There is no magic formula for selecting the optimum number of hidden neurons. However, some rules of thumb are available for calculating the number of hidden neurons. A rough approximation can be obtained by the geometric pyramid rule proposed by Masters (1993): for a three-layer network with $n$ input and $m$ output neurons, the hidden layer would have $\sqrt{n m}$ neurons.
References:
[1] Masters, Timothy. Practical Neural Network Recipes in C++. Morgan Kaufmann, 1993.
[2] http://www.iitbhu.ac.in/faculty/min/rajesh-rai/NMEICT-Slope/lecture/c14/l1.html
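As a worked example of the rule (my own numbers, not from the references): with $n = 30$ inputs and $m = 3$ outputs, the pyramid rule suggests roughly $\sqrt{30 \cdot 3} = \sqrt{90} \approx 9$ or 10 hidden neurons.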
How to choose the number of hidden layers and nodes in a feedforward neural network?
As far as I know, there is no way to automatically select the number of layers and the number of neurons in each layer. But there are networks that can build their topology automatically, like EANNs (Evolutionary Artificial Neural Networks), which use genetic algorithms to evolve the topology. There are several approaches; a more or less modern one that seemed to give good results was NEAT (NeuroEvolution of Augmenting Topologies).
How to choose the number of hidden layers and nodes in a feedforward neural network?
I've listed many ways of topology learning in my master's thesis, chapter 3. The big categories are:
- Growing approaches
- Pruning approaches
- Genetic approaches
- Reinforcement Learning
- Convolutional Neural Fabrics
35
How to choose the number of hidden layers and nodes in a feedforward neural network?
Automated ways of building neural networks using global hyper-parameter search: the input and output layers are of fixed size, while what can vary is the number of layers, the number of neurons in each layer, and the type of each layer. Multiple methods can be used for this discrete optimization problem, with the network's out-of-sample error as the cost function. 1) Do a grid/random search over the parameter space, to start from a slightly better position. 2) Use any of the many methods available for finding the optimal architecture. (Yes, it takes time.) 3) Do some regularization, rinse, repeat.
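As an illustration of point 1), here is a minimal random-search sketch (added here, not from the original answer); sample_architecture and build_and_evaluate are hypothetical stand-ins, and the latter would in practice train a network and return its out-of-sample error.

import random

random.seed(0)

def sample_architecture():
    # a random number of layers, each with a random width
    n_layers = random.randint(1, 4)
    return [random.choice([16, 32, 64, 128]) for _ in range(n_layers)]

def build_and_evaluate(architecture):
    # placeholder: in practice, build and train the network, then return its validation error
    return random.random()

best_arch, best_err = None, float("inf")
for _ in range(20):                       # a fixed budget of 20 random trials
    arch = sample_architecture()
    err = build_and_evaluate(arch)
    if err < best_err:
        best_arch, best_err = arch, err

print(best_arch, best_err)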
36
How to choose the number of hidden layers and nodes in a feedforward neural network?
Sorry, I can't post a comment yet, so please bear with me. Anyway, I bumped into this discussion thread, which reminded me of a paper I had seen very recently. I think it might be of interest to folks participating here: AdaNet: Adaptive Structural Learning of Artificial Neural Networks. Corinna Cortes, Xavier Gonzalvo, Vitaly Kuznetsov, Mehryar Mohri, Scott Yang; Proceedings of the 34th International Conference on Machine Learning, PMLR 70:874-883, 2017. Abstract: We present a new framework for analyzing and learning artificial neural networks. Our approach simultaneously and adaptively learns both the structure of the network as well as its weights. The methodology is based upon and accompanied by strong data-dependent theoretical learning guarantees, so that the final network architecture provably adapts to the complexity of any given problem.
37
How to choose the number of hidden layers and nodes in a feedforward neural network?
I'd like to suggest a less common but super effective method. Basically, you can leverage a set of algorithms called "genetic algorithms" that try a small subset of the potential options (a random number of layers and nodes per layer). It then treats this population of options as "parents" that create children by combining/mutating one or more of the parents, much like organisms evolve. The best children and some random, acceptable children are kept in each generation, and over generations the fittest survive. For ~100 or fewer parameters (such as the choice of the number of layers, types of layers, and the number of neurons per layer), this method is super effective. Use it by creating a number of potential network architectures for each generation and training them partially until the learning curve can be estimated (typically 100-10k mini-batches, depending on many parameters). After a few generations, you may want to consider the point at which the training and validation error rates start to differ significantly (overfitting) as your objective function for choosing children. It may be a good idea to use a very small subset of your data (10-20%) until you choose a final model, to reach a conclusion faster. Also, use a single seed for your network initialization to properly compare the results. 10-50 generations should yield great results for a decent-sized network.
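A minimal sketch of such a genetic search (added for illustration, not the answer author's code); fitness is a hypothetical stand-in that in practice would partially train the candidate network and return its validation error (lower is better).

import random

random.seed(0)

def fitness(arch):
    # placeholder objective; in practice, partially train the network and return validation error
    return abs(len(arch) - 2) + sum(abs(w - 64) for w in arch) / 100.0

def mutate(arch):
    arch = arch[:]
    if random.random() < 0.5 and len(arch) > 1:
        arch.pop(random.randrange(len(arch)))            # drop a layer
    else:
        arch.append(random.choice([16, 32, 64, 128]))    # add a layer
    return arch

def crossover(a, b):
    cut = random.randint(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

# initial population of random architectures (lists of layer widths)
population = [[random.choice([16, 32, 64, 128]) for _ in range(random.randint(1, 4))]
              for _ in range(10)]

for generation in range(20):
    population.sort(key=fitness)
    parents = population[:4]                             # keep the fittest
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(6)]
    population = parents + children

print(population[0], fitness(population[0]))             # best architecture found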
38
How to choose the number of hidden layers and nodes in a feedforward neural network?
Number of Hidden Layers and what they can achieve: 0 - Only capable of representing linearly separable functions or decisions. 1 - Can approximate any function that contains a continuous mapping from one finite space to another. 2 - Can represent an arbitrary decision boundary to arbitrary accuracy with rational activation functions and can approximate any smooth mapping to any accuracy. More than 2 - Additional layers can learn complex representations (a sort of automatic feature engineering) for later layers.
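A small numerical illustration of the 0-vs-1 hidden layer rows above (added here, not part of the original answer), using step-function units and the classic XOR example: XOR is not linearly separable, so no single threshold unit represents it, while one hidden layer with hand-picked weights does.

import itertools
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_xor = np.array([0, 1, 1, 0])

step = lambda z: (z > 0).astype(int)

# 0 hidden layers: brute-force a coarse grid of weights; none reproduces XOR
grid = np.linspace(-2, 2, 9)
solvable = any(np.array_equal(step(X @ np.array([w1, w2]) + b), y_xor)
               for w1, w2, b in itertools.product(grid, repeat=3))
print("XOR solvable without a hidden layer (grid search):", solvable)   # False

# 1 hidden layer: an OR unit and an AND unit feeding an output unit computes XOR exactly
h = step(X @ np.array([[1, 1], [1, 1]]) + np.array([-0.5, -1.5]))        # hidden units: [OR, AND]
y_hat = step(h @ np.array([1, -2]) - 0.5)                                 # output: OR and not AND
print("1 hidden layer reproduces XOR:", np.array_equal(y_hat, y_xor))    # True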
39
What is the difference between "likelihood" and "probability"?
The answer depends on whether you are dealing with discrete or continuous random variables. So, I will split my answer accordingly. I will assume that you want some technical details and not necessarily an explanation in plain English. Discrete Random Variables Suppose that you have a stochastic process that takes discrete values (e.g., outcomes of tossing a coin 10 times, number of customers who arrive at a store in 10 minutes, etc.). In such cases, we can calculate the probability of observing a particular set of outcomes by making suitable assumptions about the underlying stochastic process (e.g., that the probability of the coin landing heads is $p$ and that coin tosses are independent). Denote the observed outcomes by $O$ and the set of parameters that describe the stochastic process by $\theta$. Thus, when we speak of probability we want to calculate $P(O|\theta)$. In other words, given specific values for $\theta$, $P(O|\theta)$ is the probability that we would observe the outcomes represented by $O$. However, when we model a real-life stochastic process, we often do not know $\theta$. We simply observe $O$ and the goal then is to arrive at an estimate for $\theta$ that would be a plausible choice given the observed outcomes $O$. We know that given a value of $\theta$ the probability of observing $O$ is $P(O|\theta)$. Thus, a 'natural' estimation process is to choose that value of $\theta$ that would maximize the probability that we would actually observe $O$. In other words, we find the parameter values $\theta$ that maximize the following function: $L(\theta|O) = P(O|\theta)$. $L(\theta|O)$ is called the likelihood function. Notice that by definition the likelihood function is conditioned on the observed $O$ and that it is a function of the unknown parameters $\theta$. Continuous Random Variables In the continuous case the situation is similar with one important difference. We can no longer talk about the probability that we observed $O$ given $\theta$ because in the continuous case $P(O|\theta) = 0$. Without getting into technicalities, the basic idea is as follows: Denote the probability density function (pdf) associated with the outcomes $O$ as $f(O|\theta)$. Thus, in the continuous case we estimate $\theta$ given observed outcomes $O$ by maximizing the following function: $L(\theta|O) = f(O|\theta)$. In this situation, we cannot technically assert that we are finding the parameter value that maximizes the probability of observing $O$; rather, we maximize the pdf associated with the observed outcomes $O$.
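A short numerical illustration of the discrete case (added here, not part of the original answer), assuming 7 heads observed in 10 tosses: the same binomial pmf is read once as a probability with $\theta$ fixed and once as a likelihood with $O$ fixed, and the likelihood is maximized near $\theta = 0.7$.

import numpy as np
from scipy.stats import binom

n_tosses, n_heads = 10, 7

# probability: theta fixed, outcomes vary
print(binom.pmf(n_heads, n_tosses, 0.5))        # P(7 heads | theta = 0.5)

# likelihood: outcomes fixed, theta varies
thetas = np.linspace(0.01, 0.99, 99)
likelihood = binom.pmf(n_heads, n_tosses, thetas)
print(thetas[np.argmax(likelihood)])            # maximum likelihood estimate, approximately 0.7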
40
What is the difference between "likelihood" and "probability"?
This is the kind of question that just about everybody is going to answer and I would expect all the answers to be good. But you're a mathematician, Douglas, so let me offer a mathematical reply. A statistical model has to connect two distinct conceptual entities: data, which are elements $x$ of some set (such as a vector space), and a possible quantitative model of the data behavior. Models are usually represented by points $\theta$ on a finite-dimensional manifold, a manifold with boundary, or a function space (the latter is termed a "non-parametric" problem). The data $x$ are connected to the possible models $\theta$ by means of a function $\Lambda(x, \theta)$. For any given $\theta$, $\Lambda(x, \theta)$ is intended to be the probability (or probability density) of $x$. For any given $x$, on the other hand, $\Lambda(x, \theta)$ can be viewed as a function of $\theta$ and is usually assumed to have certain nice properties, such as being twice continuously differentiable. The intention to view $\Lambda$ in this way and to invoke these assumptions is announced by calling $\Lambda$ the "likelihood." It's quite like the distinction between variables and parameters in a differential equation: sometimes we want to study the solution (i.e., we focus on the variables as the argument) and sometimes we want to study how the solution varies with the parameters. The main distinction is that in statistics we rarely need to study the simultaneous variation of both sets of arguments; there is no statistical object that naturally corresponds to changing both the data $x$ and the model parameters $\theta$. That's why you hear more about this dichotomy than you would in analogous mathematical settings.
41
What is the difference between "likelihood" and "probability"?
I'll try and minimise the mathematics in my explanation as there are some good mathematical explanations already. As Robin Girard comments, the difference between probability and likelihood is closely related to the difference between probability and statistics. In a sense, probability and statistics concern themselves with problems that are opposite or inverse to one another. Consider a coin toss. (My answer will be similar to Example 1 on Wikipedia.) If we know the coin is fair ($p=0.5$), a typical probability question is: What is the probability of getting two heads in a row? The answer is $P(HH) = P(H)\times P(H) = 0.5\times0.5 = 0.25$. A typical statistical question is: Is the coin fair? To answer this we need to ask: To what extent does our sample support our hypothesis that $P(H) = P(T) = 0.5$? The first point to note is that the direction of the question has reversed. In probability we start with an assumed parameter ($P(head)$) and estimate the probability of a given sample (two heads in a row). In statistics we start with the observation (two heads in a row) and make an INFERENCE about our parameter ($p = P(H) = 1- P(T) = 1 - q$). Example 1 on Wikipedia shows us that the maximum likelihood estimate of $P(H)$ after 2 heads in a row is $p_{MLE} = 1$. But the data in no way rule out the true parameter value $p(H) = 0.5$ (let's not concern ourselves with the details at the moment). Indeed, only very small values of $p(H)$, and particularly $p(H)=0$, can be reasonably eliminated after $n = 2$ (two throws of the coin). After the third throw comes up tails we can now eliminate the possibility that $P(H) = 1.0$ (i.e. it is not a two-headed coin), but most values in between can be reasonably supported by the data. (An exact binomial 95% confidence interval for $p(H)$ is 0.094 to 0.992.) After 100 coin tosses and (say) 70 heads, we now have a reasonable basis for the suspicion that the coin is not in fact fair. An exact 95% CI on $p(H)$ is now 0.600 to 0.787, and the probability of observing a result as extreme as 70 or more heads (or tails) from 100 tosses, given $p(H) = 0.5$, is 0.0000785. Although I have not explicitly used likelihood calculations, this example captures the concept of likelihood: Likelihood is a measure of the extent to which a sample provides support for particular values of a parameter in a parametric model.
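The numbers quoted above can be checked with a few lines of Python (this snippet is an addition, assuming scipy is available); the interval is the exact Clopper-Pearson one, expressed through quantiles of the beta distribution.

from scipy.stats import beta, binom

k, n = 70, 100
lower = beta.ppf(0.025, k, n - k + 1)   # Clopper-Pearson lower bound
upper = beta.ppf(0.975, k + 1, n - k)   # Clopper-Pearson upper bound
print(lower, upper)                      # approximately 0.600 and 0.788

p_two_sided = 2 * binom.sf(k - 1, n, 0.5)   # 2 * P(X >= 70) under p(H) = 0.5
print(p_two_sided)                           # approximately 7.85e-05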
42
What is the difference between "likelihood" and "probability"?
Given all the fine technical answers above, let me take it back to language: Probability quantifies anticipation (of outcome), likelihood quantifies trust (in model). Suppose somebody challenges us to a 'profitable gambling game'. Then, probabilities will serve us to compute things like the expected profile of our gains and losses (mean, mode, median, variance, information ratio, value at risk, gambler's ruin, and so on). In contrast, likelihood will serve us to quantify whether we trust those probabilities in the first place; or whether we 'smell a rat'. Incidentally -- since somebody above mentioned the religions of statistics -- I believe the likelihood ratio to be an integral part of the Bayesian world as well as of the frequentist one: In the Bayesian world, Bayes' formula just combines the prior with the likelihood to produce the posterior.
43
What is the difference between "likelihood" and "probability"?
I will give you the perspective from the view of Likelihood Theory, which originated with Fisher -- and is the basis for the statistical definition in the cited Wikipedia article. Suppose you have random variates $X$ which arise from a parameterized distribution $F(X; \theta)$, where $\theta$ is the parameter characterizing $F$. Then the probability of $X = x$ would be $P(X = x) = F(x; \theta)$, with known $\theta$. More often, you have data $X$ and $\theta$ is unknown. Given the assumed model $F$, the likelihood is defined as the probability of the observed data as a function of $\theta$: $L(\theta) = P(X = x; \theta)$. Note that $X$ is known, but $\theta$ is unknown; in fact the motivation for defining the likelihood is to determine the parameter of the distribution. Although it seems like we have simply re-written the probability function, a key consequence of this is that the likelihood function does not obey the laws of probability (for example, it is not bound to the $[0, 1]$ interval). However, the likelihood function is proportional to the probability of the observed data. This concept of likelihood actually leads to a different school of thought, "likelihoodists" (distinct from frequentist and Bayesian), and you can google to search for all the various historical debates. The cornerstone is the Likelihood Principle, which essentially says that we can perform inference directly from the likelihood function (neither Bayesians nor frequentists accept this since it is not probability-based inference). These days a lot of what is taught as "frequentist" in schools is actually an amalgam of frequentist and likelihood thinking. For deeper insight, a nice start and historical reference is Edwards' Likelihood. For a modern take, I'd recommend Richard Royall's wonderful monograph, Statistical Evidence: A Likelihood Paradigm.
44
What is the difference between "likelihood" and "probability"?
If I have a fair coin (parameter value) then the probability that it will come up heads is 0.5. If I flip a coin 100 times and it comes up heads 52 times then it has a high likelihood of being fair (the numeric value of likelihood potentially taking a number of forms).
45
What is the difference between "likelihood" and "probability"?
Suppose you have a coin with probability $p$ to land heads and $(1-p)$ to land tails. Let $x=1$ indicate heads and $x=0$ indicate tails. Define $f$ as follows: $$f(x,p)=p^x (1-p)^{1-x}$$ $f(x,2/3)$ is the probability of $x$ given $p=2/3$, while $f(1,p)$ is the likelihood of $p$ given $x=1$. Basically, "likelihood vs. probability" tells you which argument of the density is considered to be the variable.
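A tiny numerical rendering of this (added for illustration): the same function $f$, read once as a probability in $x$ with $p$ fixed and once as a likelihood in $p$ with $x$ fixed.

import numpy as np

f = lambda x, p: p**x * (1 - p)**(1 - x)

print(f(1, 2/3), f(0, 2/3))     # probability of heads / tails given p = 2/3
p_grid = np.linspace(0.1, 0.9, 9)
print(f(1, p_grid))             # likelihood of each p given x = 1 (heads)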
46
What is the difference between "likelihood" and "probability"?
$P(x|\theta)$ can be seen from two points of view: As a function of $x$, treating $\theta$ as known/observed. If $\theta$ is not a random variable, then $P(x|\theta)$ is called the (parameterized) probability of $x$ given the model parameters $\theta$, which is sometimes also written as $P(x;\theta)$ or $P_{\theta}(x)$. If $\theta$ is a random variable, as in Bayesian statistics, then $P(x|\theta)$ is a conditional probability, defined as ${P(x\cap\theta)}/{P(\theta)}$. As a function of $\theta$, treating $x$ as observed. For example, when you try to find a certain assignment $\hat\theta$ for $\theta$ that maximizes $P(x|\theta)$, then $P(x|\hat\theta)$ is called the maximum likelihood of $\theta$ given the data $x$, sometimes written as $\mathcal L(\hat\theta|x)$. So, the term likelihood is just shorthand to refer to the probability $P(x|\theta)$ for some data $x$ that results from assigning different values to $\theta$ (e.g. as one traverses the search space of $\theta$ for a good solution). So, it is often used as an objective function, but also as a performance measure to compare two models as in Bayesian model comparison. Often, this expression is still a function of both its arguments, so it is rather a matter of emphasis.
47
What is the difference between "likelihood" and "probability"?
Do you know the pilot of the TV series "Numb3rs", in which the FBI tries to locate the home base of a serial criminal who seems to choose his victims at random? The FBI's mathematical advisor and brother of the agent in charge solves the problem with a maximum likelihood approach. First, he assumes some "gugelhupf-shaped" probability $p(x|\theta)$ that the crimes take place at locations $x$ if the criminal lives at location $\theta$. (The gugelhupf assumption is that the criminal will neither commit a crime in his immediate neighbourhood nor travel extremely far to choose his next random victim.) This model describes the probabilities for different $x$ given a fixed $\theta$. In other words, $p_{\theta}(x)=p(x|\theta)$ is a function of $x$ with a fixed parameter $\theta$. Of course, the FBI doesn't know the criminal's domicile, nor does it want to predict the next crime scene. (They hope to find the criminal first!) It's the other way round: the FBI already knows the crime scenes $x$ and wants to locate the criminal's domicile $\theta$. So the FBI agent's brilliant brother has to try and find the most likely $\theta$ among all values possible, i.e. the $\theta$ which maximises $p(x|\theta)$ for the actually observed $x$. Therefore, he now considers $l_x(\theta)=p(x|\theta)$ as a function of $\theta$ with a fixed parameter $x$. Figuratively speaking, he shoves his gugelhupf around on the map until it optimally "fits" the known crime scenes $x$. The FBI then goes knocking on the door in the center $\hat{\theta}$ of the gugelhupf. To stress this change of perspective, $l_x(\theta)$ is called the likelihood (function) of $\theta$, whereas $p_{\theta}(x)$ was the probability (function) of $x$. Both are actually the same function $p(x|\theta)$, but seen from different perspectives and with $x$ and $\theta$ switching their roles as variable and parameter, respectively.
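For illustration only (not part of the original answer), here is a rough sketch of that grid-search MLE; the ring-shaped "gugelhupf" density and the crime-scene coordinates are invented for the example.

import numpy as np

crime_scenes = np.array([[2.0, 1.0], [-1.5, 2.2], [0.5, -2.4], [2.3, -0.8]])

def log_likelihood(theta, scenes, r0=2.0, s=0.5):
    # assumed model: crimes happen at a preferred distance r0 from the home location theta
    # (log-density up to a constant that does not depend on theta)
    r = np.linalg.norm(scenes - theta, axis=1)
    return np.sum(-(r - r0) ** 2 / (2 * s ** 2))

# slide the "gugelhupf" over a grid of candidate home locations and keep the best fit
grid = np.linspace(-3, 3, 121)
candidates = np.array([[x, y] for x in grid for y in grid])
scores = np.array([log_likelihood(t, crime_scenes) for t in candidates])
theta_hat = candidates[np.argmax(scores)]
print(theta_hat)    # most likely home location under the assumed model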
48
What is the difference between "likelihood" and "probability"?
As far as I'm concerned, the most important distinction is that likelihood is not a probability (of $\theta$). In an estimation problem, $X$ is given and the likelihood $P(X|\theta)$ describes a distribution of $X$ rather than of $\theta$. That is, $\int P(X|\theta) d\theta$ is meaningless, since the likelihood is not a pdf of $\theta$, though it does characterize $\theta$ to some extent.
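A quick numerical check of this point (added here), assuming a binomial model with $k = 7$ heads in $n = 10$ tosses: integrating the likelihood over $\theta$ gives $1/(n+1)$, not $1$, so the likelihood is not a density in $\theta$.

import numpy as np
from scipy.stats import binom

n, k = 10, 7
thetas = np.linspace(0, 1, 10001)
likelihood = binom.pmf(k, n, thetas)
print(likelihood.mean())   # grid average approximates the integral over [0, 1]: about 1/11 = 0.0909, not 1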
49
What is the difference between "likelihood" and "probability"?
If we put the conditional probability interpretation aside, you can think of it in this way: In probability you usually want to find the probability of a possible event based on a model/parameter/probability distribution, etc. In likelihood you have observed some outcome, so you want to find/create/estimate the most likely source/model/parameter/probability distribution from which this outcome has arisen.
50
What is the difference between "likelihood" and "probability"?
Likelihood is bound to the statistical model that you have chosen. Let's take a discrete example, and assume you have a single observation. Hypothetically, you could always choose a statistical model that always produces one outcome, the observation that you have, with probability $1$; hence, the likelihood will also be $1$. This would be a bad model, but it would fit your data perfectly. So likelihood, in essence, is a subjective value because it depends on how you want to model your data. PS: The above is the case when you have a single observation. A similar example can be provided for the case when you have multiple observations, i.e. data. Again, hypothetically, you can restrict your model in such a way that it produces only the observations that you have. For example, say your observations are the following coin flips: TTHH, TTHT, TTTH, TTTT. Then you can restrict the model so that it always produces TT as the first two flips. This will be your assumption, hence it is subjective, and the likelihood you get will be higher than if you had not imposed that restriction.
51
Relationship between SVD and PCA. How to use SVD to perform PCA?
Let the real-valued data matrix $\mathbf X$ be of $n \times p$ size, where $n$ is the number of samples and $p$ is the number of variables. Let us assume that it is centered, i.e. column means have been subtracted and are now equal to zero. Then the $p \times p$ covariance matrix $\mathbf C$ is given by $\mathbf C = \mathbf X^\top \mathbf X/(n-1)$. It is a symmetric matrix and so it can be diagonalized: $$\mathbf C = \mathbf V \mathbf L \mathbf V^\top,$$ where $\mathbf V$ is a matrix of eigenvectors (each column is an eigenvector) and $\mathbf L$ is a diagonal matrix with eigenvalues $\lambda_i$ in decreasing order on the diagonal. The eigenvectors are called principal axes or principal directions of the data. Projections of the data on the principal axes are called principal components, also known as PC scores; these can be seen as new, transformed variables. The $j$-th principal component is given by the $j$-th column of $\mathbf {XV}$. The coordinates of the $i$-th data point in the new PC space are given by the $i$-th row of $\mathbf{XV}$. If we now perform singular value decomposition of $\mathbf X$, we obtain a decomposition $$\mathbf X = \mathbf U \mathbf S \mathbf V^\top,$$ where $\mathbf U$ is a unitary matrix (with columns called left singular vectors), $\mathbf S$ is the diagonal matrix of singular values $s_i$, and the columns of $\mathbf V$ are called right singular vectors. From here one can easily see that $$\mathbf C = \mathbf V \mathbf S \mathbf U^\top \mathbf U \mathbf S \mathbf V^\top /(n-1) = \mathbf V \frac{\mathbf S^2}{n-1}\mathbf V^\top,$$ meaning that the right singular vectors $\mathbf V$ are principal directions (eigenvectors) and that the singular values are related to the eigenvalues of the covariance matrix via $\lambda_i = s_i^2/(n-1)$. Principal components are given by $\mathbf X \mathbf V = \mathbf U \mathbf S \mathbf V^\top \mathbf V = \mathbf U \mathbf S$. To summarize: If $\mathbf X = \mathbf U \mathbf S \mathbf V^\top$, then the columns of $\mathbf V$ are principal directions/axes (eigenvectors). Columns of $\mathbf {US}$ are principal components ("scores"). Singular values are related to the eigenvalues of the covariance matrix via $\lambda_i = s_i^2/(n-1)$. Eigenvalues $\lambda_i$ show the variances of the respective PCs. Standardized scores are given by columns of $\sqrt{n-1}\mathbf U$ and loadings are given by columns of $\mathbf V \mathbf S/\sqrt{n-1}$. See e.g. here and here for why "loadings" should not be confused with principal directions. The above is correct only if $\mathbf X$ is centered. Only then is the covariance matrix equal to $\mathbf X^\top \mathbf X/(n-1)$. The above is correct only for $\mathbf X$ having samples in rows and variables in columns. If variables are in rows and samples in columns, then $\mathbf U$ and $\mathbf V$ exchange interpretations. If one wants to perform PCA on a correlation matrix (instead of a covariance matrix), then the columns of $\mathbf X$ should not only be centered, but standardized as well, i.e. divided by their standard deviations. To reduce the dimensionality of the data from $p$ to $k<p$, select the first $k$ columns of $\mathbf U$ and the $k\times k$ upper-left part of $\mathbf S$. Their product $\mathbf U_k \mathbf S_k$ is the required $n \times k$ matrix containing the first $k$ PCs.
Further multiplying the first $k$ PCs by the corresponding principal axes $\mathbf V_k^\top$ yields the matrix $\mathbf X_k = \mathbf U_k^\vphantom \top \mathbf S_k^\vphantom \top \mathbf V_k^\top$, which has the original $n \times p$ size but is of lower rank (of rank $k$). This matrix $\mathbf X_k$ provides a reconstruction of the original data from the first $k$ PCs. It has the lowest possible reconstruction error, see my answer here. Strictly speaking, $\mathbf U$ is of $n\times n$ size and $\mathbf V$ is of $p \times p$ size. However, if $n>p$ then the last $n-p$ columns of $\mathbf U$ are arbitrary (and the corresponding rows of $\mathbf S$ are constant zero); one should therefore use an economy size (or thin) SVD that returns $\mathbf U$ of $n\times p$ size, dropping the useless columns. For large $n\gg p$ the matrix $\mathbf U$ would otherwise be unnecessarily huge. The same applies to the opposite situation of $n\ll p$. Further links: What is the intuitive relationship between SVD and PCA -- a very popular and very similar thread on math.SE. Why PCA of data by means of SVD of the data? -- a discussion of the benefits of performing PCA via SVD [short answer: numerical stability]. PCA and Correspondence analysis in their relation to Biplot -- PCA in the context of some congeneric techniques, all based on SVD. Is there any advantage of SVD over PCA? -- a question asking if there are any benefits in using SVD instead of PCA [short answer: ill-posed question]. Making sense of principal component analysis, eigenvectors & eigenvalues -- my answer giving a non-technical explanation of PCA.
Relationship between SVD and PCA. How to use SVD to perform PCA?
Let the real values data matrix $\mathbf X$ be of $n \times p$ size, where $n$ is the number of samples and $p$ is the number of variables. Let us assume that it is centered, i.e. column means have be
Relationship between SVD and PCA. How to use SVD to perform PCA?

Let the real-valued data matrix $\mathbf X$ be of $n \times p$ size, where $n$ is the number of samples and $p$ is the number of variables. Let us assume that it is centered, i.e. column means have been subtracted and are now equal to zero.

Then the $p \times p$ covariance matrix $\mathbf C$ is given by $\mathbf C = \mathbf X^\top \mathbf X/(n-1)$. It is a symmetric matrix and so it can be diagonalized: $$\mathbf C = \mathbf V \mathbf L \mathbf V^\top,$$ where $\mathbf V$ is a matrix of eigenvectors (each column is an eigenvector) and $\mathbf L$ is a diagonal matrix with eigenvalues $\lambda_i$ in decreasing order on the diagonal. The eigenvectors are called principal axes or principal directions of the data. Projections of the data onto the principal axes are called principal components, also known as PC scores; these can be seen as new, transformed variables. The $j$-th principal component is given by the $j$-th column of $\mathbf{XV}$. The coordinates of the $i$-th data point in the new PC space are given by the $i$-th row of $\mathbf{XV}$.

If we now perform the singular value decomposition of $\mathbf X$, we obtain a decomposition $$\mathbf X = \mathbf U \mathbf S \mathbf V^\top,$$ where $\mathbf U$ is a unitary matrix (with columns called left singular vectors), $\mathbf S$ is the diagonal matrix of singular values $s_i$, and the columns of $\mathbf V$ are called right singular vectors. From here one can easily see that $$\mathbf C = \mathbf V \mathbf S \mathbf U^\top \mathbf U \mathbf S \mathbf V^\top /(n-1) = \mathbf V \frac{\mathbf S^2}{n-1}\mathbf V^\top,$$ meaning that the right singular vectors $\mathbf V$ are principal directions (eigenvectors) and that the singular values are related to the eigenvalues of the covariance matrix via $\lambda_i = s_i^2/(n-1)$. Principal components are given by $\mathbf X \mathbf V = \mathbf U \mathbf S \mathbf V^\top \mathbf V = \mathbf U \mathbf S$.

To summarize:

1. If $\mathbf X = \mathbf U \mathbf S \mathbf V^\top$, then the columns of $\mathbf V$ are principal directions/axes (eigenvectors).
2. Columns of $\mathbf {US}$ are principal components ("scores").
3. Singular values are related to the eigenvalues of the covariance matrix via $\lambda_i = s_i^2/(n-1)$. Eigenvalues $\lambda_i$ show variances of the respective PCs.
4. Standardized scores are given by columns of $\sqrt{n-1}\,\mathbf U$ and loadings are given by columns of $\mathbf V \mathbf S/\sqrt{n-1}$. See e.g. here and here for why "loadings" should not be confused with principal directions.
5. The above is correct only if $\mathbf X$ is centered. Only then is the covariance matrix equal to $\mathbf X^\top \mathbf X/(n-1)$.
6. The above is correct only for $\mathbf X$ having samples in rows and variables in columns. If variables are in rows and samples in columns, then $\mathbf U$ and $\mathbf V$ exchange interpretations.
7. If one wants to perform PCA on a correlation matrix (instead of a covariance matrix), then the columns of $\mathbf X$ should not only be centered, but standardized as well, i.e. divided by their standard deviations.
8. To reduce the dimensionality of the data from $p$ to $k<p$, select the first $k$ columns of $\mathbf U$ and the $k\times k$ upper-left part of $\mathbf S$. Their product $\mathbf U_k \mathbf S_k$ is the required $n \times k$ matrix containing the first $k$ PCs.
9. Further multiplying the first $k$ PCs by the corresponding principal axes $\mathbf V_k^\top$ yields the $\mathbf X_k = \mathbf U_k^\vphantom \top \mathbf S_k^\vphantom \top \mathbf V_k^\top$ matrix that has the original $n \times p$ size but is of lower rank (of rank $k$). This matrix $\mathbf X_k$ provides a reconstruction of the original data from the first $k$ PCs. It has the lowest possible reconstruction error, see my answer here.
10. Strictly speaking, $\mathbf U$ is of $n\times n$ size and $\mathbf V$ is of $p \times p$ size. However, if $n>p$ then the last $n-p$ columns of $\mathbf U$ are arbitrary (and the corresponding rows of $\mathbf S$ are constant zero); one should therefore use an economy size (or thin) SVD that returns $\mathbf U$ of $n\times p$ size, dropping the useless columns. For large $n\gg p$ the matrix $\mathbf U$ would otherwise be unnecessarily huge. The same applies to the opposite situation of $n\ll p$.

Further links:
- What is the intuitive relationship between SVD and PCA -- a very popular and very similar thread on math.SE.
- Why PCA of data by means of SVD of the data? -- a discussion of the benefits of performing PCA via SVD [short answer: numerical stability].
- PCA and Correspondence analysis in their relation to Biplot -- PCA in the context of some congeneric techniques, all based on SVD.
- Is there any advantage of SVD over PCA? -- a question asking if there are any benefits in using SVD instead of PCA [short answer: ill-posed question].
- Making sense of principal component analysis, eigenvectors & eigenvalues -- my answer giving a non-technical explanation of PCA.
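As a small illustration of points 8 and 9 in the summary above, here is a hedged NumPy sketch (my addition, using random data rather than anything from the original answer) of dimensionality reduction and rank-$k$ reconstruction via the thin SVD:

import numpy as np

rng = np.random.default_rng(0)
n, p, k = 100, 10, 3
X = rng.normal(size=(n, p))
X = X - X.mean(axis=0)                  # center the columns

U, s, Vt = np.linalg.svd(X, full_matrices=False)   # thin SVD: U is n x p

PCs_k = U[:, :k] * s[:k]                # first k principal components, U_k S_k (n x k)
X_k = PCs_k @ Vt[:k, :]                 # rank-k reconstruction U_k S_k V_k^T (n x p)

# X_k is the best rank-k approximation of X in the least-squares sense
print(np.linalg.matrix_rank(X_k))       # k
print(np.sum((X - X_k) ** 2))           # total squared reconstruction error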
52
Relationship between SVD and PCA. How to use SVD to perform PCA?
I wrote a Python & Numpy snippet that accompanies @amoeba's answer and I leave it here in case it is useful for someone. The comments are mostly taken from @amoeba's answer.

import numpy as np
from numpy import linalg as la

np.random.seed(42)

def flip_signs(A, B):
    """
    utility function for resolving the sign ambiguity in SVD
    http://stats.stackexchange.com/q/34396/115202
    """
    signs = np.sign(A) * np.sign(B)
    return A, B * signs

# Let the data matrix X be of n x p size,
# where n is the number of samples and p is the number of variables
n, p = 5, 3
X = np.random.rand(n, p)
# Let us assume that it is centered
X -= np.mean(X, axis=0)

# the p x p covariance matrix
C = np.cov(X, rowvar=False)
print("C = \n", C)

# C is a symmetric matrix and so it can be diagonalized:
l, principal_axes = la.eig(C)
# sort results wrt. eigenvalues
idx = l.argsort()[::-1]
l, principal_axes = l[idx], principal_axes[:, idx]
# the eigenvalues in decreasing order
print("l = \n", l)
# a matrix of eigenvectors (each column is an eigenvector)
print("V = \n", principal_axes)

# projections of X on the principal axes are called principal components
principal_components = X.dot(principal_axes)
print("Y = \n", principal_components)

# we now perform singular value decomposition of X
# "economy size" (or "thin") SVD
U, s, Vt = la.svd(X, full_matrices=False)
V = Vt.T
S = np.diag(s)

# 1) then columns of V are principal directions/axes.
assert np.allclose(*flip_signs(V, principal_axes))

# 2) columns of US are principal components
assert np.allclose(*flip_signs(U.dot(S), principal_components))

# 3) singular values are related to the eigenvalues of covariance matrix
assert np.allclose((s ** 2) / (n - 1), l)

# 8) dimensionality reduction
k = 2
PC_k = principal_components[:, 0:k]
US_k = U[:, 0:k].dot(S[0:k, 0:k])
assert np.allclose(*flip_signs(PC_k, US_k))

# 10) we used "economy size" (or "thin") SVD
assert U.shape == (n, p)
assert S.shape == (p, p)
assert V.shape == (p, p)
53
Relationship between SVD and PCA. How to use SVD to perform PCA?
Let me start with PCA. Suppose that you have $n$ data points consisting of $d$ numbers (or dimensions) each. If you center this data (subtract the mean data point $\mu$ from each data vector $x_i$) you can stack the data to make a matrix $$ X = \left( \begin{array}{ccccc} && x_1^T - \mu^T && \\ \hline && x_2^T - \mu^T && \\ \hline && \vdots && \\ \hline && x_n^T - \mu^T && \end{array} \right)\,. $$ The covariance matrix $$ S = \frac{1}{n-1} \sum_{i=1}^n (x_i-\mu)(x_i-\mu)^T = \frac{1}{n-1} X^T X $$ measures the degree to which the different coordinates in which your data are given vary together. So, it's maybe not surprising that PCA -- which is designed to capture the variation of your data -- can be given in terms of the covariance matrix. In particular, the eigenvalue decomposition of $S$ turns out to be $$ S = V \Lambda V^T = \sum_{i = 1}^r \lambda_i v_i v_i^T \,, $$ where $v_i$ is the $i$-th Principal Component, or PC, and $\lambda_i$ is the $i$-th eigenvalue of $S$ and is also equal to the variance of the data along the $i$-th PC. This decomposition comes from a general theorem in linear algebra, and some work does have to be done to motivate the relation to PCA. SVD is a general way to understand a matrix in terms of its column-space and row-space. (It's a way to rewrite any matrix in terms of other matrices with an intuitive relation to the row and column space.) For example, for the matrix $A = \left( \begin{array}{cc}1&2\\0&1\end{array} \right)$ we can find directions $u_i$ and $v_i$ in the domain and range so that $A v_i = \sigma_i u_i$. You can find these by considering how $A$ as a linear transformation morphs a unit sphere $\mathbb S$ in its domain to an ellipse: the principal semi-axes of the ellipse align with the $u_i$ and the $v_i$ are their preimages. In any case, for the data matrix $X$ above (really, just set $A = X$), SVD lets us write $$ X = \sum_{i=1}^r \sigma_i u_i v_i^T\,, $$ where $\{ u_i \}$ and $\{ v_i \}$ are orthonormal sets of vectors. A comparison with the eigenvalue decomposition of $S$ reveals that the "right singular vectors" $v_i$ are equal to the PCs, the "left singular vectors" are $$ u_i = \frac{1}{\sqrt{(n-1)\lambda_i}} Xv_i\,, $$ and the "singular values" $\sigma_i$ are related to the eigenvalues of $S$ via $$ \sigma_i^2 = (n-1) \lambda_i\,. $$ It's a general fact that the left singular vectors $u_i$ span the column space of $X$. In this specific case, $u_i$ gives us a scaled projection of the data $X$ onto the direction of the $i$-th principal component. The right singular vectors $v_i$ in general span the row space of $X$, which gives us a set of orthonormal vectors that spans the data much like PCs. I go into some more details and benefits of the relationship between PCA and SVD in this longer article.
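To make the $2\times 2$ example above concrete, here is a small NumPy check (my addition, not part of the original answer) that the singular vectors of $A$ satisfy $A v_i = \sigma_i u_i$, i.e. that the unit vectors $v_i$ are mapped onto the semi-axes of the ellipse:

import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0]])

# full SVD of the 2x2 example matrix: A = U diag(s) V^T
U, s, Vt = np.linalg.svd(A)
V = Vt.T

# each right singular vector v_i is mapped by A onto sigma_i * u_i,
# i.e. onto a principal semi-axis of the ellipse that A makes of the unit circle
for i in range(2):
    assert np.allclose(A @ V[:, i], s[i] * U[:, i])

print("singular values (semi-axis lengths):", s)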
54
Relationship between SVD and PCA. How to use SVD to perform PCA?
Let's try to understand this using a data matrix $X$ of dimension $n \times d$, where $d \gg n$ and $\operatorname{rank}(X)=n$. Then the reduced SVD is $\underset{n \times d}{X}=\underset{n \times n}{U}\,\underset{n \times n}{\Sigma}\, \underset{n \times d}{V^T}$, with $X^TX=V\Sigma^TU^TU\Sigma V^T=V\Sigma^2V^T$ (since $U$ and $V$ have orthonormal columns and $\Sigma$ is diagonal). Assuming $X$ is already mean-centered (i.e., the columns of $X$ have zero means), the covariance matrix is $C=\frac{X^TX}{n-1}=V\frac{\Sigma^2}{n-1}V^T=\tilde{V}\Lambda \tilde{V}^T$ (PCA, by spectral decomposition), which implies $\Lambda = \frac{\Sigma^2}{n-1}$ and $V=\tilde{V}$ up to sign flips.

Let's validate the above with eigenfaces (i.e., the principal components / eigenvectors of the covariance matrix for such a face dataset) using the following face dataset:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.datasets import fetch_olivetti_faces

X = fetch_olivetti_faces().data
X.shape  # 400 face images of size 64x64, flattened
# (400, 4096)
n = len(X)

# mean-center the data
X = X - np.mean(X, axis=0)    # mean-centering
# X = X / np.std(X, axis=0)   # optional scaling to sd=1 (correlation-matrix PCA)

# choose the first k eigenvalues / eigenvectors for dimensionality reduction
k = 25

# SVD
U, Σ, Vt = np.linalg.svd(X, full_matrices=False)

# PCA
pca = PCA(k).fit(X)
PC = pca.components_.T
# Vt.shape, PC.shape

Now let's compare the eigenvalues and eigenvectors computed:

# first k eigenvalues of Λ = Σ^2/(n-1)
print(Σ[:k]**2/(n-1))  # from SVD
# [18.840178 11.071763  6.304614   3.9545844  2.8560426  2.49771
#   1.9200633  1.611159   1.5492224  1.3229507  1.2621089  1.1369102
#   0.98639774 0.90758985 0.84092826 0.77355367 0.7271429  0.64526594
#   0.59645116 0.5910001  0.55270135 0.48628208 0.4619924  0.45075357
#   0.4321357 ]

print(pca.explained_variance_[:k])  # from PCA
# [18.840164 11.07176   6.3046117  3.9545813  2.8560433  2.4977121
#   1.9200654  1.6111585  1.549223   1.3229507  1.2621082  1.1369106
#   0.98639697 0.9075892  0.84092826 0.773553   0.72714305 0.64526534
#   0.59645087 0.5909973  0.55269724 0.4862703  0.461944   0.45075053
#   0.43211046]

# plot the k dominant principal components / eigenvectors as images:
# 1. using PC obtained with PCA
# 2. using Vt[:k,:].T obtained from SVD

Here the differences between the eigenvectors (plotted as eigenface images from both PCA and SVD) are due only to sign ambiguity (refer to https://www.osti.gov/servlets/purl/920802).
55
What is the difference between test set and validation set?
Typically, to perform supervised learning, you need two types of data sets:

In one dataset (your "gold standard"), you have the input data together with the correct/expected output. This dataset is usually duly prepared either by humans or by collecting some data in a semi-automated way. But you must have the expected output for every data row here, because you need this for supervised learning.

The second type is the data you are going to apply your model to. In many cases, this is the data for which you are interested in your model's output, and thus you don't have any "expected" output here yet.

While performing machine learning, you do the following:

Training phase: you present your data from your "gold standard" and train your model by pairing the input with the expected output.

Validation/Test phase: you estimate how well your model has been trained (this depends on the size of your data, the value you would like to predict, the input, etc.) and estimate model properties (mean error for numeric predictors, classification errors for classifiers, recall and precision for IR models, etc.).

Application phase: now you apply your freshly developed model to the real-world data and get the results. Since you usually don't have any reference value in this type of data (otherwise, why would you need your model?), you can only speculate about the quality of your model's output using the results of your validation phase.

The validation phase is often split into two parts: in the first part, you just look at your models and select the best-performing approach using the validation data (= validation); then you estimate the accuracy of the selected approach (= test). Hence the 50/25/25 split.

If you don't need to choose an appropriate model from several rival approaches, you can simply re-partition your data so that you basically have only a training set and a test set, without performing validation of your trained model. I personally partition them 70/30 then.

See also this question.
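For concreteness, here is a tiny sketch (my own addition, with a made-up number of rows) of the 50/25/25 partitioning described above, using shuffled row indices in NumPy:

import numpy as np

rng = np.random.default_rng(0)
n = 1000                                  # rows in the "gold standard" dataset
idx = rng.permutation(n)                  # shuffle once, then slice

# 50% training, 25% validation, 25% test
train_idx = idx[: n // 2]
val_idx   = idx[n // 2 : 3 * n // 4]
test_idx  = idx[3 * n // 4 :]

print(len(train_idx), len(val_idx), len(test_idx))   # 500 250 250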
56
What is the difference between test set and validation set?
Training set: a set of examples used for learning, that is, to fit the parameters of the classifier. In the Multilayer Perceptron (MLP) case, we would use the training set to find the “optimal” weights with the back-prop rule.

Validation set: a set of examples used to tune the hyper-parameters of a classifier. In the MLP case, we would use the validation set to find the “optimal” number of hidden units or to determine a stopping point for the back-propagation algorithm.

Test set: a set of examples used only to assess the performance of a fully-trained classifier. In the MLP case, we would use the test set to estimate the error rate after we have chosen the final model (MLP size and actual weights). After assessing the final model on the test set, YOU MUST NOT tune the model any further!

Why separate test and validation sets? The error rate estimate of the final model on validation data will be biased (smaller than the true error rate), since the validation set is used to select the final model. After assessing the final model on the test set, YOU MUST NOT tune the model any further!

Source: Introduction to Pattern Analysis, Ricardo Gutierrez-Osuna, Texas A&M University.
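To make the MLP example concrete, here is a hedged sketch (my own illustration, not from the quoted source; the toy data and candidate hidden-layer sizes are made up) of using the validation set to pick the number of hidden units and the test set for a single, final error estimate:

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# toy data; in practice use your real training/validation/test sets
rng = np.random.default_rng(0)
X = rng.random((600, 20))
y = (X[:, 0] + X[:, 1] > 1).astype(int)
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

best_h, best_score = None, -np.inf
for h in (5, 10, 20, 40):                       # candidate numbers of hidden units
    mlp = MLPClassifier(hidden_layer_sizes=(h,), max_iter=2000, random_state=0)
    mlp.fit(X_train, y_train)                   # fit weights on the training set
    score = mlp.score(X_val, y_val)             # model selection on the validation set
    if score > best_score:
        best_h, best_score = h, score

final = MLPClassifier(hidden_layer_sizes=(best_h,), max_iter=2000, random_state=0)
final.fit(X_train, y_train)
print("test error:", 1 - final.score(X_test, y_test))   # assessed once; no further tuning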
57
What is the difference between test set and validation set?
My five years of experience in Computer Science taught me that nothing is better than simplicity. The concept of Training/Cross-Validation/Test data sets is as simple as this. When you have a large data set, it's recommended to split it into three parts:

Training set (60% of the original data set): This is used to build up our prediction algorithm. Our algorithm tries to tune itself to the quirks of the training data set. In this phase we usually create multiple algorithms in order to compare their performances during the cross-validation phase.

Cross-validation set (20% of the original data set): This data set is used to compare the performances of the prediction algorithms that were created based on the training set. We choose the algorithm that has the best performance.

Test set (20% of the original data set): Now we have chosen our preferred prediction algorithm, but we don't know yet how it's going to perform on completely unseen real-world data. So, we apply our chosen prediction algorithm on our test set in order to see how it's going to perform, so we can have an idea about our algorithm's performance on unseen data.

Notes: It's very important to keep in mind that skipping the test phase is not recommended. The algorithm that performed well during the cross-validation phase is not necessarily the truly best one, because the algorithms are compared based on the cross-validation set and its quirks and noise. During the test phase, the purpose is to see how our final model is going to deal with data in the wild; in case its performance is very poor, we should repeat the whole process starting from the training phase.
58
What is the difference between test set and validation set?
At each step where you are asked to make a decision (i.e. choose one option among several options), you must have an additional set/partition to gauge the accuracy of your choice, so that you do not simply pick the most favorable result of randomness and mistake the tail-end of the distribution for the center [1]. The left is the pessimist. The right is the optimist. The center is the pragmatist. Be the pragmatist [2].

Step 1) Training: Each type of algorithm has its own parameter options (the number of layers in a Neural Network, the number of trees in a Random Forest, etc.). For each of your algorithms, you must pick one option. That’s why you have a training set.

Step 2) Validating: You now have a collection of algorithms. You must pick one algorithm. That’s why you have a test set. Most people pick the algorithm that performs best on the validation set (and that's ok). But, if you do not measure your top-performing algorithm’s error rate on the test set, and just go with its error rate on the validation set, then you have blindly mistaken the “best possible scenario” for the “most likely scenario.” That's a recipe for disaster.

Step 3) Testing: I suppose that if your algorithms did not have any parameters, then you would not need a third step. In that case, your validation step would be your test step. Perhaps Matlab does not ask you for parameters, or you have chosen not to use them, and that is the source of your confusion.

[1] It is often helpful to go into each step with the assumption (null hypothesis) that all options are the same (e.g. all parameters are the same or all algorithms are the same), hence my reference to the distribution.

[2] This image is not my own. I have taken it from this site: http://www.teamten.com/lawrence/writings/bell-curve.png
59
What is the difference between test set and validation set?
A typical machine learning task can be visualized as the following nested loop:

while (error in validation set > X) {
    tune hyper-parameters
    while (error in training set > Y) {
        tune parameters
    }
}

Typically the outer loop is performed by a human on the validation set, and the inner loop by the machine on the training set. You then need a third test set to assess the final performance of the model. In other words, the validation set is the training set for the human.
60
What is the difference between test set and validation set?
It does not follow that you need to split the data in any way. The bootstrap can provide smaller mean squared error estimates of prediction accuracy using the whole sample for both developing and testing the model.
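As a rough sketch of one bootstrap-based alternative to splitting (my own illustration of a simple out-of-bag bootstrap, not necessarily the exact optimism-corrected procedure the author has in mind): fit the model on bootstrap resamples of the whole dataset, evaluate each fit on the observations left out of that resample, and average the errors:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.5, size=200)

n, B, errors = len(X), 200, []
for _ in range(B):
    boot = rng.integers(0, n, size=n)                   # resample rows with replacement
    oob = np.setdiff1d(np.arange(n), boot)              # observations not in the resample
    if len(oob) == 0:
        continue
    model = LinearRegression().fit(X[boot], y[boot])    # "develop" on the bootstrap sample
    errors.append(mean_squared_error(y[oob], model.predict(X[oob])))  # "test" on the rest

print("bootstrap (out-of-bag) estimate of prediction MSE:", np.mean(errors))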
61
What is the difference between test set and validation set?
One way to think of these three sets is that two of them (training and validation) come from the past, whereas the test set comes from the "future". The model should be built and tuned using data from the "past" (training/validation data), but never using test data, which comes from the "future". To give a practical example, let's say we are building a model to predict how well baseball players will do in the future. We will use data from 1899-2014 to create the training and validation sets. Once the model is built and tuned on those data, we will use data from 2015 (actually in the past!) as a test set, which from the perspective of the model appears like "future" data and in no way influenced the model creation. (Obviously, in theory, we could wait for data from 2016 if we really wanted to!) Obviously I'm using quotes everywhere, because the actual temporal order of the data may not coincide with the actual future (by definition, all of the data generation probably took place in the actual past). In reality, the test set might simply be data from the same time period as the training/validation sets that you "hold out". In this way, it had no influence on tuning the model, but those hold-out data are not actually coming from the future.
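As a rough sketch of the "past vs. future" split (my own illustration with a hypothetical year column and made-up data, not from the original answer): everything used to build and tune the model comes from earlier years, and the held-out "future" year is touched only once at the end:

import numpy as np

rng = np.random.default_rng(0)
year = rng.integers(1899, 2016, size=5000)       # hypothetical season of each row
X = rng.normal(size=(5000, 8))                   # hypothetical player features
y = rng.normal(size=5000)                        # hypothetical outcome

past = year <= 2014                               # "past": used for training + validation
future = year == 2015                             # "future": test set, never used for tuning

X_past, y_past = X[past], y[past]
X_test, y_test = X[future], y[future]

# within the "past" data, split again into training and validation
n = len(X_past)
idx = rng.permutation(n)
train_idx, val_idx = idx[: int(0.8 * n)], idx[int(0.8 * n):]
print(len(train_idx), len(val_idx), len(X_test))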
62
What is the difference between test set and validation set?
Most supervised data mining algorithms follow these three steps:

1. The training set is used to build the model. This contains a set of data that has pre-classified target and predictor variables.

2. Typically a hold-out dataset or test set is used to evaluate how well the model does with data outside the training set. The test set contains the pre-classified results data, but these are not used while the test-set data are run through the model; only at the end are the pre-classified data compared against the model results. The model is adjusted to minimize the error on the test set.

3. Another hold-out dataset or validation set is used to evaluate the adjusted model from step 2; again, the validation-set data are run against the adjusted model and the results are compared to the unused pre-classified data.
63
What is the difference between test set and validation set?
Some people are confused about why we use a validation set, so I will give a simple, intuitive explanation of what will happen if you don't use a validation dataset.

If you don't use a validation set, you will instead have to pick hyperparameters and decide when to stop training based on the performance of the model on the testing dataset. If you decide when to stop training based on the performance of the model on the testing dataset, you could just stop training when the model happens to do well on the testing dataset. Then when you report your results, you report the accuracy on the testing dataset. The problem with this is that you could say your model did really well when in fact it was just a random variation that caused it to do better on just the testing set.

If you use a validation set instead to decide when to stop training, the accuracy of the model on the testing set is more of an unbiased reflection of how well it performs on the task in general, and it shows that you didn't optimize the model just to perform well on the testing set.
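Here is a minimal sketch (my own illustration, using scikit-learn's SGDClassifier with incremental fitting on made-up data; the patience rule is just one simple choice) of deciding when to stop training on the validation set and reporting accuracy on the untouched test set:

import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

clf = SGDClassifier(random_state=0)
best_val, patience, bad_epochs = -np.inf, 5, 0
for epoch in range(200):
    clf.partial_fit(X_train, y_train, classes=np.array([0, 1]))  # one more pass over the training set
    val_score = clf.score(X_val, y_val)          # monitored on the validation set only
    if val_score > best_val:
        best_val, bad_epochs = val_score, 0
    else:
        bad_epochs += 1
    if bad_epochs >= patience:                   # stop when validation stops improving
        break

# the test set is used only once, for the final report
print("epochs:", epoch + 1, "test accuracy:", clf.score(X_test, y_test))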
64
What is the difference between test set and validation set?
I would like to add to the other very good answers here by pointing to a relatively new approach in machine learning called "differential privacy" (see the papers by Dwork and the Win Vector Blog for more). The idea allows one to actually reuse the testing set without compromising the final estimate of model performance. In a typical setting, the test set is only used to estimate the final performance; ideally one is not even allowed to look at it. As is well described in this Win Vector blog post (see other entries as well), it is possible to "use" the test set without biasing the estimate of the model's performance. This is done using a special procedure called "differential privacy". The learner will not have direct access to the test set.
65
What is the difference between test set and validation set?
My idea is that this option in the neural network toolbox is for avoiding overfitting. In this situation the weights are fit to the training data only and do not capture the global trend. By having a validation set, the iterations can be stopped at the point where further decreases in the training-data error start to be accompanied by increases in the validation-data error; an increasing validation error alongside a decreasing training error demonstrates the overfitting phenomenon.
66
What is the intuition behind beta distribution?
The short version is that the Beta distribution can be understood as representing a distribution of probabilities, that is, it represents all the possible values of a probability when we don't know what that probability is. Here is my favorite intuitive explanation of this: Anyone who follows baseball is familiar with batting averages—simply the number of times a player gets a base hit divided by the number of times he goes up at bat (so it's just a percentage between 0 and 1). .266 is in general considered an average batting average, while .300 is considered an excellent one. Imagine we have a baseball player, and we want to predict what his season-long batting average will be. You might say we can just use his batting average so far- but this will be a very poor measure at the start of a season! If a player goes up to bat once and gets a single, his batting average is briefly 1.000, while if he strikes out, his batting average is 0.000. It doesn't get much better if you go up to bat five or six times- you could get a lucky streak and get an average of 1.000, or an unlucky streak and get an average of 0, neither of which are a remotely good predictor of how you will bat that season. Why is your batting average in the first few hits not a good predictor of your eventual batting average? When a player's first at-bat is a strikeout, why does no one predict that he'll never get a hit all season? Because we're going in with prior expectations. We know that in history, most batting averages over a season have hovered between something like .215 and .360, with some extremely rare exceptions on either side. We know that if a player gets a few strikeouts in a row at the start, that might indicate he'll end up a bit worse than average, but we know he probably won't deviate from that range. Given our batting average problem, which can be represented with a binomial distribution (a series of successes and failures), the best way to represent these prior expectations (what we in statistics just call a prior) is with the Beta distribution- it's saying, before we've seen the player take his first swing, what we roughly expect his batting average to be. The domain of the Beta distribution is (0, 1), just like a probability, so we already know we're on the right track, but the appropriateness of the Beta for this task goes far beyond that. We expect that the player's season-long batting average will be most likely around .27, but that it could reasonably range from .21 to .35. This can be represented with a Beta distribution with parameters $\alpha=81$ and $\beta=219$: curve(dbeta(x, 81, 219)) I came up with these parameters for two reasons: The mean is $\frac{\alpha}{\alpha+\beta}=\frac{81}{81+219}=.270$ As you can see in the plot, this distribution lies almost entirely within (.2, .35)- the reasonable range for a batting average. You asked what the x axis represents in a beta distribution density plot—here it represents his batting average. Thus notice that in this case, not only is the y-axis a probability (or more precisely a probability density), but the x-axis is as well (batting average is just a probability of a hit, after all)! The Beta distribution is representing a probability distribution of probabilities. But here's why the Beta distribution is so appropriate. Imagine the player gets a single hit. His record for the season is now 1 hit; 1 at bat. We have to then update our probabilities- we want to shift this entire curve over just a bit to reflect our new information. 
While the math for proving this is a bit involved (it's shown here), the result is very simple. The new Beta distribution will be: $\mbox{Beta}(\alpha_0+\mbox{hits}, \beta_0+\mbox{misses})$ Where $\alpha_0$ and $\beta_0$ are the parameters we started with- that is, 81 and 219. Thus, in this case, $\alpha$ has increased by 1 (his one hit), while $\beta$ has not increased at all (no misses yet). That means our new distribution is $\mbox{Beta}(81+1, 219)$, or: curve(dbeta(x, 82, 219)) Notice that it has barely changed at all- the change is indeed invisible to the naked eye! (That's because one hit doesn't really mean anything). However, the more the player hits over the course of the season, the more the curve will shift to accommodate the new evidence, and furthermore the more it will narrow based on the fact that we have more proof. Let's say halfway through the season he has been up to bat 300 times, hitting 100 out of those times. The new distribution would be $\mbox{Beta}(81+100, 219+200)$, or: curve(dbeta(x, 81+100, 219+200)) Notice the curve is now both thinner and shifted to the right (higher batting average) than it used to be- we have a better sense of what the player's batting average is. One of the most interesting outputs of this formula is the expected value of the resulting Beta distribution, which is basically your new estimate. Recall that the expected value of the Beta distribution is $\frac{\alpha}{\alpha+\beta}$. Thus, after 100 hits of 300 real at-bats, the expected value of the new Beta distribution is $\frac{81+100}{81+100+219+200}=.303$- notice that it is lower than the naive estimate of $\frac{100}{100+200}=.333$, but higher than the estimate you started the season with ($\frac{81}{81+219}=.270$). You might notice that this formula is equivalent to adding a "head start" to the number of hits and non-hits of a player- you're saying "start him off in the season with 81 hits and 219 non hits on his record"). Thus, the Beta distribution is best for representing a probabilistic distribution of probabilities: the case where we don't know what a probability is in advance, but we have some reasonable guesses.
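For readers who prefer Python to the R curve(dbeta(...)) calls above, here is a small equivalent sketch (my addition) of the prior, the update after 100 hits in 300 at-bats, and the resulting posterior mean:

from scipy.stats import beta

a0, b0 = 81, 219                  # prior: mean .270, mass mostly within (.21, .35)
prior = beta(a0, b0)
print(prior.mean())               # 0.27

hits, misses = 100, 200           # 100 hits in 300 at-bats halfway through the season
posterior = beta(a0 + hits, b0 + misses)
print(posterior.mean())           # about 0.30: above the prior .270, below the naive 100/300

# e.g. a 95% credible interval for the player's true batting average
print(posterior.interval(0.95))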
67
What is the intuition behind beta distribution?
A Beta distribution is used to model things that have a limited range, like 0 to 1. Examples are the probability of success in an experiment having only two outcomes, like success and failure. If you do a limited number of experiments, and some are successful, you can represent what that tells you by a beta distribution. Another example is order statistics. For example, if you generate several (say 4) uniform(0,1) random numbers and sort them, what is the distribution of the 3rd one? I use them to understand software performance diagnosis by sampling. If you stop a program at $n$ random times, and $s$ of those times you see it doing something you could actually get rid of, and $s>1$, then the fraction of time to be saved by doing so is represented by $\text{Beta}(s+1, (n-s)+1)$, and the speedup factor has a BetaPrime distribution. More about that...
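A quick sketch of that last point (my own addition, not part of the original answer; the values n = 10 and s = 4 are made up for illustration): if we stop the program 10 times and see the removable activity in 4 of those samples, the savable fraction of time is distributed as Beta(5, 7).

n <- 10   # number of random stops
s <- 4    # stops at which the removable activity was observed
curve(dbeta(x, s + 1, (n - s) + 1), from = 0, to = 1,
      xlab = "fraction of time spent in the removable activity", ylab = "density")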
68
What is the intuition behind beta distribution?
The Beta distribution also appears as an order statistic for a random sample of independent uniform distributions on $(0,1)$. Precisely, let $U_1$, $\ldots$, $U_n$ be $n$ independent random variables, each having the uniform distribution on $(0,1)$. Denote by $U_{(1)}$, $\ldots$, $U_{(n)}$ the order statistics of the random sample $(U_1, \ldots, U_n)$, defined by sorting the values of $U_1$, $\ldots$, $U_n$ in increasing order. In particular $U_{(1)}=\min(U_i)$ and $U_{(n)}=\max(U_i)$. Then one can show that $U_{(k)} \sim \textrm{Beta}(k, n+1-k)$ for every $k=1,\ldots,n$. This result shows that Beta distributions arise naturally in mathematics, and the connection has some interesting applications.
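A small simulation check of this claim (my addition; the choice $n = 4$, $k = 3$ is arbitrary): the 3rd smallest of 4 uniforms should follow $\textrm{Beta}(3, 2)$.

set.seed(1)
n <- 4; k <- 3
u3 <- replicate(100000, sort(runif(n))[k])                        # k-th order statistic
hist(u3, breaks = 50, freq = FALSE, main = "3rd of 4 uniform order statistics")
curve(dbeta(x, k, n + 1 - k), add = TRUE, col = "red", lwd = 2)   # Beta(k, n+1-k)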
69
What is the intuition behind beta distribution?
There are two principal motivations: First, the beta distribution is the conjugate prior to the Bernoulli distribution. That means that if you place a Beta prior on an unknown probability, such as the bias of a coin that you are estimating by repeated coin flips, then the posterior over that bias after observing a sequence of flips is again beta-distributed (and the likelihood of the flips, viewed as a function of the unknown bias, is itself proportional to a Beta density). Second, a consequence of the beta distribution being an exponential family is that it is the maximum entropy distribution for a set of sufficient statistics. In the beta distribution's case these statistics are $\log(x)$ and $\log(1-x)$ for $x$ in $[0,1]$. That means that if you only keep the average measurement of these sufficient statistics for a set of samples $x_1, \dots, x_n$, the least committal (maximum entropy) assumption you can make about the distribution of the samples is that it is beta-distributed. The beta distribution is not special for generally modeling things over [0,1] since many distributions can be truncated to that support and are more applicable in many cases.
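A minimal sketch of the conjugacy point (my addition; the Beta(2, 2) prior and the observed counts of 7 heads and 3 tails are arbitrary): the posterior is simply another Beta whose parameters are the prior parameters plus the observed counts.

a <- 2; b <- 2           # prior Beta(a, b) on the coin's bias
heads <- 7; tails <- 3   # observed flips
curve(dbeta(x, a, b), from = 0, to = 1, ylim = c(0, 3.5),
      xlab = "bias", ylab = "density")                            # prior
curve(dbeta(x, a + heads, b + tails), add = TRUE, col = "red")    # posterior Beta(9, 5)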
70
What is the intuition behind beta distribution?
Let's assume a seller on some e-commerce website receives 500 ratings of which 400 are good and 100 are bad. We think of this as the result of a Bernoulli experiment of length 500 which led to 400 successes (1 = good) while the underlying probability $p$ is unknown. The naive quality in terms of ratings of the seller is 80%, because 0.8 = 400 / 500. But the "true" quality in terms of ratings we don't know. Theoretically, a seller with "true" quality of $p=77\%$ might also have ended up with 400 good out of 500 ratings. The pointy bar plot in the picture represents the frequency of how often it happened in a simulation that, for a given assumed "true" $p$, 400 of 500 ratings were good. The bar plot is the density of the histogram of the result of the simulation. And as you can see, the density curve of the beta distribution for $\alpha=400+1$ and $\beta=100+1$ (orange) tightly surrounds the bar chart (the density of the histogram for the simulation). So the beta distribution essentially gives the probability density of a Bernoulli experiment's success probability $p$ given the outcome of the experiment.

library(ggplot2)

# 90% positive of 10 ratings
o1 <- 9
o0 <- 1
M <- 100
N <- 100000
m <- sapply(0:M/M, function(prob) rbinom(N, o1+o0, prob))
v <- colSums(m == o1)
df_sim1 <- data.frame(p=rep(0:M/M, v))
df_beta1 <- data.frame(p=0:M/M, y=dbeta(0:M/M, o1+1, o0+1))

# 80% positive of 500 ratings
o1 <- 400
o0 <- 100
M <- 100
N <- 100000
m <- sapply(0:M/M, function(prob) rbinom(N, o1+o0, prob))
v <- colSums(m == o1)
df_sim2 <- data.frame(p=rep(0:M/M, v))
df_beta2 <- data.frame(p=0:M/M, y=dbeta(0:M/M, o1+1, o0+1))

ggplot(data=df_sim1, aes(p)) +
  scale_x_continuous(breaks=0:10/10) +
  geom_histogram(aes(y=..density.., fill=..density..),
                 binwidth=0.01, origin=-.005, colour=I("gray")) +
  geom_line(data=df_beta1, aes(p, y), colour=I("red"), size=2, alpha=.5) +
  geom_histogram(data=df_sim2, aes(y=..density.., fill=..density..),
                 binwidth=0.01, origin=-.005, colour=I("gray")) +
  geom_line(data=df_beta2, aes(p, y), colour=I("orange"), size=2, alpha=.5)

https://www.joyofdata.de/blog/an-intuitive-interpretation-of-the-beta-distribution/
71
What is the intuition behind beta distribution?
So far the preponderance of answers covered the rationale for Beta RVs being generated as the prior for a sample proportion, and one clever answer has related Beta RVs to order statistics. Beta distributions also arise from a simple relationship between two $\textrm{Gamma}(k_i, 1)$ RVs, $i=1,2$; call them $X$ and $Y$. Then $X/(X+Y)$ has a $\textrm{Beta}(k_1, k_2)$ distribution. Gamma RVs already have their rationale in modeling arrival times for independent events, so I will not address that since it is not your question. But a "fraction of time" spent completing one of two tasks performed in sequence naturally lends itself to a Beta distribution.
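A quick simulation sketch of this Gamma-to-Beta relationship (my addition; the shape parameters k1 = 3 and k2 = 5 are arbitrary):

set.seed(1)
k1 <- 3; k2 <- 5
g1 <- rgamma(100000, shape = k1, rate = 1)
g2 <- rgamma(100000, shape = k2, rate = 1)
hist(g1 / (g1 + g2), breaks = 50, freq = FALSE, main = "X / (X + Y)")
curve(dbeta(x, k1, k2), add = TRUE, col = "red", lwd = 2)   # matches Beta(k1, k2)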
72
What is the intuition behind beta distribution?
Most of the answers here seem to cover two approaches: Bayesian and the order statistic. I'd like to add a viewpoint from the binomial, which I think is the easiest to grasp. The intuition for a beta distribution comes into play when we look at it through the lens of the binomial distribution. The difference between the binomial and the beta is that the former models the number of occurrences ($x$), while the latter models the probability ($p$) itself. In other words, the probability is a parameter in the binomial; in the beta, the probability is a random variable. Interpretation of $\boldsymbol{\alpha}$, $\boldsymbol{\beta}$ You can think of $\alpha-1$ as the number of successes and $\beta-1$ as the number of failures, just like the $x$ and $n-x$ terms in the binomial. You can choose the $\alpha$ and $\beta$ parameters to reflect whatever you believe. If you think the probability of success is very high, let's say 90%, set 90 for $\alpha$ and 10 for $\beta$. If you think otherwise, 90 for $\beta$ and 10 for $\alpha$. As $\alpha$ becomes larger (more successful events), the bulk of the probability distribution will shift towards the right, whereas an increase in $\beta$ moves the distribution towards the left (more failures). Also, the distribution will narrow if both $\alpha$ and $\beta$ increase, for we are more certain. The shapes the beta can take The PDF of the Beta distribution can be U-shaped with asymptotic ends, bell-shaped, strictly increasing/decreasing or even a straight line. As you change $\alpha$ or $\beta$, the shape of the distribution changes. a. Bell-shape Notice that the graph of the PDF with $\alpha = 8$ and $\beta = 2$ is in blue, not in red. The x-axis is the probability of success. The PDF of a beta distribution is approximately normal if $\alpha +\beta$ is large enough and $\alpha$ & $\beta$ are approximately equal. b. Straight Lines The beta PDF can be a straight line too. c. U-shape When $\alpha <1$, $\beta<1$, the PDF of the Beta is U-shaped. The intuition behind the shapes Why would Beta(2,2) be bell-shaped? If you think of $\alpha-1$ as the number of successes and $\beta-1$ as the number of failures, Beta(2,2) means you got 1 success and 1 failure. So it makes sense that the probability of success is highest at 0.5. Also, Beta(1,1) would mean you got zero for the head and zero for the tail. Then your guess about the probability of success should be the same throughout [0,1]. The horizontal straight line confirms it. What's the intuition for Beta(0.5, 0.5)? Why is it U-shaped? What does it mean to have negative (-0.5) heads and tails? I don't have an answer for this one yet. I even asked this on Stack Exchange but haven't gotten a response yet. If you have a good idea about the U-shaped Beta, please let me know! Update: I ended up writing a blog post about it.
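As a supplement (my addition, not part of the original answer), here is a small sketch plotting the shapes discussed above, using the parameter values named in the text:

p <- seq(0.005, 0.995, length = 199)   # avoid the endpoints, where Beta(0.5, 0.5) is infinite
par(mfrow = c(2, 2))
plot(p, dbeta(p, 8, 2),     type = "l", main = "Beta(8, 2): bell-like, skewed right")
plot(p, dbeta(p, 2, 2),     type = "l", main = "Beta(2, 2): symmetric bell")
plot(p, dbeta(p, 1, 1),     type = "l", main = "Beta(1, 1): flat line (uniform)")
plot(p, dbeta(p, 0.5, 0.5), type = "l", main = "Beta(0.5, 0.5): U-shape")
par(mfrow = c(1, 1))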
73
What is the intuition behind beta distribution?
My intuition says that it "weighs" both the current proportion of success, "$x$", and the current proportion of failure, "$(1-x)$": $f(x;\alpha,\beta) = \text{constant}\cdot x^{\alpha-1}(1-x)^{\beta-1}$, where the constant is $1/B(\alpha,\beta)$. The $\alpha$ is like a "weight" for the success's contribution. The $\beta$ is like a "weight" for the failure's contribution. You have a two-dimensional parameter space (one dimension for the success contribution and one for the failure contribution), which makes it somewhat difficult to think about and understand.
74
What is the intuition behind beta distribution?
In the cited example the parameters are alpha = 81 and beta = 219 from the prior year [81 hits in 300 at bats, so alpha = 81 and beta = 300 - 81 = 219]. I don't know what they call the prior assumption of 81 hits and 219 outs, but in plain English that's the a priori assumption. Notice how, as the season progresses, the curve shifts left or right and the modal probability shifts with it, but there is still a curve. I wonder whether the Law of Large Numbers eventually takes hold and drives the batting average back to .270. To guesstimate alpha and beta in general, take the total number of prior occurrences (at bats) together with the known batting average: the total hits give you alpha, and the grand total minus the hits (that is, the failures) gives you beta – and voila, you have your prior. Then work the additional data in as shown.
75
What is the intuition behind beta distribution?
The beta distribution is very useful when you are working with particle size distributions. It is not the right choice when you want to model a grain size distribution; for that case it is better to use the Tanh distribution $F(X) = \tanh((x/p)^n)$, which is not bounded on the right. By the way, what happens if you produce a size distribution from a microscopic observation, so that you have a particle distribution by number, but your aim is to work with a volume distribution? It is almost mandatory that the original number distribution be bounded on the right. Then the transformation is more consistent, because you can be sure that in the new volume distribution no mode, median or mean size falls outside the interval you are working in. Besides, you avoid the Greenland-Africa effect. The transformation is very easy if you have regular shapes, e.g. a sphere or a prism: you just add three units to the alpha parameter of the number beta distribution to get the volume distribution.
76
What is the intuition behind beta distribution?
To add to David Robinson's answer, the initial $\alpha$ and $\beta$ parameters of the Beta distribution can be computed from a desired mean ($\mu$) and standard deviation ($\sigma$) using the following formulae: $\alpha = \frac{-\mu(\mu^2-\mu+\sigma^2)}{\sigma^2}$ $\beta = \frac{(\mu-1)(\mu^2-\mu+\sigma^2)}{\sigma^2}$ For example, if the desired mean = 0.27 and standard deviation = 0.025, then: $\alpha = \frac{-0.27(0.27^2-0.27+0.025^2)}{0.025^2} \approx 84.88$ $\beta = \frac{(0.27-1)(0.27^2-0.27+0.025^2)}{0.025^2} \approx 229.48$ Compare these to the estimates in David's answer ($\alpha=81, \beta=219$) Based on the mean and variance (table on the right) of the Beta distribution and solved via WolframAlpha here.
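For convenience, here is a small helper implementing these formulae (my addition; the function name beta_params_from_moments is my own):

beta_params_from_moments <- function(mu, sigma) {
  common <- (mu^2 - mu + sigma^2) / sigma^2
  c(alpha = -mu * common, beta = (mu - 1) * common)
}
beta_params_from_moments(0.27, 0.025)   # roughly alpha = 84.88, beta = 229.48, as above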
77
What is the intuition behind beta distribution?
In another question concerning the beta distribution, the following intuition is provided: the beta distribution can be seen as the distribution of probabilities in the center of a jittered distribution. For details please check out the full answer at https://stats.stackexchange.com/a/429754/142758
78
What is the intuition behind beta distribution?
If you break a unit-length rod at uniformly random points into k+m pieces, keeping k pieces and discarding the other m, then the total length of the kept pieces follows a Beta(k, m) distribution. (See this question for more details. A related example is that Beta(k, n-k) is the distribution of the k-th smallest among n-1 independent variables uniformly distributed over the unit interval.)
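A quick simulation sketch of the rod-breaking claim (my addition; k = 2 and m = 3 are arbitrary):

set.seed(1)
k <- 2; m <- 3
kept <- replicate(50000, {
  cuts <- sort(runif(k + m - 1))    # k + m - 1 uniform break points give k + m pieces
  pieces <- diff(c(0, cuts, 1))
  sum(pieces[1:k])                  # total length of the k pieces we keep
})
hist(kept, breaks = 50, freq = FALSE, main = "total length of the kept pieces")
curve(dbeta(x, k, m), add = TRUE, col = "red", lwd = 2)   # Beta(k, m)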
79
What is the intuition behind beta distribution?
There are already so many awesome answers here, but I'd like to share with you how I interpret the "probabilistic distribution of probabilities" that @David Robinson described in the accepted answer, and add some supplementary points using some very simple illustrations and derivations. Imagine this: we have a coin and flip it in the following three scenarios. In scenario 1) we toss it five times and get TTTTT (five tails and zero heads); in scenario 2) we use the same coin, toss it also five times, and get HTTHH (three heads and two tails); in scenario 3) we take the same coin, toss it ten times, and get THHTHHTHTH (six heads and four tails). Then three issues arise: a) we don't have a strategy for guessing the probability before the first flip; b) in scenario 1 the probability of getting a head on the 6th toss, as we would work it out from the data, would be zero, i.e. impossible, which seems unrealistic (a black swan event); c) in scenarios 2 and 3 the (relative) frequencies of heads are both $0.6$, although we know the confidence should be higher in scenario 3. Therefore it is not enough to estimate the probability of heads with just a point estimate and no prior information; instead, we need a prior before we toss the coin and a probability distribution at each time step in the three cases above. The Beta distribution $\text{Beta}(\theta|\alpha_H, \alpha_T)$ can address these three problems, where $\theta \in [0, 1]$ is the unknown head probability, $\alpha_H$ counts the times heads occur and $\alpha_T$ the times tails occur. For issue a, we can assume before flipping the coin that heads and tails are equally likely, either by using a point estimate and saying that the chance of heads is 50%, or by employing the Beta distribution and setting the prior to $\text{Beta}(\theta|1, 1)$ (equivalent to the uniform distribution), meaning two virtual tosses (we can treat the hyperparameters (1, 1) as pseudocounts) in which we observed one head and one tail (as depicted below).

p = seq(0,1, length=100)
plot(p, dbeta(p, 1, 1), ylab="dbeta(p, 1, 1)", type ="l", col="blue")

In fact we can bridge the two methods by the following derivation:

$\begin{align*} E[\text{Beta}(\theta|\alpha_H, \alpha_T)] &= \int_0^1 \theta P(\theta|\alpha_H, \alpha_T) d\theta \hspace{2.15cm}\text{definition of expectation}\\ &=\dfrac{\int_0^1 \theta \{ \theta^{\alpha_H-1} (1-\theta)^{\alpha_T-1}\}\ d\theta}{B(\alpha_H,\alpha_T)}\hspace{.75cm} \text{definition of Beta; the denominator is a constant} \\ &= \dfrac{B(\alpha_H+1,\alpha_T)}{B(\alpha_H,\alpha_T)} \hspace{3cm}\text{$\theta \cdot \theta^{\alpha_H-1}=\theta^{\alpha_H}$} \\ &= \dfrac{\Gamma(\alpha_H+1) \Gamma(\alpha_T)}{\Gamma(\alpha_H+\alpha_T+1)} \dfrac{\Gamma(\alpha_H+\alpha_T)}{\Gamma(\alpha_H)\Gamma(\alpha_T)} \\ &= \dfrac{\alpha_H}{\alpha_H+\alpha_T} \end{align*}$

We see that the expectation $\frac{1}{1+1}=50\%$ is exactly equal to the point estimate, and we can also view the point estimate as a degenerate version of the Beta distribution (the point estimate puts all of its mass on the single value 0.5, whereas the Beta distribution spreads its mass over every probability in [0, 1]). For issue b, we can calculate the posterior as follows after getting $N$ observations $\mathcal{D}$ (here $N$ is 5: $N_T=5$ and $N_H=0$).

$\begin{align*} \text{Beta}(\theta|\mathcal{D}, \alpha_H, \alpha_T) &\propto P(\mathcal{D}|\theta,\alpha_H, \alpha_T)P(\theta|\alpha_H, \alpha_T) \hspace{.47cm}\text{likelihood $\times$ prior}\\ &= P(\mathcal{D}|\theta) P(\theta|\alpha_H, \alpha_T) \hspace{2cm} \text{see the note below}\\ &\propto \theta^{N_H} (1-\theta)^{N_T} \cdot \theta^{\alpha_H-1} (1-\theta)^{\alpha_T-1} \\ &= \theta^{N_H+\alpha_H-1} (1-\theta)^{N_T+\alpha_T-1} \\ &= \text{Beta}(\theta|\alpha_H+N_H, \alpha_T+N_T) \end{align*}$

(Note: $\mathcal{D}$ is independent of $\alpha_H$ and $\alpha_T$ given $\theta$.) We can plug in the prior and the $N$ observations and get $\text{Beta}(\theta|1+0, 1+5)$:

p = seq(0,1, length=100)
plot(p, dbeta(p, 1+0, 1+5), ylab="dbeta(p, 1+0, 1+5)", type ="l", col="blue")

We see that in this distribution over all possible head probabilities the density is high over the low probabilities, but it is never exactly zero elsewhere, and the expectation is $E[\text{Beta}(\theta|1+0, 1+5)] = \frac{1+0}{1+0+1+5}$ (this is Laplace smoothing, or additive smoothing) rather than 0/impossible (the problem in issue b). For issue c, we can calculate the two posteriors (along the same lines as the derivation above, again with the uniform prior) and compare them. When we get three heads and two tails we get $\text{Beta}(\theta|\mathcal{D}, \alpha_H, \alpha_T)=\text{Beta}(\theta|1+3, 1+2)$:

p = seq(0,1, length=100)
plot(p, dbeta(p, 1+3, 1+2), ylab="dbeta(p, 1+3, 1+2)", type ="l", col="blue")

When we get six heads and four tails we get $\text{Beta}(\theta|\mathcal{D}, \alpha_H, \alpha_T)=\text{Beta}(\theta|1+6, 1+4)$:

p = seq(0,1, length=100)
plot(p, dbeta(p, 1+6, 1+4), ylab="dbeta(p, 1+6, 1+4)", type ="l", col="blue")

We can calculate their expectations ($\frac{1+3}{1+3+1+2} = 0.571$ versus $\frac{1+6}{1+6+1+4} = 0.583$; without the prior, $\frac{3}{3+2} = \frac{6}{6+4} = 0.6$ in both cases), but we can see that the second curve is taller and narrower (more confident). The denominator of the expectation can be interpreted as a measure of confidence: the more evidence (either virtual or real) we have, the more confident the posterior, and the taller and narrower the curve of the Beta distribution. If we used only point estimates, as in issue c, that information would simply be lost. References: https://math.stackexchange.com/a/497599/351322; Section 17.3.1.3 of Probabilistic Graphical Models: Principles and Techniques.
80
What is the intuition behind beta distribution?
I think there is NO intuition behind the beta distribution! The beta distribution is just a very flexible distribution with a FIXED range! And for integer a and b it is even easy to deal with. Also, many special cases of the beta have their own natural meaning, like the uniform distribution. So if the data need to be modeled like this, or with slightly more flexibility, then the beta is a very good choice.
81
Why square the difference instead of taking the absolute value in standard deviation?
If the goal of the standard deviation is to summarise the spread of a symmetrical data set (i.e. in general how far each datum is from the mean), then we need a good method of defining how to measure that spread. The benefits of squaring include: Squaring always gives a non-negative value, so deviations above and below the mean cannot cancel each other out (the sum of squared deviations is zero only if every datum equals the mean). Squaring emphasizes larger differences, a feature that turns out to be both good and bad (think of the effect outliers have). Squaring does, however, have a problem as a measure of spread, namely that the units are all squared, whereas we might prefer the spread to be in the same units as the original data (think of squared pounds, squared dollars, or squared apples). Hence the square root allows us to return to the original units. I suppose you could say that absolute difference assigns equal weight to the spread of data whereas squaring emphasises the extremes. Technically though, as others have pointed out, squaring makes the algebra much easier to work with and offers properties that the absolute method does not (for example, the variance is equal to the expected value of the square of the distribution minus the square of the mean of the distribution). It is important to note, however, that there's no reason you couldn't take the absolute difference if that is your preference on how you wish to view 'spread' (sort of how some people see 5% as some magical threshold for $p$-values, when in fact it is situation dependent). Indeed, there are in fact several competing methods for measuring spread. My view is to use the squared values because I like to think of how it relates to the Pythagorean Theorem of Statistics: $c = \sqrt{a^2 + b^2}$ ...this also helps me remember that when working with independent random variables, variances add, standard deviations don't. But that's just my personal subjective preference which I mostly only use as a memory aid, feel free to ignore this paragraph. An interesting analysis can be read here: Revisiting a 90-year-old debate: the advantages of the mean deviation - Stephen Gorard (Department of Educational Studies, University of York); Paper presented at the British Educational Research Association Annual Conference, University of Manchester, 16-18 September 2004
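A tiny simulation of the "variances add, standard deviations don't" point (my addition; the two independent normals with standard deviations 3 and 4 are chosen so that the standard deviations form a 3-4-5 triangle):

set.seed(1)
a <- rnorm(100000, sd = 3)
b <- rnorm(100000, sd = 4)
var(a) + var(b); var(a + b)   # both approximately 25
sd(a) + sd(b); sd(a + b)      # about 7 versus about 5 = sqrt(3^2 + 4^2)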
82
Why square the difference instead of taking the absolute value in standard deviation?
The squared difference has nicer mathematical properties; it's continuously differentiable (nice when you want to minimize it), it's a sufficient statistic for the Gaussian distribution, and it's (a version of) the L2 norm which comes in handy for proving convergence and so on. The mean absolute deviation (the absolute value notation you suggest) is also used as a measure of dispersion, but it's not as "well-behaved" as the squared error.
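As a side illustration of the "nice when you want to minimize it" point (my addition, not a claim made in the answer above): the sum of squared deviations is minimized at the mean and is smooth there, while the sum of absolute deviations is minimized at the median and has a kink.

set.seed(1)
x <- rexp(11)                               # an arbitrary skewed sample
sq_loss  <- function(c) sum((x - c)^2)
abs_loss <- function(c) sum(abs(x - c))
optimize(sq_loss, range(x))$minimum; mean(x)     # essentially the same value
optimize(abs_loss, range(x))$minimum; median(x)  # approximately the same value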
83
Why square the difference instead of taking the absolute value in standard deviation?
One way you can think of this is that standard deviation is similar to a "distance from the mean". Compare this to distances in Euclidean space - this gives you the true distance, whereas what you suggested (which, btw, is the absolute deviation) is more like a Manhattan distance calculation.
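The analogy in numbers (my addition; the data vector is arbitrary): the standard deviation is built from the Euclidean (L2) norm of the deviation vector, while the mean absolute deviation is built from the Manhattan (L1) norm.

x <- c(2, 4, 4, 4, 5, 5, 7, 9)
d <- x - mean(x)
sqrt(mean(d^2))   # root-mean-square deviation, from the L2 norm: 2
mean(abs(d))      # mean absolute deviation, from the L1 norm: 1.5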
84
Why square the difference instead of taking the absolute value in standard deviation?
The reason that we calculate standard deviation instead of absolute error is that we are assuming error to be normally distributed. It's a part of the model. Suppose you were measuring very small lengths with a ruler; then the standard deviation is a bad metric for error because you know you will never accidentally measure a negative length. A better metric would be one to help fit a Gamma distribution to your measurements: $\log(E(x)) - E(\log(x))$. Like the standard deviation, this is also non-negative and differentiable, but it is a better error statistic for this problem.
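A minimal sketch of computing the proposed statistic on simulated Gamma-distributed measurements (my addition; the shape and rate values are arbitrary):

set.seed(1)
x <- rgamma(100000, shape = 5, rate = 2)
log(mean(x)) - mean(log(x))   # the proposed error statistic log(E(x)) - E(log(x))
sd(x)                         # the usual standard deviation, for comparison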
85
Why square the difference instead of taking the absolute value in standard deviation?
The answer that best satisfied me is that it falls out naturally from the generalization of a sample to n-dimensional Euclidean space. It's certainly debatable whether that's something that should be done, but in any case: Assume your $n$ measurements $X_i$ are each an axis in $\mathbb R^n$. Then your data $x_i$ define a point $\bf x$ in that space. Now you might notice that the data are all very similar to each other, so you can represent them with a single location parameter $\mu$ that is constrained to lie on the line defined by $X_i=\mu$. Projecting your datapoint onto this line gets you $\hat\mu=\bar x$, and the distance from the projected point $\hat\mu\bf 1$ to the actual datapoint is $\sqrt{n-1}\,\hat\sigma=\|\bf x-\hat\mu\bf 1\|$ (with $\hat\sigma$ the usual sample standard deviation using the $n-1$ denominator). This approach also gets you a geometric interpretation for correlation, $\hat\rho=\cos \angle(\vec{\bf\tilde x},\vec{\bf\tilde y})$.
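A quick numerical check of that identity (my addition; the data vector is arbitrary, and sd() in R uses the n - 1 denominator):

x <- c(2, 4, 4, 4, 5, 5, 7, 9)
n <- length(x)
sqrt(sum((x - mean(x))^2))   # Euclidean distance from the data point to its projection
sqrt(n - 1) * sd(x)          # the same number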
86
Why square the difference instead of taking the absolute value in standard deviation?
Squaring the difference from the mean is done for a couple of reasons. Variance is defined as the second moment of the deviation (the random variable here is $x-\mu$), and moments are simply expectations of powers of the random variable, hence the square. Having a square, as opposed to the absolute value function, gives a nice continuous and differentiable function (the absolute value is not differentiable at 0), which makes it the natural choice, especially in the context of estimation and regression analysis. The squared formulation also falls naturally out of the parameters of the normal distribution.
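To make the differentiability point concrete, here is a rough sketch (my addition, assuming NumPy, with an arbitrary toy sample): the sum of squared deviations is a smooth function of the centre value and is minimized at the mean, while the sum of absolute deviations has kinks and is minimized at the median.

import numpy as np

x = np.array([1.0, 2.0, 2.5, 7.0, 11.0])
grid = np.linspace(0, 12, 2401)

sse = ((x[None, :] - grid[:, None]) ** 2).sum(axis=1)    # smooth in the centre value
sad = np.abs(x[None, :] - grid[:, None]).sum(axis=1)     # kinked at every data point

print(grid[np.argmin(sse)], x.mean())                    # squared loss is minimized at the mean (4.7)
print(grid[np.argmin(sad)], np.median(x))                # absolute loss is minimized at the median (2.5)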
87
Why square the difference instead of taking the absolute value in standard deviation?
Just so people know, there is a Math Overflow question on the same topic: Why-is-it-so-cool-to-square-numbers-in-terms-of-finding-the-standard-deviation. The take-away message is that using the square root of the variance leads to easier maths. A similar response is given by Rich and Reed above.
88
Why square the difference instead of taking the absolute value in standard deviation?
$\newcommand{\var}{\operatorname{var}}$ Variances are additive: for independent random variables $X_1,\ldots,X_n$, $$ \var(X_1+\cdots+X_n)=\var(X_1)+\cdots+\var(X_n). $$ Notice what this makes possible: say I toss a fair coin 900 times. What's the probability that the number of heads I get is between 440 and 455 inclusive? Just find the expected number of heads ($450$) and the variance of the number of heads ($225=15^2$), then find the probability that a normal (or Gaussian) random variable with expectation $450$ and standard deviation $15$ falls between $439.5$ and $455.5$. Abraham de Moivre did this with coin tosses in the 18th century, thereby first showing that the bell-shaped curve is worth something.
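A quick numeric check of this worked example (my addition, assuming SciPy is available): compare the exact binomial probability with the normal approximation using the continuity-corrected bounds 439.5 and 455.5.

from scipy.stats import binom, norm

exact = binom.cdf(455, 900, 0.5) - binom.cdf(439, 900, 0.5)                      # P(440 <= heads <= 455)
approx = norm.cdf(455.5, loc=450, scale=15) - norm.cdf(439.5, loc=450, scale=15)
print(exact, approx)                                                             # the two values are very close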
89
Why square the difference instead of taking the absolute value in standard deviation?
Yet another reason (in addition to the excellent ones above) comes from Fisher himself, who showed that the standard deviation is more "efficient" than the absolute deviation. Here, efficient has to do with how much a statistic will fluctuate in value on different samplings from a population. If your population is normally distributed, the standard deviation of various samples from that population will, on average, tend to give you values that are pretty similar to each other, whereas the absolute deviation will give you numbers that spread out a bit more. Now, obviously this is in ideal circumstances, but this reason convinced a lot of people (along with the math being cleaner), so most people worked with standard deviations.
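A small simulation sketch of this efficiency claim (not Fisher's argument itself, just an illustration assuming NumPy): draw many normal samples and compare how much the standard deviation and the mean absolute deviation fluctuate relative to their own averages.

import numpy as np

rng = np.random.default_rng(2)
samples = rng.normal(size=(20_000, 100))                 # 20,000 samples of size 100

sds = samples.std(axis=1, ddof=1)
mads = np.abs(samples - samples.mean(axis=1, keepdims=True)).mean(axis=1)

# relative fluctuation (coefficient of variation) of each spread estimate;
# under normality the SD typically fluctuates relatively less than the mean absolute deviation
print(sds.std() / sds.mean(), mads.std() / mads.mean())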
90
Why square the difference instead of taking the absolute value in standard deviation?
Estimating the spread of a distribution requires choosing a distance. Any of the following distances can be used: $$d_n\big((X_i)_{i=1,\ldots,N},\mu\big)=\left(\sum_{i=1}^{N} | X_i-\mu|^n\right)^{1/n}$$ We usually use the natural Euclidean distance ($n=2$), which is the one everybody uses in daily life. The distance that you propose is the one with $n=1$. Both are good candidates, but they are different. One could decide to use $n=3$ as well. I am not sure that you will like my answer; my point, contrary to others, is not to demonstrate that $n=2$ is better. I think that if you want to measure the spread of a distribution, you can absolutely use a different distance.
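A short sketch (my addition, assuming NumPy) computing this family of distances for a toy sample; for $n=2$, dividing by $\sqrt{N}$ recovers the population standard deviation.

import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])   # toy data with population SD = 2
mu = x.mean()

for n in (1, 2, 3):
    d_n = (np.abs(x - mu) ** n).sum() ** (1.0 / n)
    print(n, d_n)

print((np.abs(x - mu) ** 2).sum() ** 0.5 / np.sqrt(len(x)))   # n = 2, normalized: the population SD, 2.0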
91
Why square the difference instead of taking the absolute value in standard deviation?
I think the contrast between using absolute deviations and squared deviations becomes clearer once you move beyond a single variable and think about linear regression. There's a nice discussion at http://en.wikipedia.org/wiki/Least_absolute_deviations, particularly the section "Contrasting Least Squares with Least Absolute Deviations", which links to some student exercises with a neat set of applets at http://www.math.wpi.edu/Course_Materials/SAS/lablets/7.3/73_choices.html. To summarise, least absolute deviations is more robust to outliers than ordinary least squares, but it can be unstable (a small change in even a single datum can give a big change in the fitted line) and doesn't always have a unique solution - there can be a whole range of fitted lines. Also, least absolute deviations requires iterative methods, while ordinary least squares has a simple closed-form solution, though that's not such a big deal now as it was in the days of Gauss and Legendre, of course.
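A hedged sketch of that contrast (not from the linked pages), assuming NumPy and SciPy: fit a line by ordinary least squares (closed form) and by least absolute deviations (via a general-purpose optimizer) to data containing one gross outlier; the variable names and noise levels are illustrative only.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 30)
y = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=x.size)
y[-1] += 15.0                                            # one gross outlier

X = np.column_stack([np.ones_like(x), x])

beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)         # closed-form least squares
lad_loss = lambda b: np.abs(y - X @ b).sum()             # least absolute deviations criterion
beta_lad = minimize(lad_loss, x0=beta_ols, method="Nelder-Mead").x   # iterative fit

print(beta_ols)                                          # noticeably pulled by the outlier
print(beta_lad)                                          # much closer to the true (2.0, 0.5)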
92
Why square the difference instead of taking the absolute value in standard deviation?
In many ways, the use of the standard deviation to summarize dispersion is jumping to a conclusion. You could say that the SD implicitly assumes a symmetric distribution because it treats distance below the mean the same as distance above the mean. The SD is surprisingly difficult to interpret to non-statisticians. One could argue that Gini's mean difference has broader application and is significantly more interpretable. It does not require one to declare a choice of a measure of central tendency, as the use of the SD does for the mean. Gini's mean difference is the average absolute difference between any two different observations. Besides being robust and easy to interpret, it happens to be 0.98 times as efficient as the SD if the distribution were actually Gaussian.
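A minimal sketch (my addition, assuming NumPy) of Gini's mean difference as described above: the average absolute difference over all pairs of distinct observations, computed here by brute force for a Gaussian sample.

import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=2_000)

diffs = np.abs(x[:, None] - x[None, :])                  # all pairwise absolute differences
gmd = diffs.sum() / (len(x) * (len(x) - 1))              # average over distinct pairs

print(gmd, x.std(ddof=1))                                # for Gaussian data, GMD is about 2/sqrt(pi) ~ 1.13 times sigma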
93
Why square the difference instead of taking the absolute value in standard deviation?
We square the difference of the x's from the mean because the Euclidean distance, proportional to the square root of the degrees of freedom (the number of x's, in a population measure), is the best measure of dispersion. That is, when the x's have zero mean, $\mu = 0$: $$ \sigma = \sqrt{\frac{\displaystyle\sum_{i=1}^{n}(x_i - \mu)^2} {n}} = \frac{\sqrt{\displaystyle\sum_{i=1}^{n}(x_i)^2}} {\sqrt{n}} = \frac{distance}{\sqrt{n}} $$ The square root of the sum of squares is the $n$-dimensional distance from the mean to the point in the $n$-dimensional space denoted by each data point.

Calculating distance
What's the distance from point 0 to point 5? $5-0 = 5$, $|0-5| = 5$, and $\sqrt{5^2} = 5$. Yes, that's trivial because it's a single dimension. How about the distance from point (0, 0) to point (3, 4)? If we can only go in 1 dimension at a time (like in city blocks), then we just add the numbers up (this is sometimes known as the Manhattan distance). But what about going in two dimensions at once? Then (by the Pythagorean theorem we all learned in high school) we square the distance in each dimension, sum the squares, and then take the square root to find the distance from the origin to the point: $$ \sqrt{3^2 + 4^2} = \sqrt{25} = 5 $$ (figure omitted)

Calculating distance in higher dimensions
Now let's consider the 3-dimensional case: for example, how about the distance from point (0, 0, 0) to point (2, 2, 1)? This is just $$ \sqrt{\sqrt{2^2 + 2^2}^2 + 1^2} = \sqrt{2^2 + 2^2 + 1^2} = \sqrt9 = 3 $$ because the distance for the first two x's forms the leg for computing the total distance with the final x: $$ \sqrt{\sqrt{x_1^2 + x_2^2}^2 + x_3^2} = \sqrt{x_1^2 + x_2^2 + x_3^2} $$ (figure omitted) We can continue to extend the rule of squaring each dimension's distance; this generalizes to what we call a Euclidean distance, for orthogonal measurements in $n$-dimensional space: $$ distance = \sqrt{ \sum\nolimits_{i=1}^n{x_i^2} } $$ and so the sum of orthogonal squares is the squared distance: $$ distance^2 = \sum_{i=1}^n{x_i^2} $$ What makes a measurement orthogonal (at right angles) to another? The condition is that there is no relationship between the two measurements. We would look for these measurements to be independent and identically distributed (i.i.d.).

Variance
Now recall the formula for population variance (from which we'll get the standard deviation): $$ \sigma^2 = \frac{\displaystyle\sum_{i=1}^{n}(x_i - \mu)^2} {n} $$ If we've already centered the data at 0 by subtracting the mean, we have: $$ \sigma^2 = \frac{\displaystyle\sum_{i=1}^{n}(x_i)^2} {n} $$ So we see the variance is just the squared distance, or $distance^2$ (see above), divided by the number of degrees of freedom (the number of dimensions on which the variables are free to vary). This is also the average contribution to $distance^2$ per measurement; "mean squared deviation" would also be an appropriate term.

Standard Deviation
Then we have the standard deviation, which is just the square root of the variance: $$ \sigma = \sqrt{\frac{\displaystyle\sum_{i=1}^{n}(x_i - \mu)^2} {n}} $$ which is, equivalently, the distance divided by the square root of the degrees of freedom: $$ \sigma = \frac{\sqrt{\displaystyle\sum_{i=1}^{n}(x_i)^2}} {\sqrt{n}} $$

Mean Absolute Deviation
Mean absolute deviation (MAD) is a measure of dispersion that uses the Manhattan distance, i.e. the sum of absolute values of the differences from the mean: $$ MAD = \frac{\displaystyle\sum_{i=1}^{n}|x_i - \mu|} {n} $$ Again, assuming the data are centered (the mean subtracted), we have the Manhattan distance divided by the number of measurements: $$ MAD = \frac{\displaystyle\sum_{i=1}^{n}|x_i|} {n} $$

Discussion
The mean absolute deviation is about 0.8 times (actually $\sqrt{2/\pi}$) the size of the standard deviation for a normally distributed dataset. Regardless of the distribution, the mean absolute deviation is less than or equal to the standard deviation. MAD understates the dispersion of a data set with extreme values, relative to the standard deviation. The mean absolute deviation is more robust to outliers (i.e. outliers do not have as great an effect on the statistic as they do on the standard deviation). Geometrically speaking, if the measurements are not orthogonal to each other (i.i.d.) - for example, if they were positively correlated - the mean absolute deviation would be a better descriptive statistic than the standard deviation, which relies on Euclidean distance (although this is usually considered fine). This table reflects the above information in a more concise way: $$ \begin{array}{lll} & MAD & \sigma \\ \hline size & \le \sigma & \ge MAD \\ size, \sim N & .8 \times \sigma & 1.25 \times MAD \\ outliers & robust & influenced \\ not\ i.i.d. & robust & ok \end{array} $$

Comments: Do you have a reference for "mean absolute deviation is about .8 times the size of the standard deviation for a normally distributed dataset"? Here are 10 simulations of one million samples from the standard normal distribution, which in fact bear the ratio out (about $0.798 \approx \sqrt{2/\pi}$):
>>> from numpy.random import standard_normal
>>> from numpy import mean, absolute, std
>>> for _ in range(10):
...     array = standard_normal(1_000_000)
...     print(std(array), mean(absolute(array - mean(array))))
...
0.9999303226807994 0.7980634269273035
1.001126461808081 0.7985832977798981
0.9994247275533893 0.7980171649802613
0.9994142105335478 0.7972367136320848
1.0001188211817726 0.798021564315937
1.000442654481297 0.7981845236910842
1.0001537518728232 0.7975554993742403
1.0002838369191982 0.798143108250063
0.9999060114455384 0.797895284109523
1.0004871065680165 0.798726062813422

Conclusion
We prefer the squared differences when calculating a measure of dispersion because we can exploit the Euclidean distance, which gives us a better descriptive statistic of the dispersion. When there are more relatively extreme values, the Euclidean distance accounts for that in the statistic, whereas the Manhattan distance gives each measurement equal weight.
94
Why square the difference instead of taking the absolute value in standard deviation?
There are many reasons; probably the main one is that the variance works well as a parameter of the normal distribution.
95
Why square the difference instead of taking the absolute value in standard deviation?
"Why square the difference" instead of "taking absolute value"? To answer very exactly, there is literature that gives the reasons it was adopted and the case for why most of those reasons do not hold. "Can't we simply take the absolute value...?". I am aware of literature in which the answer is yes it is being done and doing so is argued to be advantageous. Author Gorard states, first, using squares was previously adopted for reasons of simplicity of calculation but that those original reasons no longer hold. Gorard states, second, that OLS was adopted because Fisher found that results in samples of analyses that used OLS had smaller deviations than those that used absolute differences (roughly stated). Thus, it would seem that OLS may have benefits in some ideal circumstances; however, Gorard proceeds to note that there is some consensus (and he claims Fisher agreed) that under real world conditions (imperfect measurement of observations, non-uniform distributions, studies of a population without inference from a sample), using squares is worse than absolute differences. Gorard's response to your question "Can't we simply take the absolute value of the difference instead and get the expected value (mean) of those?" is yes. Another advantage is that using differences produces measures (measures of errors and variation) that are related to the ways we experience those ideas in life. Gorard says imagine people who split the restaurant bill evenly and some might intuitively notice that that method is unfair. Nobody there will square the errors; the differences are the point. Finally, using absolute differences, he notes, treats each observation equally, whereas by contrast squaring the differences gives observations predicted poorly greater weight than observations predicted well, which is like allowing certain observations to be included in the study multiple times. In summary, his general thrust is that there are today not many winning reasons to use squares and that by contrast using absolute differences has advantages. References: Gorard, S. (2005). Revisiting a 90-year-old debate: the advantages of the mean deviation, British Journal of Educational Studies, 53, 4, pp. 417-430. Gorard, S. (2013). The possible advantages of the mean absolute deviation ‘effect’ size, Social Research Update, 65:1.
96
Why square the difference instead of taking the absolute value in standard deviation?
It depends on what you are talking about when you say "spread of the data". To me this could mean two things: (1) the width of a sampling distribution, or (2) the accuracy of a given estimate. For point 1) there is no particular reason to use the standard deviation as a measure of spread, except for when you have a normal sampling distribution. The measure $E(|X-\mu|)$ is a more appropriate measure in the case of a Laplace sampling distribution. My guess is that the standard deviation gets used here because of intuition carried over from point 2). Probably also due to the success of least squares modelling in general, for which the standard deviation is the appropriate measure. Probably also because calculating $E(X^2)$ is generally easier than calculating $E(|X|)$ for most distributions. Now, for point 2) there is a very good reason for using the variance/standard deviation as the measure of spread, in one particular, but very common, case. You can see it in the Laplace approximation to a posterior. With data $D$ and prior information $I$, write the posterior for a parameter $\theta$ as: $$p(\theta\mid DI)=\frac{\exp\left(h(\theta)\right)}{\int \exp\left(h(t)\right)\,dt}\;\;\;\;\;\;h(\theta)\equiv\log[p(\theta\mid I)p(D\mid\theta I)]$$ I have used $t$ as a dummy variable to indicate that the denominator does not depend on $\theta$. If the posterior has a single well-rounded maximum (i.e. not too close to a "boundary"), we can Taylor expand the log probability about its maximum $\theta_\max$. Keeping terms up to second order (and using primes for differentiation) gives: $$h(\theta)\approx h(\theta_\max)+(\theta-\theta_\max)h'(\theta_\max)+\frac{1}{2}(\theta-\theta_\max)^{2}h''(\theta_\max)$$ But because $\theta_\max$ is a "well-rounded" maximum, $h'(\theta_\max)=0$, so we have: $$h(\theta)\approx h(\theta_\max)+\frac{1}{2}(\theta-\theta_\max)^{2}h''(\theta_\max)$$ If we plug in this approximation we get: $$p(\theta\mid DI)\approx\frac{\exp\left(h(\theta_\max)+\frac{1}{2}(\theta_\max-\theta)^{2}h''(\theta_\max)\right)}{\int \exp\left(h(\theta_\max)+\frac{1}{2}(\theta_\max-t)^{2}h''(\theta_\max)\right)\,dt}$$ $$=\frac{\exp\left(\frac{1}{2}(\theta_\max-\theta)^{2}h''(\theta_\max)\right)}{\int \exp\left(\frac{1}{2}(\theta_\max-t)^{2}h''(\theta_\max)\right)\,dt}$$ which, but for notation, is a normal distribution with mean equal to $E(\theta\mid DI)\approx\theta_\max$ and variance equal to $$V(\theta\mid DI)\approx \left[-h''(\theta_\max)\right]^{-1}$$ ($-h''(\theta_\max)$ is always positive because we have a well-rounded maximum). So this means that in "regular problems" (which is most of them), the variance is the fundamental quantity which determines the accuracy of estimates for $\theta$. So for estimates based on a large amount of data, the standard deviation makes a lot of sense theoretically - it tells you basically everything you need to know. Essentially the same argument applies (with the same conditions required) in the multi-dimensional case, with the Hessian matrix $h''(\theta)_{jk}=\frac{\partial^2 h(\theta)}{\partial \theta_j \, \partial \theta_k}$; the diagonal entries of $\left[-h''(\theta_\max)\right]^{-1}$ are essentially the variances here too.
The frequentist using the method of maximum likelihood will come to essentially the same conclusion, because the MLE tends to be a weighted combination of the data and for large samples the Central Limit Theorem applies; you basically get the same result if you take $p(\theta\mid I)=1$, but with $\theta$ and $\theta_\max$ interchanged: $$p(\theta_\max\mid\theta)\approx N\left(\theta,\left[-h''(\theta_\max)\right]^{-1}\right)$$ (see if you can guess which paradigm I prefer :P ). So either way, in parameter estimation the standard deviation is an important theoretical measure of spread.
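A small numeric illustration of the Laplace argument (my addition, assuming SciPy): for a binomial likelihood with a flat prior, compare the exact posterior standard deviation with $\left[-h''(\theta_\max)\right]^{-1/2}$. The data values are arbitrary.

from scipy.stats import beta

k, n = 70, 200                                  # 70 successes in 200 trials, flat prior
theta_max = k / n                               # posterior mode

# curvature of h(theta) = k*log(theta) + (n - k)*log(1 - theta) at the mode
h2 = -k / theta_max**2 - (n - k) / (1 - theta_max)**2
laplace_sd = (-h2) ** -0.5

exact_sd = beta(k + 1, n - k + 1).std()         # the exact posterior is Beta(k+1, n-k+1)
print(laplace_sd, exact_sd)                     # close, as the approximation predicts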
97
Why square the difference instead of taking the absolute value in standard deviation?
When adding independent (or merely uncorrelated) random variables, their variances add, whatever the distributions. Variance (and therefore standard deviation) is a useful measure for almost all distributions, and is in no way limited to Gaussian (aka "normal") distributions. That favors using it as our error measure. Lack of uniqueness is a serious problem with absolute differences, as there are often an infinite number of equal-measure "fits", and yet clearly the "one in the middle" is most realistically favored. Also, even with today's computers, computational efficiency matters. I work with large data sets, and CPU time is important. However, there is no single absolute "best" measure of residuals, as pointed out by some previous answers. Different circumstances sometimes call for different measures.
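To see the non-uniqueness point in the simplest possible setting, here is a sketch (my addition, assuming NumPy): with an even number of observations, every centre between the two middle values minimizes the sum of absolute deviations, whereas the squared-error minimizer (the mean) is unique.

import numpy as np

x = np.array([1.0, 2.0, 8.0, 9.0])
grid = np.linspace(0, 10, 1001)

sad = np.abs(x[None, :] - grid[:, None]).sum(axis=1)
flat = grid[np.isclose(sad, sad.min())]
print(flat.min(), flat.max())                            # every centre from 2.0 to 8.0 is equally good

sse = ((x[None, :] - grid[:, None]) ** 2).sum(axis=1)
print(grid[np.argmin(sse)], x.mean())                    # the squared-error minimizer is the unique mean, 5.0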
98
Why square the difference instead of taking the absolute value in standard deviation?
Because squares allow the use of many other mathematical operations or functions more easily than absolute values do. For example, squares can be integrated and differentiated with ease, and can be used in trigonometric, logarithmic and other functions.
99
Why square the difference instead of taking the absolute value in standard deviation?
Naturally you can describe the dispersion of a distribution in any meaningful way (absolute deviation, quantiles, etc.). One nice fact is that the variance is the second central moment, and a distribution whose moments all exist is determined by them under mild conditions (for example, when its moment generating function exists near zero). Another nice fact is that the variance is much more tractable mathematically than any comparable metric. Another fact is that the variance is one of the two parameters of the normal distribution in the usual parametrization, and the normal distribution is completely determined by those two parameters (all of its higher cumulants are zero). Even for non-normal distributions it can be helpful to think in a normal framework. As I see it, the reason the standard deviation exists as such is that in applications the square root of the variance regularly appears (such as when standardizing a random variable), which necessitated a name for it.
100
Why square the difference instead of taking the absolute value in standard deviation?
A different and perhaps more intuitive approach is to think about linear regression vs. median regression. Suppose our model is that $\mathbb{E}(y|x) = x\beta$. Then we find $\beta$ by minimizing the expected squared residual, $\beta = \arg \min_b \mathbb{E} (y - x b)^2$. If instead our model is that Median$(y|x) = x\beta$, then we find our parameter estimates by minimizing the absolute residuals, $\beta = \arg \min_b \mathbb{E} |y - x b|$. In other words, whether to use absolute or squared error depends on whether you want to model the expected value or the median value. If the distribution, for example, displays skewed heteroscedasticity, then there can be a big difference between how the expected value of $y$ changes with $x$ and how the median value of $y$ does. Koenker and Hallock have a nice piece on quantile regression, where median regression is a special case: http://master272.com/finance/QR/QRJEP.pdf.
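A hedged sketch of this contrast (my addition, assuming NumPy and SciPy; the data-generating process and the use of a general-purpose optimizer instead of a dedicated quantile-regression routine are illustrative choices): under right-skewed, heteroscedastic noise the squared-error fit tracks the conditional mean and the absolute-error fit tracks the conditional median, and their slopes differ.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, size=500)
# right-skewed noise whose spread grows with x, so mean(y|x) and median(y|x) have different slopes
y = 1.0 + 0.5 * x + (0.2 + 0.3 * x) * rng.exponential(size=x.size)

X = np.column_stack([np.ones_like(x), x])

beta_mean, *_ = np.linalg.lstsq(X, y, rcond=None)        # squared error: conditional-mean fit
beta_median = minimize(lambda b: np.abs(y - X @ b).sum(),
                       x0=beta_mean, method="Nelder-Mead").x   # absolute error: conditional-median fit

print(beta_mean)                                         # slope near 0.5 + 0.3 (exponential noise has mean 1)
print(beta_median)                                       # slope near 0.5 + 0.3*ln(2) (its median is ln 2)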