501
What are common statistical sins?
Requesting, and perhaps obtaining, The Flow Chart: that graphical thing where you say what the levels of your variables are and what sort of relationship you're looking for, and you follow the arrows down to get a Brand Name Test or a Brand Name Statistic. Sometimes offered with mysterious 'parametric' and 'non-parametric' paths.
502
What are common statistical sins?
Using pie charts to illustrate relative frequencies. More here.
503
What are common statistical sins?
Using statistics/probability in hypothesis testing to measure the "absolute truth". Statistics simply cannot do this; they can only be of use in deciding between alternatives, which must be specified from "outside" the statistical paradigm. Statements such as "the null hypothesis is proved true by the statistics" are just incorrect; statistics can only tell you "the null hypothesis is favoured by the data, compared to the alternative hypothesis". If you then assume that either the null hypothesis or the alternative must be true, you can say "the null is proved true", but this is only a trivial consequence of your assumption, not anything demonstrated by the data.
504
What are common statistical sins?
Repeating the same or similar experiment over 20 times on the same data and then reporting a statistically significant result with $\alpha = 0.05$. Incidentally, there is a comic about this one. And, similarly to (or almost the same as) @ogrisel's answer, performing a grid search and reporting only the best result.
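A quick way to see the problem is to compute the family-wise error rate directly. The sketch below is not part of the original answer; it uses independent tests for simplicity, and the 30-sample t-tests and 2000 repetitions are arbitrary choices. It estimates how often at least one of 20 tests on pure noise comes out "significant" at $\alpha = 0.05$, and prints the Bonferroni-adjusted threshold as one common correction.

```python
import numpy as np
from scipy import stats

alpha, n_tests = 0.05, 20

# Probability of at least one false positive across 20 independent tests
print("Analytic FWER:", 1 - (1 - alpha) ** n_tests)   # ~0.64

# Simulation: 20 two-sample t-tests on pure-noise data, repeated 2000 times
rng = np.random.default_rng(0)
false_alarm_runs = 0
for _ in range(2000):
    pvals = [stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
             for _ in range(n_tests)]
    false_alarm_runs += min(pvals) < alpha
print("Simulated FWER:", false_alarm_runs / 2000)

# One common fix: Bonferroni-adjusted significance threshold
print("Bonferroni threshold:", alpha / n_tests)
```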
505
What are common statistical sins?
(With a bit of luck this will be controversial.) Using a Neyman-Pearson approach to statistical analysis of scientific experiments. Or, worse, using an ill-defined hybrid of Neyman-Pearson and Fisher.
506
ROC vs precision-and-recall curves
The key difference is that ROC curves will be the same no matter what the baseline probability is, but PR curves may be more useful in practice for needle-in-haystack type problems or problems where the "positive" class is more interesting than the negative class.

To show this, first let's start with a very nice way to define precision, recall and specificity. Assume you have a "positive" class called 1 and a "negative" class called 0. $\hat{Y}$ is your estimate of the true class label $Y$. Then:
$$
\begin{aligned}
\text{Precision} &= P(Y = 1 \mid \hat{Y} = 1) \\
\text{Recall} = \text{Sensitivity} &= P(\hat{Y} = 1 \mid Y = 1) \\
\text{Specificity} &= P(\hat{Y} = 0 \mid Y = 0)
\end{aligned}
$$

The key thing to note is that sensitivity/recall and specificity, which make up the ROC curve, are probabilities conditioned on the true class label. Therefore, they will be the same regardless of what $P(Y = 1)$ is. Precision is a probability conditioned on your estimate of the class label and will thus vary if you try your classifier in different populations with different baseline $P(Y = 1)$.

However, it may be more useful in practice if you only care about one population with known background probability and the "positive" class is much more interesting than the "negative" class. (IIRC precision is popular in the document retrieval field, where this is the case.) This is because it directly answers the question, "What is the probability that this is a real hit given my classifier says it is?".

Interestingly, by Bayes' theorem you can work out cases where specificity can be very high and precision very low simultaneously. All you have to do is assume $P(Y = 1)$ is very close to zero. In practice I've developed several classifiers with this performance characteristic when searching for needles in DNA sequence haystacks.

IMHO when writing a paper you should provide whichever curve answers the question you want answered (or whichever one is more favorable to your method, if you're cynical). If your question is: "How meaningful is a positive result from my classifier given the baseline probabilities of my problem?", use a PR curve. If your question is, "How well can this classifier be expected to perform in general, at a variety of different baseline probabilities?", go with a ROC curve.
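To make the prevalence-dependence concrete, here is a small sketch (the sensitivity and specificity values are made up for illustration). It holds sensitivity and specificity fixed and recomputes precision via Bayes' theorem at several baseline rates $P(Y = 1)$:

```python
# A classifier with fixed sensitivity/specificity evaluated at different
# baseline prevalences P(Y=1): sensitivity and specificity do not move,
# while precision collapses as the positive class gets rarer.
sensitivity, specificity = 0.90, 0.95   # assumed values for illustration

for prevalence in (0.5, 0.1, 0.001):
    tp = sensitivity * prevalence               # P(Y=1, Yhat=1)
    fp = (1 - specificity) * (1 - prevalence)   # P(Y=0, Yhat=1)
    precision = tp / (tp + fp)                  # P(Y=1 | Yhat=1)
    print(f"P(Y=1)={prevalence:<6} precision={precision:.4f} "
          f"sensitivity={sensitivity} specificity={specificity}")
```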
507
ROC vs precision-and-recall curves
Here are the conclusions from a paper by Davis & Goadrich explaining the relationship between ROC and PR space. They answer the first two questions:

First, for any dataset, the ROC curve and PR curve for a given algorithm contain the same points. This equivalence leads to the surprising theorem that a curve dominates in ROC space if and only if it dominates in PR space. Second, as a corollary to the theorem we show the existence of the PR space analog to the convex hull in ROC space, which we call the achievable PR curve. Remarkably, when constructing the achievable PR curve one discards exactly the same points omitted by the convex hull in ROC space. Consequently, we can efficiently compute the achievable PR curve. [...] Finally, we show that an algorithm that optimizes the area under the ROC curve is not guaranteed to optimize the area under the PR curve.

In other words, in principle, ROC and PR are equally suited to compare results. But for the example case of a result of 20 hits and 1980 misses they show that the differences can be rather drastic, as shown in Figures 11 and 12 of the paper. Result/curve (I) describes a result where 10 of the 20 hits are in the top ten ranks and the remaining 10 hits are then evenly spread out over the first 1500 ranks. Result (II) describes a result where the 20 hits are evenly spread over the first 500 (out of 2000) ranks. So in cases where a result "shape" like (I) is preferable, this preference is clearly distinguishable in PR space, while the ROC AUCs of the two results are nearly equal.
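If you want to run this kind of comparison yourself, a rough sketch with scikit-learn is below. The two score vectors are hypothetical and are not the paper's exact construction, so they will not reproduce Figures 11 and 12; they only show how ROC AUC and PR AUC (average precision) can rank two result "shapes" differently on a 20-hit / 1980-miss problem.

```python
# Hypothetical rankings: A puts half the hits at the very top and buries the
# rest; B spreads all hits moderately high. ROC AUC and average precision
# typically disagree about which ranking is "better" here.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
n_pos, n_neg = 20, 1980
y_true = np.r_[np.ones(n_pos), np.zeros(n_neg)]

scores_a = np.r_[rng.uniform(0.9, 1.0, 10),      # 10 hits above every miss
                 rng.uniform(0.0, 0.6, 10),      # 10 hits buried among misses
                 rng.uniform(0.0, 0.8, n_neg)]
scores_b = np.r_[rng.uniform(0.6, 0.8, n_pos),   # all hits fairly high, none at the top
                 rng.uniform(0.0, 0.8, n_neg)]

for name, s in [("A", scores_a), ("B", scores_b)]:
    print(name,
          "ROC AUC:", round(roc_auc_score(y_true, s), 3),
          "PR AUC (average precision):", round(average_precision_score(y_true, s), 3))
```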
508
ROC vs precision-and-recall curves
There is a lot of misunderstanding about evaluation. Part of this comes from the Machine Learning approach of trying to optimize algorithms on datasets, with no real interest in the data. In a medical context, it's about the real-world outcomes - how many people you save from dying, for example. In a medical context, Sensitivity (TPR) is used to see how many of the positive cases are correctly picked up (minimizing the proportion missed as false negatives = FNR), while Specificity (TNR) is used to see how many of the negative cases are correctly eliminated (minimizing the proportion found as false positives = FPR).

Some diseases have a prevalence of one in a million. Thus if you always predict negative you have an Accuracy of 0.999999 - this is achieved by the simple ZeroR learner that just predicts the majority class. If we consider the Recall and Precision for predicting that you are disease-free, then we have Recall=1 and Precision=0.999999 for ZeroR. Of course, if you reverse +ve and -ve and try to predict that a person has the disease with ZeroR, you get Recall=0 and Precision=undefined (as you didn't even make a positive prediction, although people often define Precision as 0 in this case). Note that Recall (+ve Recall) and Inverse Recall (-ve Recall), and the related TPR, FPR, TNR & FNR, are always defined, because we are only tackling the problem because we know there are two classes to distinguish and we deliberately provide examples of each.

Note the huge difference between missing cancer in the medical context (someone dies and you get sued) versus missing a paper in a web search (good chance one of the others will reference it if it's important). In both cases these errors are characterized as false negatives, versus a large population of negatives. In the web-search case we will automatically get a large population of true negatives simply because we only show a small number of results (e.g. 10 or 100), and not being shown shouldn't really be taken as a negative prediction (it might have been result 101), whereas in the cancer-test case we have a result for every person and, unlike web search, we actively control the false negative level (rate).

So ROC is exploring the tradeoff between true positives (versus false negatives as a proportion of the real positives) and false positives (versus true negatives as a proportion of the real negatives). It is equivalent to comparing Sensitivity (+ve Recall) and Specificity (-ve Recall). There is also a PN graph which looks the same, where we plot TP vs FP rather than TPR vs FPR - but since we make the plot square, the only difference is the numbers we put on the scales. They are related by the constants TPR=TP/RP, FPR=FP/RN, where RP=TP+FN and RN=FP+TN are the numbers of Real Positives and Real Negatives in the dataset, and conversely the biases PP=TP+FP and PN=TN+FN are the numbers of times we Predict Positive or Predict Negative. Note that we call rp=RP/N and rn=RN/N the prevalence of positive resp. negative, and pp=PP/N and pn=PN/N the bias to positive resp. negative.

If we sum or average Sensitivity and Specificity, or look at the Area Under the tradeoff Curve (equivalent to ROC, just reversing the x-axis), we get the same result if we interchange which class is +ve and which is -ve. This is NOT true for Precision and Recall (as illustrated above with disease prediction by ZeroR). This arbitrariness is a major deficiency of Precision, Recall and their averages (whether arithmetic, geometric or harmonic) and tradeoff graphs.
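To pin down the one-in-a-million ZeroR example numerically, here is a small sketch (counts assumed: exactly one real positive among a million instances). Informedness (Youden's J = TPR + TNR - 1) exposes ZeroR as chance-level despite the 0.999999 accuracy:

```python
# ZeroR ("always predict negative") on a disease with prevalence one in a million.
N = 1_000_000
RP, RN = 1, N - 1                 # real positives / real negatives (assumed counts)

# ZeroR predicts the majority class ("disease-free") for everyone
TP, FP, FN, TN = 0, 0, RP, RN

accuracy      = (TP + TN) / N                  # 0.999999
recall_pos    = TP / RP                        # 0.0 -> every sick person is missed
recall_neg    = TN / RN                        # 1.0 (Inverse Recall)
precision_neg = TN / (TN + FN)                 # 0.999999; +ve precision is 0/0 (undefined)
informedness  = recall_pos + recall_neg - 1    # Youden's J = 0 -> no better than chance

print(accuracy, recall_pos, recall_neg, precision_neg, informedness)
```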
The PR, PN, ROC, LIFT and other charts are plotted as parameters of the system are changed. They classically plot a point for each individual system trained, often with a threshold being increased or decreased to change the point at which an instance is classed positive versus negative. Sometimes the plotted points may be averages over (changing parameters/thresholds/algorithms of) sets of systems trained in the same way (but using different random numbers or samplings or orderings). These are theoretical constructs that tell us about the average behaviour of the systems rather than their performance on a particular problem. The tradeoff charts are intended to help us choose the correct operating point for a particular application (dataset and approach), and this is where ROC gets its name from (Receiver Operating Characteristics aims to maximize the information received, in the sense of informedness).

Let us consider what Recall or TPR or TP can be plotted against:

- TP vs FP (PN) - looks exactly like the ROC plot, just with different numbers
- TPR vs FPR (ROC) - TPR against FPR; the AUC is unchanged if +/- are reversed
- TPR vs TNR (alt ROC) - mirror image of ROC as TNR=1-FPR (TN+FP=RN)
- TP vs PP (LIFT) - x increments for positive and negative examples (nonlinear stretch)
- TPR vs pp (alt LIFT) - looks the same as LIFT, just with different numbers
- TP vs 1/PP - very similar to LIFT (but inverted with nonlinear stretch)
- TPR vs 1/PP - looks the same as TP vs 1/PP (different numbers on the y-axis)
- TP vs TP/PP - similar but with expansion of the x-axis (TP = X -> TP = X*TP)
- TPR vs TP/PP - looks the same but with different numbers on the axes

The last is Recall vs Precision! Note that for these graphs any curve that dominates another (is better, or at least as high, at all points) will still dominate after these transformations. Since domination means "at least as high" at every point, the higher curve also has "at least as high" an Area under the Curve (AUC), as it also includes the area between the curves. The reverse is not true: if curves intersect, as opposed to touch, there is no dominance, but one AUC can still be bigger than the other. All the transformations do is reflect and/or zoom in different (non-linear) ways to a particular part of the ROC or PN graph.

However, only ROC has the nice interpretation of the Area under the Curve (probability that a positive is ranked higher than a negative - the Mann-Whitney U statistic) and the Distance above the Curve (probability that an informed decision is made rather than a guess - the Youden J statistic, the dichotomous form of Informedness). Generally, there is no need to use the PR tradeoff curve, and you can simply zoom into the ROC curve if detail is required. The ROC curve has the unique property that the diagonal (TPR=FPR) represents chance, that the Distance above the Chance line (DAC) represents Informedness or the probability of an informed decision, and that the Area under the Curve (AUC) represents Rankedness or the probability of correct pairwise ranking. These results do not hold for the PR curve, and the AUC gets distorted for higher Recall or TPR as explained above. PR AUC being bigger does not imply ROC AUC is bigger, and thus does not imply increased Rankedness (probability of ranked +/- pairs being correctly predicted - viz. how often it predicts +ves above -ves) and does not imply increased Informedness (probability of an informed prediction rather than a random guess - viz. how often it knows what it's doing when it makes a prediction). Sorry - no graphs!
If anyone wants to add graphs to illustrate the above transformations, that would be great! I do have quite a few in my papers about ROC, LIFT, BIRD, Kappa, F-measure, Informedness, etc., but they aren't presented in quite this way, although there are illustrations of ROC vs LIFT vs BIRD vs RP in https://arxiv.org/pdf/1505.00401.pdf

UPDATE: To avoid trying to give full explanations in overlong answers or comments, here are some of my papers "discovering" the problem with Precision vs Recall tradeoffs incl. F1, deriving Informedness and then "exploring" the relationships with ROC, Kappa, Significance, DeltaP, AUC, etc. This is a problem one of my students bumped into 20 years ago (Entwisle), and many more have since found real-world examples of their own where there was empirical proof that the R/P/F/A approach sent the learner the WRONG way, while Informedness (or Kappa or Correlation in appropriate cases) sent them the RIGHT way - now across dozens of fields. There are also many good and relevant papers by other authors on Kappa and ROC, but when to use Kappa versus ROC AUC versus ROC height (Informedness or Youden's J) is clarified in the 2012 papers I list (many of the important papers of others are cited in them). The 2003 Bookmaker paper derives for the first time a formula for Informedness for the multiclass case. The 2013 paper derives a multiclass version of Adaboost adapted to optimize Informedness (with links to the modified Weka that hosts and runs it).

References

1998: The present use of statistics in the evaluation of NLP parsers. J Entwisle, DMW Powers - Proceedings of the Joint Conferences on New Methods in Language Processing: 215-224. https://dl.acm.org/citation.cfm?id=1603935 (Cited by 15)

2003: Recall & Precision versus The Bookmaker. DMW Powers - International Conference on Cognitive Science: 529-534. http://dspace2.flinders.edu.au/xmlui/handle/2328/27159 (Cited by 46)

2011: Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation. DMW Powers - Journal of Machine Learning Technology 2(1): 37-63. http://dspace2.flinders.edu.au/xmlui/handle/2328/27165 (Cited by 1749)

2012: The problem with kappa. DMW Powers - Proceedings of the 13th Conference of the European ACL: 345-355. https://dl.acm.org/citation.cfm?id=2380859 (Cited by 63)

2012: ROC-ConCert: ROC-Based Measurement of Consistency and Certainty. DMW Powers - Spring Congress on Engineering and Technology (S-CET) 2: 238-241. http://www.academia.edu/download/31939951/201203-SCET30795-ROC-ConCert-PID1124774.pdf (Cited by 5)

2013: ADABOOK & MULTIBOOK: Adaptive Boosting with Chance Correction. DMW Powers - ICINCO International Conference on Informatics in Control, Automation and Robotics. http://www.academia.edu/download/31947210/201309-AdaBook-ICINCO-SCITE-Harvard-2upcor_poster.pdf (Cited by 4)
https://www.dropbox.com/s/artzz1l3vozb6c4/weka.jar (goes into Java Class Path)
https://www.dropbox.com/s/dqws9ixew3egraj/wekagui (GUI start script for Unix)
https://www.dropbox.com/s/4j3fwx997kq2xcq/wekagui.bat (GUI shortcut on Windows)
509
ROC vs precision-and-recall curves
TL;DR: $AUC_{PvR}$ highlights the amount of false positives relative to the class size, whereas $AUC_{ROC}$ better reflects the total amount of false positives independent of which class they come up in.

Definition

The $AUC_{ROC}$ (receiver operating characteristic) is the area under the curve of true positive rate vs false positive rate. The $AUC_{PvR}$ (precision vs recall) is the area under the curve of the precision vs recall metrics. As the true-positive rate equals recall, they only differ in comparing the recall to either precision or false-positive rate:
$$
\begin{align}
AUC_{PvR} &= AUC\left(\frac{\mathbf{TP}}{\mathbf{TP}+\mathbf{FP}}, \frac{TP}{TP + FN}\right) = AUC\left(\textbf{precision}, \text{recall}\right) \\
AUC_{ROC} &= AUC\left(\frac{\mathbf{FP}}{\mathbf{FP} + \mathbf{TN}}, \frac{TP}{TP+FN}\right) = AUC\left(\textbf{FPR}, \text{recall}\right)
\end{align}
$$

Illustration

Consider an unbalanced multiclass dataset with a million instances. We are given two classifiers and observe their predictions at a fixed threshold for one of the classes:

Classifier A: 100 instances predicted positive, 90 of which correctly (with 100 true cases)
Classifier B: 2000 instances predicted positive, 90 of which correctly (with 100 true cases)

The respective values entering the $AUC_{ROC}$ at this threshold are:

Classifier A: 0.9 TPR, 0.00001 FPR
Classifier B: 0.9 TPR, 0.00191 FPR (gain of 0.0019)

The respective values entering the $AUC_{PvR}$ at this threshold are:

Classifier A: 0.9 recall, 0.9 precision
Classifier B: 0.9 recall, 0.045 precision (gain of 0.855)

Discussion

As you can see, by choosing classifier B over A, the gain in false positive rate is comparatively small compared to the gain observed in precision. This is because the false-positive rate is a ratio of the false positives to the vast amount of true negatives, whereas the precision is a ratio of the false positives to the rather small amount of true positives. Therefore the $AUC_{ROC}$ is more global, in that it is irritated by false positives relative to how many you could have drawn from the whole dataset. The $AUC_{PvR}$, however, is more local, in that it is irritated by false positives relative to how many positives you have predicted for the class at hand.

Your choice of metric, therefore, depends on what really irritates you more. If you are irritated by a minority class having many false positives relative to its class size, $AUC_{PvR}$ will highlight this more thoroughly (also, false positives of a majority class will have less impact). If you are, however, more irritated by the global amount of false positives, the $AUC_{ROC}$ would be more indicative of how well you are doing.
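The point metrics in the illustration follow directly from the stated counts (one million instances, 100 true cases for the class, 90 correct predictions for each classifier); a short sketch that reproduces them:

```python
# Reproduce the classifier A / classifier B comparison from the counts above.
N, true_pos = 1_000_000, 100

def point_metrics(n_predicted, n_correct):
    tp = n_correct
    fp = n_predicted - n_correct
    fn = true_pos - n_correct
    tn = N - tp - fp - fn
    # TPR (= recall), FPR, precision
    return tp / (tp + fn), fp / (fp + tn), tp / (tp + fp)

for name, n_pred in [("A", 100), ("B", 2000)]:
    tpr, fpr, prec = point_metrics(n_pred, 90)
    print(f"Classifier {name}: TPR={tpr:.2f} FPR={fpr:.5f} precision={prec:.3f}")
```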
510
What exactly are keys, queries, and values in attention mechanisms?
The key/value/query formulation of attention is from the paper Attention Is All You Need.

How should one understand the queries, keys, and values?

The key/value/query concept is analogous to retrieval systems. For example, when you search for videos on YouTube, the search engine will map your query (text in the search bar) against a set of keys (video title, description, etc.) associated with candidate videos in their database, then present you the best-matched videos (values).

The attention operation can be thought of as a retrieval process as well. As mentioned in the paper you referenced (Neural Machine Translation by Jointly Learning to Align and Translate), attention by definition is just a weighted average of values, $$c=\sum_{j}\alpha_jh_j$$ where $\sum_j \alpha_j=1$. If we restrict $\alpha$ to be a one-hot vector, this operation becomes the same as retrieving from a set of elements $h$ with index $\alpha$. With the restriction removed, the attention operation can be thought of as doing "proportional retrieval" according to the probability vector $\alpha$. It should be clear that $h$ in this context is the value.

The difference between the two papers lies in how the probability vector $\alpha$ is calculated. The first paper (Bahdanau et al. 2015) computes the score through a neural network, $$e_{ij}=a(s_i,h_j), \qquad \alpha_{i,j}=\frac{\exp(e_{ij})}{\sum_k\exp(e_{ik})}$$ where $h_j$ is from the encoder sequence and $s_i$ is from the decoder sequence. One problem with this approach is that, if the encoder sequence is of length $m$ and the decoder sequence is of length $n$, we have to go through the network $m \cdot n$ times to acquire all the attention scores $e_{ij}$.

A more efficient model would be to first project $s$ and $h$ onto a common space, then choose a similarity measure (e.g. dot product) as the attention score, like $$e_{ij}=f(s_i)g(h_j)^T$$ so we only have to compute $g(h_j)$ $m$ times and $f(s_i)$ $n$ times to get the projection vectors, and $e_{ij}$ can be computed efficiently by matrix multiplication. This is essentially the approach proposed by the second paper (Vaswani et al. 2017), where the two projection vectors are called query (for the decoder) and key (for the encoder), which aligns well with the concepts in retrieval systems. (There are later techniques to further reduce the computational complexity, for example Reformer and Linformer.)

How are the queries, keys, and values obtained?

The proposed multi-head attention alone doesn't say much about how the queries, keys, and values are obtained; they can come from different sources depending on the application scenario.
$$
\begin{align}
\text{MultiHead}(Q, K, V) & = \text{Concat}(\text{head}_1, \dots, \text{head}_h) W^{O} \\
\text{where } \text{head}_i & = \text{Attention}(QW_i^Q, KW_i^K, VW_i^V)
\end{align}
$$
where the projections are parameter matrices
$$
\begin{align}
W_i^Q & \in \mathbb{R}^{d_\text{model} \times d_k}, \\
W_i^K & \in \mathbb{R}^{d_\text{model} \times d_k}, \\
W_i^V & \in \mathbb{R}^{d_\text{model} \times d_v}, \\
W^O & \in \mathbb{R}^{hd_v \times d_{\text{model}}}.
\end{align}
$$

For unsupervised language model training like GPT, $Q$, $K$, $V$ are usually from the same source, so such an operation is also called self-attention. For the machine translation task in the second paper, it first applies self-attention separately to the source and target sequences, then on top of that it applies another attention where $Q$ is from the target sequence and $K$, $V$ are from the source sequence. For recommendation systems, $Q$ can be from the target items, and $K$, $V$ can be from the user profile and history.
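As a minimal numerical sketch of the above (NumPy, single head, random matrices standing in for learned projections; the dimensions are arbitrary): project decoder states to queries and encoder states to keys/values, score with a scaled dot product, apply softmax, and take the weighted average of the values.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_k, d_v = 16, 8, 8
m, n = 5, 3                              # encoder length m, decoder length n

H = rng.normal(size=(m, d_model))        # encoder states h_j
S = rng.normal(size=(n, d_model))        # decoder states s_i

W_q = rng.normal(size=(d_model, d_k))    # stand-ins for learned projections
W_k = rng.normal(size=(d_model, d_k))
W_v = rng.normal(size=(d_model, d_v))

Q, K, V = S @ W_q, H @ W_k, H @ W_v      # f(s_i), g(h_j), and the values

scores = Q @ K.T / np.sqrt(d_k)          # e_ij, scaled as in Vaswani et al. 2017
alpha = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax rows
context = alpha @ V                      # c_i = sum_j alpha_ij v_j

print(alpha.shape, context.shape)        # (n, m), (n, d_v)
```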
511
What exactly are keys, queries, and values in attention mechanisms?
I was also puzzled by the keys, queries, and values in the attention mechanisms for a while. After searching on the Web and digesting relevant information, I have a clear picture of how the keys, queries, and values work and why they work. Let's see how they work, followed by why they work.

Attention to replace the context vector

In a seq2seq model, we encode the input sequence into a context vector and then feed this context vector to the decoder to yield the expected output. However, if the input sequence becomes long, relying on only one context vector becomes less effective. We need all the information from the hidden states in the input sequence (encoder) for better decoding (the attention mechanism). One way to utilize the input hidden states is illustrated at https://towardsdatascience.com/attn-illustrated-attention-5ec4ad276ee3.

In other words, in this attention mechanism, the context vector is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key (this is a slightly modified sentence from Attention Is All You Need, https://arxiv.org/pdf/1706.03762.pdf). Here, the query is from the decoder hidden state, and the key and value are from the encoder hidden states (key and value are the same in this figure). The score is the compatibility between the query and key, which can be a dot product between the query and key (or another form of compatibility). The scores then go through the softmax function to yield a set of weights whose sum equals 1. Each weight multiplies its corresponding value to yield the context vector, which utilizes all the input hidden states.

Note that if we manually set the weight of the last input to 1 and all preceding weights to 0, we reduce the attention mechanism to the original seq2seq context-vector mechanism. That is, there is no attention to the earlier input encoder states.

Self-attention uses Q, K, V all from the input

Now, let's consider the self-attention mechanism, illustrated at https://towardsdatascience.com/illustrated-self-attention-2d627e33b20a. The difference from the above is that the queries, keys, and values are transformations of the corresponding input state vectors. The rest remains the same. Note that we could still use the original encoder state vectors as the queries, keys, and values. So, why do we need the transformation? The transformation is simply a matrix multiplication like this:

Query = I x W(Q)
Key = I x W(K)
Value = I x W(V)

where I is the input (encoder) state vector, and W(Q), W(K), and W(V) are the corresponding matrices to transform the I vector into the Query, Key, and Value vectors.

What are the benefits of this matrix multiplication (vector transformation)? The obvious reason is that if we do not transform the input vectors, the dot product for computing the weight for each input's value will tend to yield a maximum weight score for the individual input token itself. In other words, when we compute the n attention weights (j = 1, 2, ..., n) for the input token at position i, the weight at j = i tends to be larger than the other weights at j ≠ i. This may not be the desired case. For example, for a pronoun token, we need it to attend to its referent, not the pronoun token itself.

Another less obvious but important reason is that the transformation may yield better representations for Query, Key, and Value. Recall the effect of Singular Value Decomposition (SVD), as in the example at https://youtu.be/K38wVcdNuFc?t=10: by multiplying an input vector with a matrix V (from the SVD), we obtain a better representation for computing the compatibility between two vectors, if these two vectors are similar in the topic space. And these matrices for transformation can be learned in a neural network!

In short, by multiplying the input vector with a matrix, we get:

- an increased possibility for each input token to attend to other tokens in the input sequence, instead of only to itself
- possibly better (latent) representations of the input vector
- conversion of the input vector into a space with a desired dimension, say, from dimension 5 to 2, or from n to m, etc. (which is practically useful)

I hope this helps you understand the queries, keys, and values in the (self-)attention mechanism of deep neural networks.
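Here is a small self-attention sketch along the lines described above (random matrices in place of learned W(Q), W(K), W(V); the sizes are arbitrary). It also lets you compare the attention pattern with and without the transformation; with the raw inputs used directly as Q, K, V, each token's strongest raw score is typically with itself.

```python
import numpy as np

rng = np.random.default_rng(1)
seq_len, d_in, d_proj = 4, 6, 3
I = rng.normal(size=(seq_len, d_in))          # input (encoder) state vectors

def self_attention(X, W_q, W_k, W_v):
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores) / np.exp(scores).sum(-1, keepdims=True)   # softmax rows
    return weights, weights @ V

# Without transformation: use the inputs directly as Q, K, V (identity "projections")
w_plain, _ = self_attention(I, *(np.eye(d_in),) * 3)
print("argmax per token (no W):", w_plain.argmax(axis=1))   # often the diagonal, i.e. itself

# With projection matrices (random here, learned in a real network)
Ws = [rng.normal(size=(d_in, d_proj)) for _ in range(3)]
w_proj, out = self_attention(I, *Ws)
print("argmax per token (with W):", w_proj.argmax(axis=1), "output shape:", out.shape)
```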
What exactly are keys, queries, and values in attention mechanisms?
I was also puzzled by the keys, queries, and values in the attention mechanisms for a while. After searching on the Web and digesting relevant information, I have a clear picture about how the keys,
I was also puzzled by the keys, queries, and values in the attention mechanisms for a while. After searching the Web and digesting the relevant information, I now have a clear picture of how the keys, queries, and values work and why they work. Let's see how they work, followed by why they work.

Attention to replace the context vector
In a seq2seq model, we encode the input sequence into a context vector and then feed this context vector to the decoder to yield the expected output. However, if the input sequence becomes long, relying on only one context vector becomes less effective. We need all the information from the hidden states of the input sequence (encoder) for better decoding; this is the attention mechanism. One way to utilize the input hidden states is shown below: Image source: https://towardsdatascience.com/attn-illustrated-attention-5ec4ad276ee3
In other words, in this attention mechanism the context vector is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key (this is a slightly modified sentence from Attention Is All You Need, https://arxiv.org/pdf/1706.03762.pdf). Here, the query comes from the decoder hidden state, while the key and value come from the encoder hidden states (key and value are the same in this figure). The score is the compatibility between the query and the key, which can be a dot product between them (or some other form of compatibility). The scores then go through the softmax function to yield a set of weights whose sum equals 1. Each weight multiplies its corresponding value, and the results are summed to form the context vector, which thus utilizes all the input hidden states. Note that if we manually set the weight of the last input to 1 and all the earlier ones to 0, we reduce the attention mechanism to the original seq2seq context-vector mechanism: there is no attention to the earlier encoder states.

Self-attention uses Q, K, V all from the input
Now, let's consider the self-attention mechanism as shown in the figure below: Image source: https://towardsdatascience.com/illustrated-self-attention-2d627e33b20a
The difference from the figure above is that the queries, keys, and values are transformations of the corresponding input state vectors; the rest remains the same. Note that we could still use the original encoder state vectors directly as the queries, keys, and values. So why do we need the transformation? The transformation is simply a matrix multiplication:
Query = I x W(Q)
Key = I x W(K)
Value = I x W(V)
where I is the input (encoder) state vector, and W(Q), W(K), and W(V) are the matrices that transform I into the query, key, and value vectors.
What are the benefits of this matrix multiplication (vector transformation)? The obvious reason is that, if we do not transform the input vectors, the dot product used to compute the weight for each input's value will always give the maximum score to the input token itself. In other words, when we compute the n attention weights (j = 1, 2, ..., n) for the input token at position i, the weight at j = i is always larger than the weights at every other position j ≠ i. This may not be what we want: for a pronoun token, for example, we need it to attend to its referent, not to the pronoun token itself.
Another less obvious but important reason is that the transformation may yield better representations for the query, key, and value. Recall the effect of Singular Value Decomposition (SVD) as in the following figure: Image source: https://youtu.be/K38wVcdNuFc?t=10
By multiplying an input vector with a matrix V (from the SVD), we obtain a better representation for computing the compatibility between two vectors when these two vectors are similar in the topic space, as shown in the example in the figure. And these transformation matrices can be learned in a neural network!
In short, by multiplying the input vector with a matrix, we get:
- an increased possibility for each input token to attend to the other tokens in the input sequence, instead of only to itself;
- possibly better (latent) representations of the input vector;
- conversion of the input vector into a space with a desired dimension, say from dimension 5 to 2, or from n to m, etc. (which is practically useful).
I hope this helps you understand the queries, keys, and values in the (self-)attention mechanism of deep neural networks.
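To make the transformation concrete, here is a minimal NumPy sketch of single-head self-attention. It is not any particular paper's implementation: the input vectors, the dimensions (5 in, 2 out), and the W matrices are made up for illustration, whereas in a real model the W matrices would be learned.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)

# 4 input (encoder) state vectors of dimension 5, stacked as the rows of I
I = rng.normal(size=(4, 5))

# Stand-in projection matrices: they map the 5-dim inputs into a 2-dim Q/K/V space.
# In a trained model these would be learned by backpropagation.
W_Q = rng.normal(size=(5, 2))
W_K = rng.normal(size=(5, 2))
W_V = rng.normal(size=(5, 2))

Q = I @ W_Q          # queries, shape (4, 2)
K = I @ W_K          # keys,    shape (4, 2)
V = I @ W_V          # values,  shape (4, 2)

# Without these projections (Q = K = V = I), the diagonal of the score matrix
# would dominate, i.e. each token would mostly attend to itself.
scores  = Q @ K.T / np.sqrt(K.shape[-1])   # compatibility of every query with every key
weights = softmax(scores, axis=-1)         # each row sums to 1
context = weights @ V                      # weighted sum of values, shape (4, 2)

print(weights.round(2))
print(context.round(2))
```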
512
What exactly are keys, queries, and values in attention mechanisms?
First, understand Q and K
First, focus on the objective of the first MatMul in the scaled dot-product attention, which uses Q and K.

Intuition on what attention is
Take the sentence "jane visits africa". When your eyes see jane, your brain looks for the most related word in the rest of the sentence to understand what jane is about (query). Your brain focuses on, or attends to, the word visits (key). This process happens for each word in the sentence as your eyes progress through it.

First MatMul as an inquiry system using vector similarity
The first MatMul implements an inquiry (question-answer) system that imitates this brain function, using vector-similarity calculation. Watch CS480/680 Lecture 19: Attention and Transformer Networks by professor Pascal Poupart to understand further. Think of attention as essentially an approximation of a SELECT that you would do in a database. Think of the MatMul as an inquiry system that processes the inquiry: "For the word q that your eyes see in the given sentence, what is the most related word k in the sentence to understand what q is about?" The inquiry system provides the answer as a probability, for example:
q -> k (probability): jane -> visits (0.94); visits -> africa (0.86); africa -> visits (0.76)
Note that the softmax is used to normalize the scores into probabilities so that their sum becomes 1.0. There are multiple ways to calculate the similarity between vectors, such as cosine similarity; Transformer attention uses a simple dot product.

Where Q and K come from
The Transformer encoder training builds the weight parameter matrices $W_Q$ and $W_K$ so that Q and K form the inquiry system that answers "What is k for the word q?". The calculation goes as below, where X is a sequence of position-encoded word embedding vectors that represents an input sentence.
Pick a word vector (position-encoded) from the input sentence sequence and transform it into the vector space Q; this becomes the query. $Q = X \cdot W_{Q}^T$
Pick all the words in the sentence and transform them into the vector space K; they become the keys, and each of them is used as a key. $K = X \cdot W_K^T$
For each (q, k) pair, their relation strength is calculated using the dot product: $q\_to\_k\_similarity\_scores = matmul(Q, K^T)$
The weight matrices $W_Q$ and $W_K$ are trained via backpropagation during the Transformer training.
We first need to understand this part, which involves Q and K, before moving on to V.

Then, understand how the attention value is created using Q, K and V
Second MatMul
Self-attention then generates the embedding vector called the attention value as a bag of words in which each word contributes proportionally to its relationship strength to q. This occurs for each q in the sentence sequence. The resulting embedding vector encodes the relations from q to all the words in the sentence.

References
There are multiple concepts that help in understanding how the self-attention in the Transformer works, e.g. embeddings that group similar items in a vector space, and data retrieval that answers a query Q using the neural network and vector similarity.
Transformers Explained Visually (Part 2): How it works, step-by-step - gives an in-depth explanation of what the Transformer is doing.
CS480/680 Lecture 19: Attention and Transformer Networks - This is probably the best explanation I found that actually explains the attention mechanism from the database perspective.
Illustrated Guide to Transformers Neural Network: A step by step explanation
Distributed Representations of Words and Phrases and their Compositionality - It helps in understanding how word2vec groups/categorizes words in a vector space by pulling similar words together and pushing non-similar words away using negative sampling.
Generalized End-to-End Loss for Speaker Verification - A continuation for understanding embeddings that pull similar items together and push non-similar items away in a vector space.
Transformer model for language understanding - TensorFlow implementation of the Transformer
The Annotated Transformer - PyTorch implementation of the Transformer

Update
Getting meaning from text: self-attention step-by-step video has a visual representation of query, key, and value.

Update 2
Andrej Karpathy explained it by regarding a sentence as a graph, as in CS25 I Stanford Seminar - Transformers United 2023: Introduction to Transformers w/ Andrej Karpathy. Each token (position-encoded word embedding) in a sentence is a node having edges to other nodes that represent attentions. When we focus on one node, it is a query that communicates with the other nodes, which are keys.
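As a rough illustration of the "inquiry system" analogy earlier in this answer, here is a small hand-made sketch. The 2-D word vectors are invented purely for this example (real embeddings are learned), so the printed probabilities only mimic the shape of the q -> k table above, not its numbers.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical 2-D "embeddings", chosen by hand so that "jane" and "visits"
# point in similar directions; in a real model these vectors are learned.
emb = {
    "jane":   np.array([1.0, 0.2]),
    "visits": np.array([0.9, 0.4]),
    "africa": np.array([0.1, 1.0]),
}

q = emb["jane"]                            # the query: the word we are currently reading
keys = np.stack([emb[w] for w in emb])     # keys: every word in the sentence

scores = keys @ q                          # dot-product similarity of the query with each key
probs = softmax(scores)                    # soft "SELECT": a probability over the keys

for word, p in zip(emb, probs):
    print(f"{word:7s} {p:.2f}")
```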
513
What exactly are keys, queries, and values in attention mechanisms?
See Attention is all you need - masterclass; from 15:46 onwards Lukasz Kaiser explains what q, K and V are. So basically:
q = the vector representing a word
K and V = your memory, i.e. all the words that have been generated before. Note that K and V can be the same (but don't have to be).
So what you do with attention is take your current query (a word, in most cases) and look in your memory for similar keys. To come up with a distribution over the relevant words, the softmax function is then used.
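A tiny sketch of this "query against memory" view, with made-up random vectors; here K = V, as the answer notes is allowed:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
d = 8

memory = rng.normal(size=(5, d))            # K = V: vectors of the 5 words generated so far
q = rng.normal(size=d)                      # query: the vector of the current word

weights = softmax(memory @ q / np.sqrt(d))  # distribution over the remembered words
readout = weights @ memory                  # attention output: weighted mix of the memory
print(weights.round(2), readout.shape)
```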
514
What exactly are keys, queries, and values in attention mechanisms?
I'm going to try to provide an English-text example. The following is based solely on my intuitive understanding of the paper 'Attention is all you need'.
Say you have a sentence:
I like Natural Language Processing , a lot !
Assume that we already have input word vectors for all 9 tokens in the previous sentence, so 9 input word vectors.
Looking at the encoder from the paper 'Attention is all you need', the encoder needs to produce 9 output vectors, one for each word. This is done through the Scaled Dot-Product Attention mechanism, coupled with the Multi-Head Attention mechanism. I'm going to focus only on an intuitive understanding of the Scaled Dot-Product Attention mechanism, and I'm not going to go into the scaling mechanism.
Walking through an example for the first word 'I':
- The query is the input word vector for the token "I".
- The keys are the input word vectors for all the other tokens, and for the query token too, i.e. (semicolon-delimited in the list below): [like;Natural;Language;Processing;,;a;lot;!] + [I]
- The word vector of the query is then dot-producted with the word vectors of each of the keys, to get 9 scalars, a.k.a. "weights".
- These weights are then scaled, but this is not important for understanding the intuition.
- The weights then go through a softmax, which is a particular way of normalizing the 9 weights to values between 0 and 1. This becomes important for getting a "weighted average" of the value vectors, which we see in the next step.
- Finally, the initial 9 input word vectors, a.k.a. the values, are summed in a weighted average, using the normalized weights of the previous step. This final step results in a single output word vector representation of the word "I".
Now that we have the process for the word "I", rinse and repeat to get word vectors for the remaining 8 tokens. We now have 9 output word vectors, each put through the Scaled Dot-Product Attention mechanism.
You can then add a new attention layer/mechanism to the encoder by taking these 9 new outputs (a.k.a. "hidden vectors") and considering them as inputs to the new attention layer, which outputs 9 new word vectors of its own. And so on ad infinitum.
If I had to summarize this Scaled Dot-Product Attention layer, I would point out that each token (query) is free to take as much information as it needs from the other words (values) using the dot-product mechanism, and it can pay as much or as little attention to each of the other words as it likes by weighting them via the keys. The real power of the attention layer / Transformer comes from the fact that each token looks at all the other tokens at the same time (unlike an RNN / LSTM, which is restricted to looking at the tokens to the left).
The Multi-Head Attention mechanism, in my understanding, is this same process happening independently in parallel a given number of times (i.e. the number of heads), and then the result of each parallel process is combined and processed later on using math. I didn't fully understand the rationale for having the same thing done multiple times in parallel before combining, but I wonder if it has something to do with, as the authors might mention, the fact that each parallel process takes place in a separate linear-algebraic 'space', so combining the results from multiple 'spaces' might be a good and robust thing (though the math to prove that is way beyond my understanding...).
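Here is a rough numeric sketch of the walkthrough for the word "I" above, using random stand-in word vectors; as in the intuition, the scaling step is omitted and the values and keys are simply the raw input vectors.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

tokens = ["I", "like", "Natural", "Language", "Processing", ",", "a", "lot", "!"]
rng = np.random.default_rng(0)
d = 16
X = rng.normal(size=(9, d))     # one (made-up) input word vector per token

# Attention output for the first word "I":
q = X[0]                        # query = the vector for "I"
scores = X @ q                  # dot product of the query with every key (all 9 vectors)
weights = softmax(scores)       # normalize the 9 scores so they sum to 1
out_I = weights @ X             # weighted average of the values -> new vector for "I"

# Rinse and repeat for all 9 tokens at once:
all_weights = np.apply_along_axis(softmax, 1, X @ X.T)
outputs = all_weights @ X       # 9 output vectors, one per token

print(dict(zip(tokens, weights.round(2))))
print(outputs.shape)            # (9, 16)
```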
515
What exactly are keys, queries, and values in attention mechanisms?
Tensorflow and Keras just expanded their documentation for the Attention and AdditiveAttention layers. Here is a sneak peek from the docs:
The meaning of query, value and key depend on the application. In the case of text similarity, for example, query is the sequence embeddings of the first piece of text and value is the sequence embeddings of the second piece of text. key is usually the same tensor as value.
But for my own explanation: different attention layers try to accomplish the same task of mapping a function $f: \Bbb{R}^{T\times D} \mapsto \Bbb{R}^{T \times D}$, where T is the hidden sequence length and D is the feature vector size. For the case of global self-attention, which is the most common application, you first need sequence data in the shape of $B\times T \times D$, where $B$ is the batch size. On each forward propagation (particularly after an encoder such as a Bi-LSTM, GRU or LSTM layer with return_state and return_sequences=True in TF), the layer tries to map the selected hidden state (query) to the most similar other hidden states (keys). After repeating this for each hidden state and applying a softmax to the results, the weights are multiplied with the values (which here are the same tensor as the keys) to get the vector that indicates how much attention you should give to each hidden state.
I hope this helps anyone, as it took me days to figure it out.
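For concreteness, here is a minimal sketch of wiring the built-in Keras Attention layer after a Bi-LSTM encoder for global self-attention; the layer sizes and shapes are arbitrary choices for the example, not anything prescribed by the docs.

```python
import tensorflow as tf

T, D = 20, 64                              # sequence length and input feature size
inputs = tf.keras.Input(shape=(T, D))

# Encoder returning the full hidden-state sequence, shape (batch, T, 2 * 32)
hidden = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(32, return_sequences=True))(inputs)

# Global self-attention: query and value (and hence key) are the same hidden states
context = tf.keras.layers.Attention()([hidden, hidden])   # shape (batch, T, 64)

model = tf.keras.Model(inputs, context)
model.summary()
```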
516
What exactly are keys, queries, and values in attention mechanisms?
Queries are a set of vectors you want to calculate attention for.
Keys are a set of vectors you want to calculate attention against.
As a result of the dot-product multiplication you'll get a set of weights a (also vectors), showing how strongly each query attends to the Keys. You then multiply these by the Values to get the resulting set of vectors.
Now let's look at word processing from the article "Attention is all you need". There are two self-attending blocks (repeated N times each), separately for the inputs and the outputs, plus a cross-attending block transmitting knowledge from the inputs to the outputs.
Each self-attending block gets just one set of vectors (embeddings added to positional values). In this case you are calculating attention for the vectors against each other, so Q = K = V; you just need to calculate attention for each q in Q.
The cross-attending block transmits knowledge from the inputs to the outputs. In this case you get K = V from the inputs, and Q is received from the outputs. I think it's pretty logical: you have a database of knowledge derived from the inputs, and by asking Queries from the output you extract the required knowledge.
How attention works: the dot product between two vectors gets a bigger value when the vectors are better aligned. You then divide by some value (the scale) to avoid the problem of small gradients, and calculate the softmax (so that the sum of the weights equals 1). At this point you get a set of weights, summing to 1, that tell you for which vectors in the Keys your query is best aligned. All that's left is to multiply by the Values.
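A small NumPy sketch of the cross-attention case described above (Q from the output side, K = V from the input side), with made-up sizes; note that you get one resulting vector per query.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d = 8

enc = rng.normal(size=(6, d))   # input (encoder) side: 6 vectors -> K = V
dec = rng.normal(size=(3, d))   # output (decoder) side: 3 vectors -> Q

Q, K, V = dec, enc, enc

weights = softmax(Q @ K.T / np.sqrt(d), axis=-1)  # shape (3, 6): one weight row per query
out = weights @ V                                 # shape (3, 8): one extracted vector per query
print(weights.round(2))
```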
517
What exactly are keys, queries, and values in attention mechanisms?
Where are people getting the key, query, and value from these equations?
The paper you refer to does not use such terminology as "key", "query", or "value", so it is not clear what you mean here. There is no single definition of "attention" for neural networks, so my guess is that you confused two definitions from different papers.
In the paper, the attention module has weights $\alpha$ and the values to be weighted $h$, where the weights are derived from the recurrent neural network outputs, as described by the equations you quoted and in the figure from the paper reproduced below.
A similar thing happens in the Transformer model from the Attention is all you need paper by Vaswani et al., where they do use "keys", "queries", and "values" ($Q$, $K$, $V$). Vaswani et al. define the attention cell differently: $$ \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\Big(\frac{QK^T}{\sqrt{d_k}}\Big)V $$ They also use multi-head attention, where instead of a single value for each of $Q$, $K$, $V$, they provide multiple such values.
In the Transformer model, the $Q$, $K$, $V$ values can either come from the same inputs in the encoder (bottom part of the figure below), or from different sources in the decoder (upper right part of the figure). This part is crucial for using this model in translation tasks.
In both papers, as described, the values that come as input to the attention layers are calculated from the outputs of the preceding layers of the network. The two papers define different ways of obtaining those values, since they use different definitions of the attention layer.
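As a sanity check of the formula, here is a short NumPy version of the Vaswani et al. attention cell (single head only; the multi-head and masking parts are omitted, and the example matrices are random placeholders):

```python
import numpy as np

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, with the softmax applied row-wise."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))   # 3 queries of dimension d_k = 4
K = rng.normal(size=(6, 4))   # 6 keys of dimension 4
V = rng.normal(size=(6, 4))   # 6 values, one per key

print(attention(Q, K, V).shape)   # (3, 4): one output per query
```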
518
What exactly are keys, queries, and values in attention mechanisms?
This is a follow-up on what K and V are and why different parameters are used to represent K and V. The short answer is that technically K and V can be different, and there are cases where people use different values for K and V.
K and V can be different! Example offered!
What are K and V? Are they the same? The short answer is that they can be the same, but technically they do not need to be.
Briefly introducing K, V, Q (but I highly recommend the previous answers): In the Attention is all you need paper, this Q, K, V are first introduced. In that paper, generally (which means not self-attention), Q is the decoder embedding vector (the side we want), K is the encoder embedding vector (the side we are given), and V is also the encoder embedding vector. The attention mechanism is all about finding the relationship (weights) between Q and all those Ks, and then using these weights (freshly computed for each Q) to compute a new vector from the Vs (which should be related to the Ks). If this is self-attention, Q, V, K can even come from the same side -- e.g. computing the relationships among the features on the encoding side with each other. (Why does a token not simply show the strongest relation to itself? Projection.)
Case where they are the same: in the Attention is all you need paper, they are the same before projection. Also in this transformer code tutorial, V and K are the same before projection.
Case where K and V are not the same: in the paper End-to-End Object Detection, Appendix A.1 Single head (this part is an introduction to multi-head attention; you do not have to read the paper to figure out what this is about), they offer an intro to the multi-head attention used in the Attention is All You Need paper. There they add some positional info to K but not to V in equation (7), which makes K and V not the same.
Hope this helps.
Edit: As recommended by @alelom, I put my very shallow and informal understanding of K, Q, V here. For me, informally, the Key, Value and Query are all features/embeddings. Though it actually depends on the implementation, commonly the Query is a feature/embedding from the output side (e.g. the target language in translation), the Key is a feature/embedding from the input side (e.g. the source language in translation), and the Value, based on what I have read so far, should certainly relate to / be derived from the Key, since the weight in front of it is computed based on the relationship between K and Q; but it can be a feature that is based on K with some external information added, or with some information removed from the source (like a feature that is specific to the source but not helpful for the target)...
What I have read (very limited, and I cannot recall the complete list since it is already a year ago, but these are the ones I found helpful and impressive, and basically this is just a summary of what I referred to above...):
Neural Machine Translation By Jointly Learning To Align And Translate.
Attention Is All You Need.
A tensorflow tutorial of the transformer: https://www.tensorflow.org/text/tutorials/nmt_with_attention.
End-to-end object detection with Transformers, and its code: https://github.com/facebookresearch/detr.
lil'log: https://lilianweng.github.io/posts/2018-06-24-attention/
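A schematic sketch of the "K ≠ V" case described above, where positional information is added to the keys (and queries) but not to the values; the feature and positional vectors are random placeholders, not taken from any particular implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 5, 8

features = rng.normal(size=(T, d))     # the content of the input tokens
pos_enc  = rng.normal(size=(T, d))     # a made-up positional encoding

Q = features + pos_enc                 # queries carry positional information
K = features + pos_enc                 # keys carry positional information ...
V = features                           # ... but the values do not, so K != V here

scores = Q @ K.T / np.sqrt(d)
scores -= scores.max(axis=-1, keepdims=True)
weights = np.exp(scores)
weights /= weights.sum(axis=-1, keepdims=True)
out = weights @ V                      # shape (T, d)
print(out.shape)
```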
519
Why is Newton's method not widely used in machine learning?
Gradient descent optimizes a function using knowledge of its first derivative (the gradient). Newton's method, a root-finding algorithm applied to the gradient, also uses knowledge of the second derivative. That can be faster when the second derivative is known and easy to compute (the Newton-Raphson algorithm is used in logistic regression). However, the analytic expression for the second derivative is often complicated or intractable, requiring a lot of computation. Numerical methods for computing the second derivative also require a lot of computation -- if $N$ values are required to compute the first derivative, $N^2$ are required for the second derivative.
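A tiny one-dimensional illustration of the trade-off (the function and the gradient-descent step size are arbitrary choices for the example):

```python
import numpy as np

# Minimize f(x) = exp(x) - 2x, whose minimum is at x = ln(2) ~ 0.693
f1 = lambda x: np.exp(x) - 2.0    # first derivative
f2 = lambda x: np.exp(x)          # second derivative

x_newton, x_gd = 0.0, 0.0
for _ in range(5):
    x_newton -= f1(x_newton) / f2(x_newton)   # Newton: uses curvature, converges in a few steps
    x_gd     -= 0.1 * f1(x_gd)                # gradient descent: fixed step size, slower

print(x_newton, x_gd, np.log(2))
```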
520
Why is Newton's method not widely used in machine learning?
More people should be using Newton's method in machine learning*. I say this as someone with a background in numerical optimization, who has dabbled in machine learning over the past couple of years. The drawbacks in the answers here (and even in the literature) are not an issue if you use Newton's method correctly. Moreover, the drawbacks that do matter also slow down gradient descent by the same amount or more, but through less obvious mechanisms.
Using linesearch with the Wolfe conditions, or using trust regions, prevents convergence to saddle points. A proper gradient descent implementation should be doing this too. The paper referenced in Cam.Davidson.Pilon's answer points out problems with "Newton's method" in the presence of saddle points, but the fix they advocate is also a Newton method.
Using Newton's method does not require constructing the whole (dense) Hessian; you can apply the inverse of the Hessian to a vector with iterative methods that only use matrix-vector products (e.g., Krylov methods like conjugate gradient). See, for example, the CG-Steihaug trust region method.
You can compute Hessian matrix-vector products efficiently by solving two higher-order adjoint equations of the same form as the adjoint equation that is already used to compute the gradient (e.g., the work of two backpropagation steps in neural network training).
Ill conditioning slows the convergence of iterative linear solvers, but it also slows gradient descent equally or worse. Using Newton's method instead of gradient descent shifts the difficulty from the nonlinear optimization stage (where not much can be done to improve the situation) to the linear algebra stage (where we can attack it with the entire arsenal of numerical linear algebra preconditioning techniques).
Also, the computation shifts from "many many cheap steps" to "a few costly steps", opening up more opportunities for parallelism at the sub-step (linear algebra) level.
For background information about these concepts, I recommend the book "Numerical Optimization" by Nocedal and Wright.
*Of course, Newton's method will not help you with L1 or other similar compressed sensing/sparsity promoting penalty functions, since they lack the required smoothness.
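For instance, SciPy's Newton-CG implementation works matrix-free: you only supply a Hessian-vector product (hessp), never the dense Hessian. The example below uses the built-in Rosenbrock test function purely as an off-the-shelf illustration, not as a recipe for neural-network training.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess_prod

x0 = np.zeros(100)

# Newton-CG only needs Hessian-vector products: hessp(x, p) returns H(x) @ p.
res = minimize(rosen, x0, jac=rosen_der, hessp=rosen_hess_prod, method='Newton-CG')
print(res.success, res.fun)
```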
521
Why is Newton's method not widely used in machine learning?
A combination of two reasons: Newton's method is attracted to saddle points, and saddle points are common in machine learning, or in fact in any multivariable optimization. Look at the function $$f=x^2-y^2$$ If you apply the multivariate Newton method, you get the following. $$\mathbf{x}_{n+1} = \mathbf{x}_n - [\mathbf{H}f(\mathbf{x}_n)]^{-1} \nabla f(\mathbf{x}_n)$$ Let's get the Hessian: $$\mathbf{H}= \begin{bmatrix} \dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1\,\partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_1\,\partial x_n} \\[2.2ex] \dfrac{\partial^2 f}{\partial x_2\,\partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2} & \cdots & \dfrac{\partial^2 f}{\partial x_2\,\partial x_n} \\[2.2ex] \vdots & \vdots & \ddots & \vdots \\[2.2ex] \dfrac{\partial^2 f}{\partial x_n\,\partial x_1} & \dfrac{\partial^2 f}{\partial x_n\,\partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2} \end{bmatrix}.$$ $$\mathbf{H}= \begin{bmatrix} 2 & 0 \\[2.2ex] 0 & -2 \end{bmatrix}$$ Invert it: $$[\mathbf{H} f]^{-1}= \begin{bmatrix} 1/2 & 0 \\[2.2ex] 0 & -1/2 \end{bmatrix}$$ Get the gradient: $$\nabla f=\begin{bmatrix} 2x \\[2.2ex] -2y \end{bmatrix}$$ Get the final equation: $$\mathbf{\begin{bmatrix} x \\[2.2ex] y \end{bmatrix}}_{n+1} = \begin{bmatrix} x \\[2.2ex] y \end{bmatrix}_n -\begin{bmatrix} 1/2 & 0 \\[2.2ex] 0 & -1/2 \end{bmatrix} \begin{bmatrix} 2x_n \\[2.2ex] -2y_n \end{bmatrix}= \mathbf{\begin{bmatrix} x \\[2.2ex] y \end{bmatrix}}_n - \begin{bmatrix} x \\[2.2ex] y \end{bmatrix}_n = \begin{bmatrix} 0 \\[2.2ex] 0 \end{bmatrix} $$ So, you see how the Newton method led you to the saddle point at $x=0,y=0$. In contrast, the gradient descent method will not lead to the saddle point. The gradient is zero at the saddle point, but a tiny step away from it would pull the optimization further away: the gradient component in $y$ is $-2y$, so a descent step pushes $y$ further from zero rather than toward it.
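The algebra above can be checked numerically in a couple of lines (the starting point and the gradient-descent step size are chosen arbitrarily for the demonstration):

```python
import numpy as np

# f(x, y) = x^2 - y^2
grad = lambda p: np.array([2.0 * p[0], -2.0 * p[1]])
H_inv = np.array([[0.5, 0.0],
                  [0.0, -0.5]])          # inverse Hessian of f

p = np.array([1.0, 0.5])                 # same starting point for both methods

p_newton = p - H_inv @ grad(p)           # one Newton step -> exactly the saddle (0, 0)
p_gd = p - 0.1 * grad(p)                 # one gradient step -> x shrinks, y grows away

print(p_newton)   # [0. 0.]
print(p_gd)       # [0.8 0.6]
```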
522
Why is Newton's method not widely used in machine learning?
I recently learned this myself - the problem is the proliferation of saddle points in high-dimensional spaces, to which Newton methods want to converge. See this article: Identifying and attacking the saddle point problem in high-dimensional non-convex optimization.
Indeed the ratio of the number of saddle points to local minima increases exponentially with the dimensionality N.
While gradient descent dynamics are repelled away from a saddle point to lower error by following directions of negative curvature, ...the Newton method does not treat saddle points appropriately; as argued below, saddle-points instead become attractive under the Newton dynamics.
523
Why is Newton's method not widely used in machine learning?
You asked two questions: Why don't more people use Newton's method, and why do so many people use stochastic gradient descent? These questions have different answers, because there are many algorithms that lessen the computational burden of Newton's method but often work better than SGD.
First: Newton's method takes a long time per iteration and is memory-intensive. As jwimberley points out, Newton's method requires computing the second derivative, $H$, which is $O(N^2)$, where $N$ is the number of features, while computing the gradient, $g$, is only $O(N)$. But the next step is $H^{-1} g$, which is $O(N^3)$ to compute. So while computing the Hessian is expensive, inverting it or solving least squares is often even worse. (If you have sparse features, the asymptotics look better, but other methods also perform better, so sparsity doesn't make Newton relatively more appealing.)
Second, many methods, not just gradient descent, are used more often than Newton; they are often knockoffs of Newton's method, in the sense that they approximate a Newton step at a lower computational cost per step but take more iterations to converge. Some examples:
- Because of the expense of inverting the Hessian, "quasi-Newton" methods like BFGS approximate the inverse Hessian, $H^{-1}$, by looking at how the gradient has changed over the last few steps.
- BFGS is still very memory-intensive in high-dimensional settings because it requires storing the entire $O(N^2)$ approximate inverse Hessian. Limited-memory BFGS (L-BFGS) calculates the next step direction as the approximate inverse Hessian times the gradient, but it only requires storing the last several gradient updates; it doesn't explicitly store the approximate inverse Hessian.
- When you don't want to deal with approximating second derivatives at all, gradient descent is appealing because it uses only first-order information. Gradient descent implicitly approximates the inverse Hessian as the learning rate times the identity matrix. I, personally, rarely use gradient descent: L-BFGS is just as easy to implement, since it only requires specifying the objective function and the gradient; its inverse Hessian approximation is better than that of gradient descent; and gradient descent requires tuning the learning rate.
- Sometimes you have a very large number of observations (data points), but you could learn almost as well from a smaller number of observations. When that is the case, you can use "batch methods", like stochastic gradient descent, that cycle through subsets of the observations.
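For example, with SciPy you can run L-BFGS by supplying only the objective and its gradient (shown here on the built-in Rosenbrock test function, purely as an illustration); the approximate inverse Hessian is built internally from the last few gradient updates.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.zeros(50)

# L-BFGS: only the objective and gradient are supplied; no Hessian is formed or stored.
res = minimize(rosen, x0, jac=rosen_der, method='L-BFGS-B')
print(res.nit, res.fun)
```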
524
Why is Newton's method not widely used in machine learning?
Gradient descent direction's cheaper to calculate, and performing a line search in that direction is a more reliable, steady source of progress toward an optimum. In short, gradient descent's relatively reliable. Newton's method is relatively expensive in that you need to calculate the Hessian on the first iteration. Then, on each subsequent iteration, you can either fully recalculate the Hessian (as in Newton's method) or merely "update" the prior iteration's Hessian (as in quasi-Newton methods), which is cheaper but less robust. In the extreme case of a very well-behaved function, especially a perfectly quadratic one, Newton's method is the clear winner: if the function is perfectly quadratic, Newton's method converges in a single iteration. In the opposite extreme case of a very poorly behaved function, gradient descent will tend to win out. It'll pick a search direction, search down that direction, and ultimately take a small-but-productive step. By contrast, Newton's method will tend to fail in these cases, especially if you try to use the quasi-Newton approximations. In between gradient descent and Newton's method, there are methods like the Levenberg–Marquardt algorithm (LMA), though I've seen the names confused a bit. The gist is to use a more gradient-descent-informed search when things are chaotic and confusing, then switch to a more Newton-informed search when things are getting more linear and reliable.
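To make the in-between idea concrete, here is a minimal illustrative sketch (my own, not from the original answer) of a Levenberg–Marquardt-style update for a least-squares problem: a damping parameter blends between a gradient-like step and a Gauss-Newton step.

```python
import numpy as np

def lm_step(J, r, lam):
    """One Levenberg-Marquardt direction for residuals r with Jacobian J.

    Large lam  -> behaves like (scaled) gradient descent.
    lam -> 0   -> behaves like a Gauss-Newton step.
    """
    A = J.T @ J + lam * np.eye(J.shape[1])   # damped normal equations
    g = J.T @ r                              # gradient of 0.5 * ||r||^2
    return -np.linalg.solve(A, g)

# Example: linear residuals r(x) = J x - b; with a tiny lam, one step from x = 0
# lands (nearly) on the least-squares solution.
J = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 2.0, 2.5])
x = np.zeros(2)
x = x + lm_step(J, J @ x - b, lam=1e-8)
print(x)
```

In practice the damping parameter is adapted per iteration, increased when a step fails to reduce the objective and decreased when it succeeds, which is exactly the switch between regimes described above.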
525
Why is Newton's method not widely used in machine learning?
For large dimensions, the Hessian is typically expensive to store, and solving $Hd = g$ for a direction can be costly. It is also more difficult to parallelise. Newton's method works well when close to a solution, or when the Hessian is slowly varying, but it needs some tricks to deal with lack of convergence and lack of definiteness. Often an improvement is sought rather than an exact solution, in which case the extra cost of Newton or Newton-like methods is not justified. There are various ways of ameliorating the above, such as variable metric or trust-region methods. As a side note, in many problems a key issue is scaling, and the Hessian provides excellent scaling information, albeit at a cost. If one can approximate the Hessian, it can often improve performance considerably. To some extent, Newton's method provides the 'best' scaling in that it is affine invariant.
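As a small numerical illustration of the scaling remark (this check is my own addition, assuming a simple quadratic objective and NumPy), the Newton direction is unchanged by an affine change of variables, whereas the raw gradient direction is not:

```python
import numpy as np

# Quadratic f(x) = 0.5 * x^T A x - b^T x, so the Hessian is A and the gradient is A x - b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -2.0])
x = np.array([0.5, 0.5])

g = A @ x - b
newton_dir = -np.linalg.solve(A, g)          # solves H d = -g

# Change of variables x = S z with a badly scaled S.
S = np.diag([100.0, 0.01])
A_z, b_z = S.T @ A @ S, S.T @ b
z = np.linalg.solve(S, x)
g_z = A_z @ z - b_z
newton_dir_z = -np.linalg.solve(A_z, g_z)

print(np.allclose(S @ newton_dir_z, newton_dir))   # True: same step in x-space
print(np.allclose(S @ (-g_z), -g))                 # False: gradient step changes
```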
526
Why is Newton's method not widely used in machine learning?
There are many difficulties regarding the use of Newton's method for SGD, especially: it requires knowing the local Hessian matrix - how do you estimate the Hessian, e.g. from noisy gradients, with sufficient precision at a reasonable cost? The full Hessian is too costly - we rather need some restriction of it, e.g. to a linear subspace (like its top eigenspace). It needs the inverted Hessian $H^{-1}$, which is costly and very unstable under noisy estimation - it can be statistically blurred around $\lambda=0$ eigenvalues, which invert to infinity. Newton's method is directly attracted to the nearest point with zero gradient ... which is usually a saddle here. How do you avoid this saddle attraction, e.g. by repelling saddles instead? For example, saddle-free Newton reverses negative-curvature directions, but it requires controlling the signs of the eigenvalues. It would be good to do this online - instead of performing a lot of computation at a single point, try to split it into many small steps to exploit local information about the landscape. We can go from 1st-order to 2nd-order in small steps, e.g. by adding updates of just 3 averages to the momentum method we can simultaneously MSE-fit a parabola along its direction for a smarter choice of step size. ps. I have prepared an SGD overview lecture focused on 2nd-order methods: slides: https://www.dropbox.com/s/54v8cwqyp7uvddk/SGD.pdf, video: https://youtu.be/ZSnYtPINcug
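To make the saddle-repelling point concrete, here is a small illustrative sketch (my own, not taken from the lecture linked above) of the saddle-free Newton direction, which replaces $H^{-1}$ by $|H|^{-1}$ so that negative-curvature directions are flipped rather than followed into the saddle.

```python
import numpy as np

def saddle_free_newton_direction(H, g, eps=1e-6):
    """Return -|H|^{-1} g, where |H| uses the absolute values of H's eigenvalues."""
    eigvals, eigvecs = np.linalg.eigh(H)                 # H assumed symmetric
    abs_inv = 1.0 / np.maximum(np.abs(eigvals), eps)     # damp near-zero eigenvalues
    return -eigvecs @ (abs_inv * (eigvecs.T @ g))

# At a saddle-shaped quadratic, plain Newton steps toward the saddle point along
# the negative-curvature axis; the saddle-free direction still decreases f locally.
H = np.diag([2.0, -1.0])
g = np.array([0.4, 0.3])
print(-np.linalg.solve(H, g))                 # plain Newton direction
print(saddle_free_newton_direction(H, g))     # negative-curvature component flipped
```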
527
Why is Newton's method not widely used in machine learning?
Just some comments: First-order methods have very good theoretical guarantees about convergence and avoidance of saddle points; see Backtracking GD and its modifications. Backtracking GD can be implemented in DNNs, with very good performance. Backtracking GD allows large learning rates, which can be on the order of the inverse of the gradient's size when the gradient is small. This is very handy when you converge to a degenerate critical point. References: https://github.com/hank-nguyen/MBT-optimizer https://arxiv.org/abs/2007.03618 (Here you will also find a heuristic argument that backtracking GD has the correct units, in the sense of Zeiler in his Adadelta paper.) Concerning Newton's method: with a correct modification, you can avoid saddle points, as several previous comments pointed out. Here is a rigorous proof, where we also give a simple way to proceed if the Hessian is singular: https://arxiv.org/abs/2006.01512 GitHub link for the code: https://github.com/hphuongdhsp/Q-Newton-method Remaining issues: cost of implementation and no guarantee of convergence. Addendum: The paper by Caplan mentioned by LMB: I took a quick look. I don't think that paper presented any algorithm which computes the Hessian in O(N). It only says that you can compute the Hessian with only N "function evaluations" - I don't know yet what that precisely means - and the final complexity is still O(N^2). It also did some experiments and says that the usual Newton's method works better than (L-)BFGS for those experiments. (Related to the previous sentence.) I should add this as comments to JPJ and elizabeth santorella but cannot (not enough points), so I write it here: since you two mentioned BFGS and L-BFGS, can you give a link to source code for these for DNNs (for example for the datasets MNIST, CIFAR10, CIFAR100) with reported experimental results, so people can compare with first-order methods (variants of GD, including backtracking GD), to get an impression of how good they are at large scale? Tuyen Truong, UiO
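For readers who want to see what a backtracking step looks like, here is a minimal sketch, assuming a generic smooth objective; the function names and constants are illustrative and are not taken from the MBT-optimizer code linked above.

```python
import numpy as np

def backtracking_gd_step(f, grad_f, x, t0=1.0, beta=0.5, c=1e-4):
    """Shrink the step size t until the Armijo sufficient-decrease condition holds."""
    g = grad_f(x)
    t = t0
    while f(x - t * g) > f(x) - c * t * (g @ g):
        t *= beta
    return x - t * g

# Example: an ill-conditioned quadratic; the objective decreases monotonically
# toward its minimum at the origin, with the step size chosen automatically.
f = lambda x: 0.5 * (100.0 * x[0] ** 2 + x[1] ** 2)
grad_f = lambda x: np.array([100.0 * x[0], x[1]])
x = np.array([1.0, 1.0])
for _ in range(100):
    x = backtracking_gd_step(f, grad_f, x)
print(f(x))
```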
528
What is the best introductory Bayesian statistics textbook?
John Kruschke released a book in mid 2011 called Doing Bayesian Data Analysis: A Tutorial with R and BUGS. (A second edition was released in Nov 2014: Doing Bayesian Data Analysis, Second Edition: A Tutorial with R, JAGS, and Stan.) It is truly introductory. If you want to walk from frequentist stats into Bayes though, especially with multilevel modelling, I recommend Gelman and Hill. John Kruschke also has a website for the book that has all the examples in the book in BUGS and JAGS. His blog on Bayesian statistics also links in with the book.
529
What is the best introductory Bayesian statistics textbook?
My favorite is "Bayesian Data Analysis" by Gelman, et al. (The pdf version is legally free since April 2020!)
530
What is the best introductory Bayesian statistics textbook?
Statistical Rethinking was released just a few weeks ago, so I am still reading it, but I think it is a very nice and fresh addition to the truly introductory books about Bayesian statistics. The author uses an approach similar to the one used by John Kruschke in his puppy books: very verbose, detailed explanations, nice pedagogical examples, and a computational rather than mathematical approach. YouTube lectures and other material are also available from here. The code has been ported to Python/PyMC3.
531
What is the best introductory Bayesian statistics textbook?
Sivia and Skilling, Data Analysis: A Bayesian Tutorial (2nd ed., 2006, 246 pp., ISBN 0198568320). From the book description: "Statistics lectures have been a source of much bewilderment and frustration for generations of students. This book attempts to remedy the situation by expounding a logical and unified approach to the whole subject of data analysis. This text is intended as a tutorial guide for senior undergraduates and research students in science and engineering ..." I don't know the other recommendations though.
532
What is the best introductory Bayesian statistics textbook?
Another vote for Gelman et al., but a close second for me -- being of the learn-by-doing persuasion -- is Jim Albert's "Bayesian Computation with R".
533
What is the best introductory Bayesian statistics textbook?
For an introduction, I would recommend Probabilistic Programming & Bayesian Methods for Hackers by Cam Davidson-Pilon, freely available online. From its description: An intro to Bayesian methods and probabilistic programming from a computation/understanding-first, mathematics-second point of view. It's highly visual, cuts straight to the value and backfills gritty details later, has lots of examples, has interactive code (in IPython Notebook).
534
What is the best introductory Bayesian statistics textbook?
I thoroughly recommend the entertaining polemic "Probability Theory: The Logic of Science" by E.T. Jaynes. It is an introductory text in the sense that it requires (and in fact prefers) no previous knowledge of statistics, but it does eventually employ fairly sophisticated mathematics. Compared to most of the other answers provided, this book is not nearly as practical or easy to digest; rather, it provides the philosophical bedrock for why you would want to employ Bayesian methods, and why not to use frequentist approaches. It is introductory in a historical and philosophical way, but not a pedagogical one.
535
What is the best introductory Bayesian statistics textbook?
Its focus isn't strictly on Bayesian statistics, so it lacks some methodology, but David MacKay's Information Theory, Inference, and Learning Algorithms made me intuitively grasp Bayesian statistics better than others - most do the how quite nicely, but I felt MacKay explained why better.
536
What is the best introductory Bayesian statistics textbook?
I am an electrical engineer and not a statistician. I spent a lot of time going through Gelman, but I don't think one can refer to Gelman as introductory at all. My Bayesian-guru professor from Carnegie Mellon agrees with me on this. If you have minimal knowledge of statistics, R, and BUGS (as the easy way to DO something with Bayesian stats), Doing Bayesian Data Analysis: A Tutorial with R and BUGS is an amazing start. You can compare all the offered books easily by their book covers! 5 years later update: I want to add that perhaps one other fast way of learning (about 40 minutes) is to go through the documentation of a GUI-based Bayesian network tool such as Netica. It starts with the basics, walks you through the steps of building a net based on a situation and data, and shows how to run your own questions back and forth to "get it!".
537
What is the best introductory Bayesian statistics textbook?
The Gelman books are all excellent but not necessarily introductory in that they assume that you know some statistics already. Therefore they are an introduction to the Bayesian way of doing statistics rather than to statistics in general. I would still give them the thumbs up, however. As an introductory statistics/econometrics book which takes a Bayesian perspective, I would recommend Gary Koop's Bayesian Econometrics.
538
What is the best introductory Bayesian statistics textbook?
I don't know why nobody has mentioned the very introductory book Think Bayes: there's a free PDF version of the book. It offers enough material for anyone who has very little experience with Bayesian methods. It introduces the concepts of the prior distribution, posterior distribution, beta distribution, etc. Give it a go, it's free. http://greenteapress.com/thinkbayes/
539
What is the best introductory Bayesian statistics textbook?
"Bayesian Core: A Practical Approach to Computational Bayesian Statistics" by Marin and Robert, Springer-Verlag (2007). "Why?": the authors explain the why of the Bayesian choice, and the how, very well. It's a practical book, but written by one of the finest Bayesian thinkers alive. It's not exhaustive; other books have that objective. It picks up a few topics that are relevant, useful, and illuminating for the foundations. About "choice": if you really want to delve into Bayesian foundations, Xi'an's "The Bayesian Choice" is clear, deep, and essential.
540
What is the best introductory Bayesian statistics textbook?
My favourite first undergraduate text for bayesian statistics is by Bolstad, Introduction to Bayesian Statistics. If you're looking for something graduate level, this will be too elementary, but for someone who is new to statistics this is ideal.
541
What is the best introductory Bayesian statistics textbook?
I have read some parts of A First Course in Bayesian Statistical Methods by Peter Hoff, and I found it easy to follow. (Example R-code is provided throughout the text)
542
What is the best introductory Bayesian statistics textbook?
Coming from a non-statistical background, I found Introduction to Applied Bayesian Statistics and Estimation for Social Scientists quite informative and easy to follow.
543
What is the best introductory Bayesian statistics textbook?
If you're looking for an elementary text, i.e. one that doesn't have a calculus prerequisite, there's Don Berry's Statistics: A Bayesian Perspective.
544
What is the best introductory Bayesian statistics textbook?
I found an excellent introduction in Gelman and Hill (2007) Data Analysis Using Regression and Multilevel/Hierarchical Models. (Other comments mention it, but it deserves to get upvoted on its own.)
545
What is the best introductory Bayesian statistics textbook?
Take a look at "The Bayesian Choice". It has the full package: foundations, applications and computation. Clearly written.
546
What is the best introductory Bayesian statistics textbook?
I've at least glanced at most of the books on this list and none are as good as the new Bayesian Ideas and Data Analysis, in my opinion. Edit: It is easy to begin doing Bayesian analysis immediately while reading this book. Not just modeling the mean from a Normal distribution with known variance, but actual data analysis after the first couple of chapters. All code examples and data are on the book's website. It covers a decent amount of theory, but the focus is applications. Lots of examples over a wide range of models. Nice chapter on Bayesian nonparametrics. WinBUGS, R, and SAS examples. I prefer it over Doing Bayesian Data Analysis (I have both). Most of the books on here (Gelman, Robert, ...) are not introductory in my opinion, and unless you have someone to talk to you will probably be left with more questions than answers. Albert's book does not cover enough material to feel comfortable analyzing data different from what is presented in the book (again, my opinion).
547
What is the best introductory Bayesian statistics textbook?
I quite like Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference by Gamerman and Lopes.
548
What is the best introductory Bayesian statistics textbook?
If I had to choose a single text for a beginner, it would be the Sivia DS and Skilling J (2006) book (see below). Of all the books listed below it strives hardest to give an intuitive grasp of the essential ideas, but it still requires some mathematical sophistication from page 1. Below is a list of Further Readings from my book, with comments on each publication.
Bernardo, JM and Smith, A (2000). Bayesian Theory. A rigorous account of Bayesian methods, with many real-world examples.
Bishop, C (2006). Pattern Recognition and Machine Learning. As the title suggests, this is mainly about machine learning, but it provides a lucid and comprehensive account of Bayesian methods.
Cowan, G (1998). Statistical Data Analysis. An excellent non-Bayesian introduction to statistical analysis.
Dienes, Z (2008). Understanding Psychology as a Science: An Introduction to Scientific and Statistical Inference. Provides tutorial material on Bayes' rule and a lucid analysis of the distinction between Bayesian and frequentist statistics.
Gelman, A, Carlin, J, Stern, H, and Rubin, D (2003). Bayesian Data Analysis. A rigorous and comprehensive account of Bayesian analysis, with many real-world examples.
Jaynes, E and Bretthorst, G (2003). Probability Theory: The Logic of Science. The modern classic of Bayesian analysis. It is comprehensive and wise. Its discursive style makes it long (600 pages) but never dull, and it is packed full of insights.
Khan, S (2012). Introduction to Bayes' Theorem. Salman Khan's online mathematics videos make a good introduction to various topics, including Bayes' rule.
Lee, PM (2004). Bayesian Statistics: An Introduction. A rigorous and comprehensive text with a strident Bayesian style.
MacKay, DJC (2003). Information Theory, Inference, and Learning Algorithms. The modern classic on information theory. A very readable text that roams far and wide over many topics, almost all of which make use of Bayes' rule.
Migon, HS and Gamerman, D (1999). Statistical Inference: An Integrated Approach. A straightforward (and clearly laid out) account of inference, which compares Bayesian and non-Bayesian approaches. Despite being fairly advanced, the writing style is tutorial in nature.
Pierce, JR (1980, 2nd edition). An Introduction to Information Theory: Symbols, Signals and Noise. Pierce writes with an informal, tutorial style, but does not flinch from presenting the fundamental theorems of information theory.
Reza, FM (1961). An Introduction to Information Theory. A more comprehensive and mathematically rigorous book than the Pierce book above, and should ideally be read only after first reading Pierce's more informal text.
Sivia, DS and Skilling, J (2006). Data Analysis: A Bayesian Tutorial. This is an excellent tutorial-style introduction to Bayesian methods.
Spiegelhalter, D and Rice, K (2009). Bayesian statistics. Scholarpedia, 4(8):5230. http://www.scholarpedia.org/article/Bayesian_statistics A reliable and comprehensive summary of the current status of Bayesian statistics.
And here is my book, published June 2013: Bayes' Rule: A Tutorial Introduction to Bayesian Analysis, Dr James V Stone, ISBN 978-0956372840. Chapter 1 can be downloaded from: http://jim-stone.staff.shef.ac.uk/BookBayes2012/BayesRuleBookMain.html Description: Discovered by an 18th century mathematician and preacher, Bayes' rule is a cornerstone of modern probability theory. In this richly illustrated book, a range of accessible examples are used to show how Bayes' rule is actually a natural consequence of commonsense reasoning. Bayes' rule is derived using intuitive graphical representations of probability, and Bayesian analysis is applied to parameter estimation using the MatLab programs provided. The tutorial style of writing, combined with a comprehensive glossary, makes this an ideal primer for the novice who wishes to become familiar with the basic principles of Bayesian analysis.
549
What is the best introductory Bayesian statistics textbook?
For complete beginners, try William Briggs Breaking the Law of Averages: Real-Life Probability and Statistics in Plain English
550
What is the best introductory Bayesian statistics textbook?
I simply must include MCMC in Practice. It provides an excellent introduction to MCMC, perhaps not as general as other books mentioned, but excellent for gaining insight and intuition. I would recommend reading it after (or in parallel with) Bayesian Computation with R.
551
What is the best introductory Bayesian statistics textbook?
If you happen to come from the physical sciences (physics/astronomy), I would recommend Bayesian Logical Data Analysis for the Physical Sciences: A Comparative Approach with Mathematica® Support by Gregory (2006). Although the "with Mathematica® Support" part of the title is there only for commercial reasons (the use of Mathematica code is very poor), the good thing about this book is that it is truly an introduction to the subject of probability and statistics. It even has some chapters on frequentist statistics. However, once you have given it a shot, go for the book by Gelman et al. that a lot of people have recommended. Most of the material in Gregory's book is treated lightly (if it weren't, it wouldn't be an introduction): Gelman's book has been a true re-awakening after Gregory's for me.
552
What is the best introductory Bayesian statistics textbook?
I read: Gelman et al (2013). Bayesian Data Analysis. CRC Press LLC. 3rd ed. Hoff, Peter D (2009). A First Course in Bayesian Statistical Methods. Springer Texts in Statistics. Kruschke (2011). Doing Bayesian Data Analysis: A Tutorial with R and BUGS. Academic Press / Elsevier. I think the best one to start with is Kruschke's book. It's perfect for a first approach to Bayesian thinking: concepts are explained very clearly, there is not too much mathematics, and there are lots of nice examples! Gelman et al. is a great book, but it is more advanced and I suggest reading it after Kruschke's. Conversely, I did not like Hoff's book: it is an introductory book, but the concepts (and Bayesian thinking) are not explained in a clear way. I suggest passing it over.
553
What is the best introductory Bayesian statistics textbook?
Not strictly Bayesian statistics as such, but I can strongly recommend "A First Course in Machine Learning" by Rogers and Girolami, which is essentially an introduction to Bayesian approaches to machine learning. It's very well structured and clear, and aimed at students without a strong mathematical background, which makes it a pretty good first introduction to Bayesian ideas. There is also MATLAB/Octave code, which is a nice feature.
What is the best introductory Bayesian statistics textbook?
Not strictly Bayesian Statistics as such, but I can strongly recommend "A First Course on Machine Learning" by Rogers and Girolami, which is essentially an introduction to Bayesian approaches to machi
What is the best introductory Bayesian statistics textbook? Not strictly Bayesian Statistics as such, but I can strongly recommend "A First Course on Machine Learning" by Rogers and Girolami, which is essentially an introduction to Bayesian approaches to machine learning. Its very well structured and clear and aimed at students without a strong mathematical background. This means it is a pretty good first introduction to Bayesian ideas. There is also MATLAB/OCTAVE code which is a nice feature.
What is the best introductory Bayesian statistics textbook? Not strictly Bayesian Statistics as such, but I can strongly recommend "A First Course on Machine Learning" by Rogers and Girolami, which is essentially an introduction to Bayesian approaches to machi
554
What is the best introductory Bayesian statistics textbook?
Bayesian Statistics for Social Scientists. Phillips, Lawrence D. (1973), Thomas Crowell & Co. It's very clear, very accessible, assumes no statistics knowledge, and, unlike Bolstad, which I found dry, has some personality.
What is the best introductory Bayesian statistics textbook?
Bayesian Statistics for Social Scientists. Phillips, Lawrence D. (1973), Thomas Crowell & Co. It's very clear, very accessible, assumes no statistics knowledge, and, unlike Bolstad which I found dry,
What is the best introductory Bayesian statistics textbook? Bayesian Statistics for Social Scientists. Phillips, Lawrence D. (1973), Thomas Crowell & Co. It's very clear, very accessible, assumes no statistics knowledge, and, unlike Bolstad which I found dry, has some personality.
What is the best introductory Bayesian statistics textbook? Bayesian Statistics for Social Scientists. Phillips, Lawrence D. (1973), Thomas Crowell & Co. It's very clear, very accessible, assumes no statistics knowledge, and, unlike Bolstad which I found dry,
555
What is the best introductory Bayesian statistics textbook?
This book suggests it is aimed at the entry undergraduate level: Biostatistics: A Bayesian Introduction, by George G. Woodsworth, published by John Wiley & Sons.
What is the best introductory Bayesian statistics textbook?
This book suggests it is aimed at entry level undergraduate level Biostatistics: A Bayesian Introduction. By George G Woodsworth. Published by John Wiley & Sons
What is the best introductory Bayesian statistics textbook? This book suggests it is aimed at entry level undergraduate level Biostatistics: A Bayesian Introduction. By George G Woodsworth. Published by John Wiley & Sons
What is the best introductory Bayesian statistics textbook? This book suggests it is aimed at entry level undergraduate level Biostatistics: A Bayesian Introduction. By George G Woodsworth. Published by John Wiley & Sons
556
What is the best introductory Bayesian statistics textbook?
Computational Bayesian Statistics by Turkman et al. is a high-quality and all-inclusive introduction to Bayesian statistics and its computational aspects. It has the right mix of theory, model assessment and selection, and a dedicated chapter on software for Bayesian statistics (with code examples). It should serve nicely as a practical textbook for a first course in Bayesian methods.
What is the best introductory Bayesian statistics textbook?
Computational Bayesian Statistics by Turkman et. al. is a high-quality and all-inclusive introduction to Bayesian statistics and its computational aspects. It has the right mix of theory, model assess
What is the best introductory Bayesian statistics textbook? Computational Bayesian Statistics by Turkman et. al. is a high-quality and all-inclusive introduction to Bayesian statistics and its computational aspects. It has the right mix of theory, model assessment and selection, and a dedicated chapter on software for Bayesian statistics (with code examples). It should serve nicely as a practical textbook for a first course in Bayesian methods.
What is the best introductory Bayesian statistics textbook? Computational Bayesian Statistics by Turkman et. al. is a high-quality and all-inclusive introduction to Bayesian statistics and its computational aspects. It has the right mix of theory, model assess
557
What is the best introductory Bayesian statistics textbook?
A good book that runs from the basics to advanced material, and which you can download, is Andrew Gelman, John Carlin, Hal Stern, David Dunson, Aki Vehtari, and Donald Rubin, Bayesian Data Analysis, http://www.stat.columbia.edu/~gelman/book/ You can also download the first two chapters of Richard McElreath, Statistical Rethinking: A Bayesian Course with Examples in R and Stan, https://xcelab.net/rm/statistical-rethinking/
What is the best introductory Bayesian statistics textbook?
A good book from the basics to advanced, and which you can download, is Andrew Gelman, John Carlin, Hal Stern, David Dunson, Aki Vehtari, and Donald Rubin, Bayesian Data Analysis, http://www.stat.colu
What is the best introductory Bayesian statistics textbook? A good book from the basics to advanced, and which you can download, is Andrew Gelman, John Carlin, Hal Stern, David Dunson, Aki Vehtari, and Donald Rubin, Bayesian Data Analysis, http://www.stat.columbia.edu/~gelman/book/ You can also download the first two chapters of Richard McElreath, A Bayesian Course with Examples in R and Stan, https://xcelab.net/rm/statistical-rethinking/
What is the best introductory Bayesian statistics textbook? A good book from the basics to advanced, and which you can download, is Andrew Gelman, John Carlin, Hal Stern, David Dunson, Aki Vehtari, and Donald Rubin, Bayesian Data Analysis, http://www.stat.colu
558
Algorithms for automatic model selection
I think this approach is mistaken, but perhaps it will be more helpful if I explain why. Wanting to know the best model given some information about a large number of variables is quite understandable. Moreover, it is a situation in which people seem to find themselves regularly. In addition, many textbooks (and courses) on regression cover stepwise selection methods, which implies that they must be legitimate. Unfortunately, however, they are not, and the pairing of this situation and goal is quite difficult to successfully navigate. The following is a list of problems with automated stepwise model selection procedures (attributed to Frank Harrell, and copied from here): It yields R-squared values that are badly biased to be high. The F and chi-squared tests quoted next to each variable on the printout do not have the claimed distribution. The method yields confidence intervals for effects and predicted values that are falsely narrow; see Altman and Andersen (1989). It yields p-values that do not have the proper meaning, and the proper correction for them is a difficult problem. It gives biased regression coefficients that need shrinkage (the coefficients for remaining variables are too large; see Tibshirani [1996]). It has severe problems in the presence of collinearity. It is based on methods (e.g., F tests for nested models) that were intended to be used to test prespecified hypotheses. Increasing the sample size does not help very much; see Derksen and Keselman (1992). It allows us to not think about the problem. It uses a lot of paper. The question is, what's so bad about these procedures / why do these problems occur? Most people who have taken a basic regression course are familiar with the concept of regression to the mean, so this is what I use to explain these issues. (Although this may seem off-topic at first, bear with me, I promise it's relevant.) Imagine a high school track coach on the first day of tryouts. Thirty kids show up. These kids have some underlying level of intrinsic ability to which neither the coach nor anyone else, has direct access. As a result, the coach does the only thing he can do, which is have them all run a 100m dash. The times are presumably a measure of their intrinsic ability and are taken as such. However, they are probabilistic; some proportion of how well someone does is based on their actual ability, and some proportion is random. Imagine that the true situation is the following: set.seed(59) intrinsic_ability = runif(30, min=9, max=10) time = 31 - 2*intrinsic_ability + rnorm(30, mean=0, sd=.5) The results of the first race are displayed in the following figure along with the coach's comments to the kids. Note that partitioning the kids by their race times leaves overlaps on their intrinsic ability--this fact is crucial. After praising some, and yelling at some others (as coaches tend to do), he has them run again. Here are the results of the second race with the coach's reactions (simulated from the same model above): Notice that their intrinsic ability is identical, but the times bounced around relative to the first race. From the coach's point of view, those he yelled at tended to improve, and those he praised tended to do worse (I adapted this concrete example from the Kahneman quote listed on the wiki page), although actually regression to the mean is a simple mathematical consequence of the fact that the coach is selecting athletes for the team based on a measurement that is partly random. 
Now, what does this have to do with automated (e.g., stepwise) model selection techniques? Developing and confirming a model based on the same dataset is sometimes called data dredging. Although there is some underlying relationship amongst the variables, and stronger relationships are expected to yield stronger scores (e.g., higher t-statistics), these are random variables, and the realized values contain error. Thus, when you select variables based on having higher (or lower) realized values, they may be such because of their underlying true value, error, or both. If you proceed in this manner, you will be as surprised as the coach was after the second race. This is true whether you select variables based on having high t-statistics, or low intercorrelations. True, using the AIC is better than using p-values, because it penalizes the model for complexity, but the AIC is itself a random variable (if you run a study several times and fit the same model, the AIC will bounce around just like everything else). Unfortunately, this is just a problem intrinsic to the epistemic nature of reality itself. I hope this is helpful.
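To see this effect numerically, here is a small R sketch (not part of the original answer; the data and settings are purely illustrative) in which stepwise selection is run on predictors that are pure noise. Despite there being no real relationships at all, step() typically retains several variables, with an optimistic R-squared and "significant"-looking coefficients - exactly the behaviour described above.

# Illustrative sketch: stepwise selection applied to pure noise
set.seed(1)
n <- 100; p <- 50
X <- as.data.frame(matrix(rnorm(n * p), n, p))
names(X) <- paste0("x", 1:p)
X$y <- rnorm(n)                          # y is independent of every predictor

full   <- lm(y ~ ., data = X)
chosen <- step(full, direction = "backward", trace = 0)

summary(chosen)                          # several "significant"-looking coefficients
summary(chosen)$r.squared                # optimistically high R-squared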
Algorithms for automatic model selection
I think this approach is mistaken, but perhaps it will be more helpful if I explain why. Wanting to know the best model given some information about a large number of variables is quite understandabl
Algorithms for automatic model selection I think this approach is mistaken, but perhaps it will be more helpful if I explain why. Wanting to know the best model given some information about a large number of variables is quite understandable. Moreover, it is a situation in which people seem to find themselves regularly. In addition, many textbooks (and courses) on regression cover stepwise selection methods, which implies that they must be legitimate. Unfortunately, however, they are not, and the pairing of this situation and goal is quite difficult to successfully navigate. The following is a list of problems with automated stepwise model selection procedures (attributed to Frank Harrell, and copied from here): It yields R-squared values that are badly biased to be high. The F and chi-squared tests quoted next to each variable on the printout do not have the claimed distribution. The method yields confidence intervals for effects and predicted values that are falsely narrow; see Altman and Andersen (1989). It yields p-values that do not have the proper meaning, and the proper correction for them is a difficult problem. It gives biased regression coefficients that need shrinkage (the coefficients for remaining variables are too large; see Tibshirani [1996]). It has severe problems in the presence of collinearity. It is based on methods (e.g., F tests for nested models) that were intended to be used to test prespecified hypotheses. Increasing the sample size does not help very much; see Derksen and Keselman (1992). It allows us to not think about the problem. It uses a lot of paper. The question is, what's so bad about these procedures / why do these problems occur? Most people who have taken a basic regression course are familiar with the concept of regression to the mean, so this is what I use to explain these issues. (Although this may seem off-topic at first, bear with me, I promise it's relevant.) Imagine a high school track coach on the first day of tryouts. Thirty kids show up. These kids have some underlying level of intrinsic ability to which neither the coach nor anyone else, has direct access. As a result, the coach does the only thing he can do, which is have them all run a 100m dash. The times are presumably a measure of their intrinsic ability and are taken as such. However, they are probabilistic; some proportion of how well someone does is based on their actual ability, and some proportion is random. Imagine that the true situation is the following: set.seed(59) intrinsic_ability = runif(30, min=9, max=10) time = 31 - 2*intrinsic_ability + rnorm(30, mean=0, sd=.5) The results of the first race are displayed in the following figure along with the coach's comments to the kids. Note that partitioning the kids by their race times leaves overlaps on their intrinsic ability--this fact is crucial. After praising some, and yelling at some others (as coaches tend to do), he has them run again. Here are the results of the second race with the coach's reactions (simulated from the same model above): Notice that their intrinsic ability is identical, but the times bounced around relative to the first race. From the coach's point of view, those he yelled at tended to improve, and those he praised tended to do worse (I adapted this concrete example from the Kahneman quote listed on the wiki page), although actually regression to the mean is a simple mathematical consequence of the fact that the coach is selecting athletes for the team based on a measurement that is partly random. 
Now, what does this have to do with automated (e.g., stepwise) model selection techniques? Developing and confirming a model based on the same dataset is sometimes called data dredging. Although there is some underlying relationship amongst the variables, and stronger relationships are expected to yield stronger scores (e.g., higher t-statistics), these are random variables, and the realized values contain error. Thus, when you select variables based on having higher (or lower) realized values, they may be such because of their underlying true value, error, or both. If you proceed in this manner, you will be as surprised as the coach was after the second race. This is true whether you select variables based on having high t-statistics, or low intercorrelations. True, using the AIC is better than using p-values, because it penalizes the model for complexity, but the AIC is itself a random variable (if you run a study several times and fit the same model, the AIC will bounce around just like everything else). Unfortunately, this is just a problem intrinsic to the epistemic nature of reality itself. I hope this is helpful.
Algorithms for automatic model selection I think this approach is mistaken, but perhaps it will be more helpful if I explain why. Wanting to know the best model given some information about a large number of variables is quite understandabl
559
Algorithms for automatic model selection
Check out the caret package in R. It will help you cross-validate step-wise regression models (use method='lmStepAIC' or method='glmStepAIC'), and might help you understand how these sorts of models tend to have poor predictive performance. Furthermore, you can use the findCorrelation function in caret to identify and eliminate collinear variables, and the rfe function in caret to eliminate variables with a low t-statistic (use rfeControl=rfeControl(functions=lmFuncs)). However, as mentioned in the previous answers, these methods of variable selection are likely to get you in trouble, particularly if you do them iteratively. Make absolutely certain you evaluate your performance on a COMPLETELY held-out test set. Don't even look at the test set until you are happy with your algorithm! Finally, it might be better (and simpler) to use a predictive model with "built-in" feature selection, such as ridge regression, the lasso, or the elastic net. Specifically, try the method='glmnet' argument for caret, and compare the cross-validated accuracy of that model to the method='lmStepAIC' argument. My guess is that the former will give you much higher out-of-sample accuracy, and you don't have to worry about implementing and validating a custom variable selection algorithm.
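For concreteness, here is a minimal sketch of the comparison described above. The caret functions named in the answer (train, trainControl, findCorrelation) are used with illustrative, assumed settings and simulated data; it is meant as a starting point, not a definitive recipe.

library(caret)
set.seed(42)
n <- 200; p <- 20
X <- as.data.frame(matrix(rnorm(n * p), n, p))
names(X) <- paste0("x", 1:p)
dat <- cbind(X, y = 2 * X$x1 - 1.5 * X$x2 + rnorm(n))

# drop highly collinear predictors first (none in this simulated example)
high_cor <- findCorrelation(cor(X), cutoff = 0.9)
if (length(high_cor) > 0) dat <- dat[, -high_cor]

ctrl <- trainControl(method = "cv", number = 10)
step_fit   <- train(y ~ ., data = dat, method = "lmStepAIC",
                    trControl = ctrl, trace = FALSE)
glmnet_fit <- train(y ~ ., data = dat, method = "glmnet",
                    trControl = ctrl, tuneLength = 10)

min(step_fit$results$RMSE)               # cross-validated RMSE, stepwise
min(glmnet_fit$results$RMSE)             # cross-validated RMSE, elastic net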
Algorithms for automatic model selection
Check out the caret package in R. It will help you cross-validate step-wise regression models (use method='lmStepAIC' or method='glmStepAIC'), and might help you understand how these sorts of models
Algorithms for automatic model selection Check out the caret package in R. It will help you cross-validate step-wise regression models (use method='lmStepAIC' or method='glmStepAIC'), and might help you understand how these sorts of models tend to have poor predictive performance. Furthermore, you can use the findCorrelation function in caret to identify and eliminate collinear variables, and the rfe function in caret to eliminate variables with a low t-statistic (use rfeControl=rfeControl(functions=lmFuncs)). However, as mentioned in the previous answers, these methods of variable selection are likely to get you in trouble, particularly if you do them iteratively. Make absolutely certain you evaluate your performance on a COMPLETELY held-out test set. Don't even look at the test set until you are happy with your algorithm! Finally, it might be better (and simpler) to use predictive model with "built-in" feature selection, such as ridge regression, the lasso, or the elastic net. Specifically, try the method=glmnet argument for caret, and compare the cross-validated accuracy of that model to the method=lmStepAIC argument. My guess is that the former will give you much higher out-of-sample accuracy, and you don't have to worry about implementing and validating your custom variable selection algorithm.
Algorithms for automatic model selection Check out the caret package in R. It will help you cross-validate step-wise regression models (use method='lmStepAIC' or method='glmStepAIC'), and might help you understand how these sorts of models
560
Algorithms for automatic model selection
I fully concur with the problems outlined by @gung. That said, realistically speaking, model selection is a real problem in need of a real solution. Here's something I would use in practice: (1) split your data into training, validation, and test sets; (2) train models on your training set; (3) measure model performance on the validation set using a metric such as prediction RMSE, and choose the model with the lowest prediction error; (4) devise new models as necessary, and repeat steps 2-3; (5) report how well the final model performs on the test set. For an example of the use of this method in the real world, I believe it was used in the Netflix Prize competition.
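A minimal R sketch of this workflow, with simulated data and illustrative candidate models (the variable names and model forms are assumptions for the example):

set.seed(123)
n <- 300
dat <- data.frame(x1 = rnorm(n), x2 = rnorm(n), x3 = rnorm(n))
dat$y <- 1 + 2 * dat$x1 - dat$x2 + rnorm(n)

idx <- sample(rep(c("train", "valid", "test"), length.out = n))
tr <- dat[idx == "train", ]; va <- dat[idx == "valid", ]; te <- dat[idx == "test", ]

rmse <- function(fit, newdata) sqrt(mean((newdata$y - predict(fit, newdata))^2))

# candidate models are fit on the training set only
m1 <- lm(y ~ x1, data = tr)
m2 <- lm(y ~ x1 + x2, data = tr)
m3 <- lm(y ~ x1 + x2 + x3, data = tr)

sapply(list(m1, m2, m3), rmse, newdata = va)   # pick the lowest validation RMSE
rmse(m2, te)   # report the chosen model (here, say m2) once, on the test set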
Algorithms for automatic model selection
I fully concur with the problems outlined by @gung. That said, realistically speaking, model selection is a real problem in need of a real solution. Here's something I would use in practice. Split yo
Algorithms for automatic model selection I fully concur with the problems outlined by @gung. That said, realistically speaking, model selection is a real problem in need of a real solution. Here's something I would use in practice. Split your data into training, validation, and test sets. Train models on your training set. Measure model performance on the validation set using a metric such as prediction RMSE, and choose the model with the lowest prediction error. Devise new models as necessary, repeat steps 2-3. Report how well the model performs on the test set. For an example of the use of this method in the real world, I believe that it was used in the Netflix Prize competition.
Algorithms for automatic model selection I fully concur with the problems outlined by @gung. That said, realistically speaking, model selection is a real problem in need of a real solution. Here's something I would use in practice. Split yo
561
Algorithms for automatic model selection
To answer the question, there are several options: (1) all-subset selection by AIC/BIC; (2) stepwise selection by p-value; (3) stepwise selection by AIC/BIC; (4) regularisation such as the LASSO (which can be tuned by either AIC/BIC or CV); (5) genetic algorithms (GA); (6) others?; (7) non-automatic, theory-oriented ("subject matter knowledge") selection. The next question is which method is better. This paper (doi:10.1016/j.amc.2013.05.016) indicates that "all possible regression" gave the same results as their proposed new method and that stepwise is worse; a simple GA falls between them. This paper (DOI:10.1080/10618600.1998.10474784) compares penalized regression (Bridge, LASSO, etc.) with "leaps-and-bounds" (an exhaustive search algorithm, but quicker) and also found that "the bridge model agrees with the best model from the subset selection by the leaps and bounds method". This paper (doi:10.1186/1471-2105-15-88) shows that a GA is better than the LASSO. This paper (DOI:10.1198/jcgs.2009.06164) proposed a method that is essentially an all-subset (BIC-based) approach but cleverly reduces the computation time; the authors demonstrate that it is better than the LASSO. Interestingly, this paper (DOI: 10.1111/j.1461-0248.2009.01361.x) shows that methods (1)-(3) produce similar performance. So overall the results are mixed, but I get the impression that GAs perform very well, although stepwise may not be too bad and is quick. As for (7), non-automatic, theory-oriented ("subject matter knowledge") selection: it is time consuming and not necessarily better than the automatic methods. In fact, in the time-series literature it is well established that automated methods (especially in commercial software) outperform human experts "by a substantial margin" (doi:10.1016/S0169-2070(01)00119-4, page 561; e.g. selecting among various exponential smoothing and ARIMA models).
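As an illustration of options (1) and (4), here is a hedged R sketch using the leaps and glmnet packages on simulated data; the data-generating model and settings are assumptions made for the example, not part of any of the cited papers.

library(leaps)
library(glmnet)
set.seed(7)
n <- 150; p <- 12
X <- matrix(rnorm(n * p), n, p)
colnames(X) <- paste0("x", 1:p)
y <- 1.5 * X[, 1] - X[, 3] + rnorm(n)

# (1) all-subset search scored by BIC
rs   <- regsubsets(x = X, y = y, nvmax = p)
best <- which.min(summary(rs)$bic)
coef(rs, id = best)                      # variables in the best-BIC subset

# (4) LASSO with the penalty chosen by 10-fold cross-validation
cvfit <- cv.glmnet(X, y, alpha = 1)
coef(cvfit, s = "lambda.min")            # nonzero rows are the selected variables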
Algorithms for automatic model selection
To answer the question, there are several options: all-subset by AIC/BIC stepwise by p-value stepwise by AIC/BIC regularisation such as LASSO (can be based on either AIC/BIC or CV) genetic algor
Algorithms for automatic model selection To answer the question, there are several options: all-subset by AIC/BIC stepwise by p-value stepwise by AIC/BIC regularisation such as LASSO (can be based on either AIC/BIC or CV) genetic algorithm (GA) others? use of non-automatic, theory ("subject matter knowledge") oriented selection Next question would be which method is better. This paper (doi:10.1016/j.amc.2013.05.016) indicates “all possible regression” gave the same results to their proposed new method and stepwise is worse. A simple GA is between them. This paper (DOI:10.1080/10618600.1998.10474784) compares penalized regression (Bridge, Lasso etc) with “leaps-and-bounds” (seems an exhaustive search algorithm but quicker) and also found “the bridge model agrees with the best model from the subset selection by the leaps and bounds method”. This paper (doi:10.1186/1471-2105-15-88) shows GA is better than LASSO. This paper (DOI:10.1198/jcgs.2009.06164) proposed a method - essentially an all-subset (based on BIC) approach but cleverly reduce the computation time. They demonstrate this method is better than LASSO. Interestingly, this paper (DOI: 10.1111/j.1461-0248.2009.01361.x) shows methods (1)-(3) produce similar performance. So overall the results are mixed but I got an impression that GA seems very good although stepwise may not be too bad and it is quick. As for 7), the use of non-automatic, theory ("subject matter knowledge") oriented selection. It is time consuming and it is not necessarily better than automatic method. In fact in time-series literature, it is well established that automated method (especially commercial software) outperforms human experts "by a substantial margin" (doi:10.1016/S0169-2070(01)00119-4, page561 e.g. selecting various exponential smoothing and ARIMA models).
Algorithms for automatic model selection To answer the question, there are several options: all-subset by AIC/BIC stepwise by p-value stepwise by AIC/BIC regularisation such as LASSO (can be based on either AIC/BIC or CV) genetic algor
562
Algorithms for automatic model selection
Here's an answer out of left field: instead of using linear regression, use a regression tree (rpart package). This is suitable for automatic model selection because, with a little work, you can automate the selection of cp, the parameter used to avoid over-fitting.
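A rough sketch of what that automation might look like, assuming simulated data and the simple "minimum cross-validated error" rule for choosing cp (the 1-SE rule is another common choice):

library(rpart)
set.seed(11)
n <- 400
dat <- data.frame(x1 = runif(n), x2 = runif(n), x3 = runif(n))
dat$y <- sin(2 * pi * dat$x1) + dat$x2^2 + rnorm(n, sd = 0.3)

fit <- rpart(y ~ ., data = dat, control = rpart.control(cp = 0, xval = 10))

tab     <- fit$cptable                   # cross-validated error (xerror) per cp
best_cp <- tab[which.min(tab[, "xerror"]), "CP"]
pruned  <- prune(fit, cp = best_cp)      # automatically pruned tree
pruned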
Algorithms for automatic model selection
Here's an answer out of left field- instead of using linear regression, use a regression tree (rpart package). This is suitable for automatic model selection because with a little work you can automat
Algorithms for automatic model selection Here's an answer out of left field- instead of using linear regression, use a regression tree (rpart package). This is suitable for automatic model selection because with a little work you can automate the selection of cp, the parameter used to avoid over-fitting.
Algorithms for automatic model selection Here's an answer out of left field- instead of using linear regression, use a regression tree (rpart package). This is suitable for automatic model selection because with a little work you can automat
563
Algorithms for automatic model selection
A linear model can be optimised by implementing a genetic algorithm to choose the most valuable independent variables. The variables are represented as genes in the algorithm, and the best chromosome (set of genes) is then selected after crossover, mutation and other operators. The approach is based on natural selection - only the best 'generation' survives; in other words, the algorithm optimises an estimation function that depends on the particular model.
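As a hypothetical illustration of this idea, the sketch below uses the GA package, encoding each candidate model as a 0/1 chromosome over the predictors and using the negated AIC of the corresponding linear model as the fitness; all data and settings are assumptions for the example.

library(GA)
set.seed(3)
n <- 120; p <- 10
X <- as.data.frame(matrix(rnorm(n * p), n, p))
names(X) <- paste0("x", 1:p)
dat <- cbind(X, y = 2 * X$x1 - X$x4 + rnorm(n))

fitness_fun <- function(bits) {
  # bits is a 0/1 vector saying which predictors enter the linear model
  f <- if (sum(bits) == 0) y ~ 1 else reformulate(names(X)[bits == 1], "y")
  -AIC(lm(f, data = dat))                # ga() maximises, so negate the AIC
}

res <- ga(type = "binary", fitness = fitness_fun, nBits = p,
          popSize = 50, maxiter = 100, run = 30, monitor = FALSE)

names(X)[res@solution[1, ] == 1]         # variables chosen by the best chromosome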
Algorithms for automatic model selection
linear model can be optimised by implementing genetic algorithm in the way of choosing most valuable independant variables. The variables are represented as genes in the algorithm, and the best chromo
Algorithms for automatic model selection linear model can be optimised by implementing genetic algorithm in the way of choosing most valuable independant variables. The variables are represented as genes in the algorithm, and the best chromosome (set of genes) are then being selected after crossover, mutation etc. operators. It is based on natural selection - then best 'generation' may survive, in other words, the algorithm optimises estimation function that depends on the particular model.
Algorithms for automatic model selection linear model can be optimised by implementing genetic algorithm in the way of choosing most valuable independant variables. The variables are represented as genes in the algorithm, and the best chromo
564
Algorithms for automatic model selection
The answers here advise against variable selection, but the problem is real ... and it is still done. One idea that should be tried out more in practice is blind analysis, as discussed in the Nature paper "Blind analysis: Hide results to seek the truth". This idea has been mentioned in another post on this site, Multiple comparison and secondary research. The idea of blinding the data, or of introducing extra, simulated noise variables, has certainly been used in simulation studies to show the problems with stepwise selection, but the idea here is to use it, blinded, in actual data analysis.
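A small illustrative sketch of the "extra simulated noise variables" idea (simulated data; the backward step() call is just one possible selection procedure to stress-test):

set.seed(2024)
n <- 150
real  <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
noise <- as.data.frame(matrix(rnorm(n * 10), n, 10))
names(noise) <- paste0("noise", 1:10)
dat   <- cbind(real, noise)
dat$y <- 1 + 2 * dat$x1 + rnorm(n)       # only x1 truly matters

fit <- step(lm(y ~ ., data = dat), direction = "backward", trace = 0)
names(coef(fit))                         # any surviving "noise" term is a red flag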
Algorithms for automatic model selection
Answers here advises against variable selection, but the problem is real ... and still done. One idea that should be tried out more in practice is blind analyses, as discussed in this nature paper Bli
Algorithms for automatic model selection Answers here advises against variable selection, but the problem is real ... and still done. One idea that should be tried out more in practice is blind analyses, as discussed in this nature paper Blind analysis: Hide results to seek the truth. This idea has been mentioned in another post at this site, Multiple comparison and secondary research. The idea of blinding data or introducing extra, simulated noise variables have certainly been used in simulation studies to show problems with stepwise, but the idea here is to use it, blinded, in actual data analysis.
Algorithms for automatic model selection Answers here advises against variable selection, but the problem is real ... and still done. One idea that should be tried out more in practice is blind analyses, as discussed in this nature paper Bli
565
Algorithms for automatic model selection
I see my question generated lots of interest and an interesting debate about the validity of the automatic model selection approach. While I agree that taking the result of an automatic selection for granted is risky, it can be used as a starting point. So here is how I implemented it for my particular problem, which is to find the best n factors to explain a given variable: (1) run all the regressions of the variable against each individual factor; (2) sort the regressions by a given criterion (say AIC); (3) remove the factors that have a low t-stat: they are useless in explaining our variable; (4) following the order given in (2), try to add the factors one by one to the model, and keep them when they improve our criterion; (5) iterate over all the factors. Again, this is very rough, and there may be ways to improve the methodology, but that is my starting point. I am posting this answer hoping it can be useful for someone else. Comments are welcome!
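A rough R sketch of this heuristic on simulated data (the |t| > 2 cutoff and the AIC criterion are the assumptions described in the steps above; all the caveats from the other answers still apply):

set.seed(5)
n <- 200
factors <- as.data.frame(matrix(rnorm(n * 8), n, 8))
names(factors) <- paste0("f", 1:8)
dat <- cbind(factors, y = 1.2 * factors$f2 - 0.8 * factors$f5 + rnorm(n))

# steps (1)-(3): univariate regressions, ranked by AIC, low-|t| factors dropped
uni   <- lapply(names(factors), function(v) lm(reformulate(v, "y"), data = dat))
aics  <- sapply(uni, AIC)
tvals <- sapply(uni, function(m) abs(summary(m)$coefficients[2, "t value"]))
keep  <- names(factors)[order(aics)]
keep  <- keep[keep %in% names(factors)[tvals > 2]]

# step (4): add factors one by one, keeping those that improve the criterion
current <- lm(y ~ 1, data = dat); selected <- character(0)
for (v in keep) {
  cand <- lm(reformulate(c(selected, v), "y"), data = dat)
  if (AIC(cand) < AIC(current)) { current <- cand; selected <- c(selected, v) }
}
selected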
Algorithms for automatic model selection
I see my question generated lots of interest and an interesting debate about the validity of the automatic model selection approach. While I agree that taking for granted the result of an automatic se
Algorithms for automatic model selection I see my question generated lots of interest and an interesting debate about the validity of the automatic model selection approach. While I agree that taking for granted the result of an automatic selection is risky, it can be used as a starting point. So here is how I implemented it for my particular problem, which is to find the best n factors to explain a given variable do all the regressions variable vs individual factors sort the regression by a given criterion (say AIC) remove the factors that have a low t-stat: they are useless in explaining our variable with the order given in 2., try to add the factors one by one to the model, and keep them when they improve our criterion. iterate for all the factors. Again, this is very rough, there may be ways to improve the methodology, but that is my starting point. I am posting this answer hoping it can be useful for someone else. Comments are welcome!
Algorithms for automatic model selection I see my question generated lots of interest and an interesting debate about the validity of the automatic model selection approach. While I agree that taking for granted the result of an automatic se
566
When (and why) should you take the log of a distribution (of numbers)?
If you assume a model form that is non-linear but can be transformed to a linear model, such as $\log Y = \beta_0 + \beta_1 t$, then one is justified in taking logarithms of $Y$ to meet the specified model form. In general, whether or not you have causal series, the only time you would be justified or correct in taking the log of $Y$ is when it can be shown that the variance of $Y$ is proportional to the square of the expected value of $Y$, i.e. $\sigma^2 \propto (E(Y))^2$. I don't remember the original source for the following, but it nicely summarizes the role of power transformations. It is important to note that the distributional assumptions are always about the error process, not the observed $Y$; thus it is a definite "no-no" to analyze the original series for an appropriate transformation unless the series is defined by a simple constant. Unwarranted or incorrect transformations, including differences, should be studiously avoided, as they are often an ill-fashioned/ill-conceived attempt to deal with unidentified anomalies/level shifts/time trends or changes in parameters or changes in error variance. A classic example of this is discussed starting at slide 60 here http://www.autobox.com/cms/index.php/afs-university/intro-to-forecasting/doc_download/53-capabilities-presentation where three pulse anomalies (untreated) led to an unwarranted log transformation by early researchers. Unfortunately some of our current researchers are still making the same mistake. Several commonly used variance-stabilizing transformations (relationship of $\sigma^2$ to $E(y)$, and the corresponding transformation): $\sigma^2 \propto$ constant: $y'=y$ (no transformation); $\sigma^2 \propto E(y)$: $y' = \sqrt y$ (square root; Poisson data); $\sigma^2 \propto E(y)(1-E(y))$: $y' = \sin^{-1}(\sqrt y)$ (arcsine; binomial proportions $0\le y_i \le 1$); $\sigma^2 \propto (E(y))^2$: $y'=\log(y)$; $\sigma^2 \propto (E(y))^3$: $y' = y^{-1/2}$ (reciprocal square root); $\sigma^2 \propto (E(y))^4$: $y' = y^{-1}$ (reciprocal). The optimal power transformation can be found via the Box-Cox procedure, where $\lambda=-1$ is a reciprocal, $-0.5$ a reciprocal square root, $0$ a log transformation, $0.5$ a square root transform, and $1$ no transform. Note that when you have no predictor/causal/supporting input series, the model is $Y_t=\mu +a_t$, and that no requirements are made about the distribution of $Y$, but they are made about $a_t$, the error process. In this case the distributional requirements about $a_t$ pass directly on to $Y_t$. When you have supporting series, such as in a regression or an autoregressive-moving-average model with exogenous inputs (ARMAX model), the distributional assumptions are all about $a_t$ and have nothing whatsoever to do with the distribution of $Y_t$. Thus in the case of an ARIMA model or an ARMAX model, one would never assume any transformation on $Y$ before finding the optimal Box-Cox transformation, which would then suggest the remedy (transformation) for $Y$. In earlier times some analysts would transform both $Y$ and $X$ in a presumptive way just to be able to reflect upon the percent change in $Y$ as a result of a percent change in $X$ by examining the regression coefficient between $\log Y$ and $\log X$. In summary, transformations are like drugs: some are good and some are bad for you! They should only be used when necessary, and then with caution.
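As a practical illustration of the Box-Cox idea, here is a hedged R sketch using MASS::boxcox on simulated data in which the error is multiplicative (so the variance is proportional to the squared mean and the "right" answer is a log transform); the simulated model is an assumption for the example only.

library(MASS)
set.seed(8)
n <- 200
x <- runif(n, 1, 10)
y <- exp(0.5 + 0.3 * x) * exp(rnorm(n, sd = 0.25))   # multiplicative error

bc     <- boxcox(lm(y ~ x), lambda = seq(-1, 1.5, 0.05), plotit = FALSE)
lambda <- bc$x[which.max(bc$y)]
lambda                                   # should land near 0, i.e. a log transform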
When (and why) should you take the log of a distribution (of numbers)?
If you assume a model form that is non-linear but can be transformed to a linear model such as $\log Y = \beta_0 + \beta_1t$ then one would be justified in taking logarithms of $Y$ to meet the specifi
When (and why) should you take the log of a distribution (of numbers)? If you assume a model form that is non-linear but can be transformed to a linear model such as $\log Y = \beta_0 + \beta_1t$ then one would be justified in taking logarithms of $Y$ to meet the specified model form. In general whether or not you have causal series , the only time you would be justified or correct in taking the Log of $Y$ is when it can be proven that the Variance of $Y$ is proportional to the Expected Value of $Y^2$ . I don't remember the original source for the following but it nicely summarizes the role of power transformations. It is important to note that the distributional assumptions are always about the error process not the observed Y, thus it is a definite "no-no" to analyze the original series for an appropriate transformation unless the series is defined by a simple constant. Unwarranted or incorrect transformations including differences should be studiously avoided as they are often an ill-fashioned /ill-conceived attempt to deal with unidentified anomalies/level shifts/time trends or changes in parameters or changes in error variance. A classic example of this is discussed starting at slide 60 here http://www.autobox.com/cms/index.php/afs-university/intro-to-forecasting/doc_download/53-capabilities-presentation where three pulse anomalies (untreated) led to an unwarranted log transformation by early researchers. Unfortunately some of our current researchers are still making the same mistake. Several common used variance-stabilizing transformations Relationship of $\sigma^2$ to $E(y)$ Transformation $\sigma^2 \propto$ constant $y'=y$ (no transformation) $\sigma^2 \propto E(y)$ $y' = \sqrt y$ (square root: Poisson data) $\sigma^2 \propto E(y)(1-E(y))$ $y' = sin^{-1}(\sqrt y)$ (arcsin; binomial proportions $0\le y_i \le 1$) $\sigma^2 \propto (E(y))^2$ $y'=log(y)$ $\sigma^2 \propto (E(y))^3$ $y' = y^{-1/2}$ (reciprocal square root) $\sigma^2 \propto (E(y))^4$ $y' = y^{-1}$ (reciprocal) The optimal power transformation is found via the Box-Cox Test where -1. is a reciprocal -.5 is a recriprocal square root 0.0 is a log transformation .5 is a square toot transform and 1.0 is no transform. Note that when you have no predictor/causal/supporting input series, the model is $Y_t=u +a_t$ and that there are no requirements made about the distribution of $Y$ BUT are made about $a_t$, the error process. In this case the distributional requirements about $a_t$ pass directly on to $Y_t$. When you have supporting series such as in a regression or in a Autoregressive–moving-average model with exogenous inputs model (ARMAX model) the distributional assumptions are all about $a_t$ and have nothing whatsoever to do with the distribution of $Y_t$. Thus in the case of ARIMA model or an ARMAX Model one would never assume any transformation on $Y$ before finding the optimal Box-Cox transformation which would then suggest the remedy (transformation) for $Y$. In earlier times some analysts would transform both $Y$ and $X$ in a presumptive way just to be able to reflect upon the percent change in $Y$ as a result in the percent change in $X$ by examining the regression coefficient between $\log Y$ and $\log X$. In summary, transformations are like drugs some are good and some are bad for you! They should only be used when necessary and then with caution.
When (and why) should you take the log of a distribution (of numbers)? If you assume a model form that is non-linear but can be transformed to a linear model such as $\log Y = \beta_0 + \beta_1t$ then one would be justified in taking logarithms of $Y$ to meet the specifi
567
When (and why) should you take the log of a distribution (of numbers)?
Log-scale informs on relative changes (multiplicative), while linear-scale informs on absolute changes (additive). When do you use each? When you care about relative changes, use the log-scale; when you care about absolute changes, use linear-scale. This is true for distributions, but also for any quantity or changes in quantities. Note, I use the word "care" here very specifically and intentionally. Without a model or a goal, your question cannot be answered; the model or goal defines which scale is important. If you're trying to model something, and the mechanism acts via a relative change, log-scale is critical to capturing the behavior seen in your data. But if the underlying model's mechanism is additive, you'll want to use linear-scale. Example. Stock market. Stock A on day 1: $\$$100. On day 2, $\$$101. Every stock tracking service in the world reports this change in two ways! (1) +$\$$1. (2) +1%. The first is a measure of absolute, additive change; the second a measure of relative change. Illustration of relative change vs absolute: Relative change is the same, absolute change is different Stock A goes from $\$$1 to $\$$1.10. Stock B goes from $\$$100 to $\$$110. Stock A gained 10%, stock B gained 10% (relative scale, equal) ...but stock A gained 10 cents, while stock B gained $\$$10 (B gained more absolute dollar amount) If we convert to log space, relative changes appear as absolute changes. Stock A goes from $\log_{10}(\$1)$ to $\log_{10}(\$1.10)$ = 0 to .0413 Stock B goes from $\log_{10}(\$100)$ to $\log_{10}(\$110)$ = 2 to 2.0413 Now, taking the absolute difference in log space, we find that both changed by .0413. Both of these measures of change are important, and which one is important to you depends solely on your model of investing. There are two models. (1) Investing a fixed amount of principal, or (2) investing in a fixed number of shares. Model 1: Investing with a fixed amount of principal. Say yesterday stock A cost $\$$1 per share, and stock B costs $\$$100 a share. Today they both went up by one dollar to $\$$2 and $\$$101 respectively. Their absolute change is identical ($\$$1), but their relative change is dramatically different (100% for A, 1% for B). Given that you have a fixed amount of principal to invest, say $\$$100, you can only afford 1 share of B or 100 shares of A. If you invested yesterday you'd have $\$$200 with A, or $\$$101 with B. So here you "care" about the relative gains, specifically because you have a finite amount of principal. Model 2: fixed number of shares. In a different scenario, suppose your bank only lets you buy in blocks of 100 shares, and you've decided to invest in 100 shares of A or B. In the previous case, whether you buy A or B your gains will be the same ($\$$100 - i.e. $1 for each share). Now suppose we think of a stock value as a random variable fluctuating over time, and we want to come up with a model that reflects generally how stocks behave. And let's say we want to use this model to maximize profit. We compute a probability distribution whose x-values are in units of 'share price', and y-values in probability of observing a given share price. We do this for stock A, and stock B. If you subscribe to the first scenario, where you have a fixed amount of principal you want to invest, then taking the log of these distributions will be informative. Why? What you care about is the shape of the distribution in relative space. Whether a stock goes from 1 to 10, or 10 to 100 doesn't matter to you, right? 
Both cases are a 10-fold relative gain. This appears naturally in a log-scale distribution in that unit gains correspond to fold gains directly. For two stocks whose mean value is different but whose relative change is identically distributed (they have the same distribution of daily percent changes), their log distributions will be identical in shape just shifted. Conversely, their linear distributions will not be identical in shape, with the higher valued distribution having a higher variance. If you were to look at these same distributions in linear, or absolute space, you would think that higher-valued share prices correspond to greater fluctuations. For your investing purposes though, where only relative gains matter, this is not necessarily true. Example 2. Chemical reactions. Suppose we have two molecules A and B that undergo a reversible reaction. $A\Leftrightarrow B$ which is defined by the individual rate constants ($k_{ab}$) $A\Rightarrow B$ ($k_{ba}$) $B\Rightarrow A$ Their equilibrium is defined by the relationship: $K=\frac{k_{ab}}{k_{ba}}=\frac{[A]}{[B]}$ Two points here. (1) This is a multiplicative relationship between the concentrations of $A$ and $B$. (2) This relationship isn't arbitrary, but rather arises directly from the fundamental physical-chemical properties that govern molecules bumping into each other and reacting. Now suppose we have some distribution of A or B's concentration. The appropriate scale of that distribution is in log-space, because the model of how either concentration changes is defined multiplicatively (the product of A's concentration with the inverse of B's concentration). In some alternate universe where $K^*=k_{ab}-k_{ba}=[A]-[B]$, we might look at this concentration distribution in absolute, linear space. That said, if you have a model, be it for stock market prediction or chemical kinetics, you can always interconvert 'losslessly' between linear and log space, so long as your range of values is $(0,\inf)$. Whether you choose to look at the linear or log-scale distribution depends on what you're trying to obtain from the data. EDIT. An interesting parallel that helped me build intuition is the example of arithmetic means vs geometric means. An arithmetic (vanilla) mean computes the average of numbers assuming a hidden model where absolute differences are what matter. Example. The arithmetic mean of 1 and 100 is 50.5. Suppose we're talking about concentrations though, where the chemical relationship between concentrations is multiplicative. Then the average concentration should really be computed on the log scale. This is called the geometric average. The geometric average of 1 and 100 is 10! In terms of relative differences, this makes sense: 10/1 = 10, and 100/10 = 10, ie., the relative change between the average and two values is the same. Additively we find the same thing; 50.5-1= 49.5, and 100-50.5 = 49.5.
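A tiny numerical check of these points in R (the prices are the illustrative numbers used above):

a <- c(1, 1.10)                  # stock A: $1 -> $1.10
b <- c(100, 110)                 # stock B: $100 -> $110

diff(a); diff(b)                 # absolute changes differ: 0.10 vs 10
diff(log10(a)); diff(log10(b))   # log changes agree: both about 0.041

exp(mean(log(c(1, 100))))        # geometric mean of 1 and 100 is 10, not 50.5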
When (and why) should you take the log of a distribution (of numbers)?
Log-scale informs on relative changes (multiplicative), while linear-scale informs on absolute changes (additive). When do you use each? When you care about relative changes, use the log-scale; when y
When (and why) should you take the log of a distribution (of numbers)? Log-scale informs on relative changes (multiplicative), while linear-scale informs on absolute changes (additive). When do you use each? When you care about relative changes, use the log-scale; when you care about absolute changes, use linear-scale. This is true for distributions, but also for any quantity or changes in quantities. Note, I use the word "care" here very specifically and intentionally. Without a model or a goal, your question cannot be answered; the model or goal defines which scale is important. If you're trying to model something, and the mechanism acts via a relative change, log-scale is critical to capturing the behavior seen in your data. But if the underlying model's mechanism is additive, you'll want to use linear-scale. Example. Stock market. Stock A on day 1: $\$$100. On day 2, $\$$101. Every stock tracking service in the world reports this change in two ways! (1) +$\$$1. (2) +1%. The first is a measure of absolute, additive change; the second a measure of relative change. Illustration of relative change vs absolute: Relative change is the same, absolute change is different Stock A goes from $\$$1 to $\$$1.10. Stock B goes from $\$$100 to $\$$110. Stock A gained 10%, stock B gained 10% (relative scale, equal) ...but stock A gained 10 cents, while stock B gained $\$$10 (B gained more absolute dollar amount) If we convert to log space, relative changes appear as absolute changes. Stock A goes from $\log_{10}(\$1)$ to $\log_{10}(\$1.10)$ = 0 to .0413 Stock B goes from $\log_{10}(\$100)$ to $\log_{10}(\$110)$ = 2 to 2.0413 Now, taking the absolute difference in log space, we find that both changed by .0413. Both of these measures of change are important, and which one is important to you depends solely on your model of investing. There are two models. (1) Investing a fixed amount of principal, or (2) investing in a fixed number of shares. Model 1: Investing with a fixed amount of principal. Say yesterday stock A cost $\$$1 per share, and stock B costs $\$$100 a share. Today they both went up by one dollar to $\$$2 and $\$$101 respectively. Their absolute change is identical ($\$$1), but their relative change is dramatically different (100% for A, 1% for B). Given that you have a fixed amount of principal to invest, say $\$$100, you can only afford 1 share of B or 100 shares of A. If you invested yesterday you'd have $\$$200 with A, or $\$$101 with B. So here you "care" about the relative gains, specifically because you have a finite amount of principal. Model 2: fixed number of shares. In a different scenario, suppose your bank only lets you buy in blocks of 100 shares, and you've decided to invest in 100 shares of A or B. In the previous case, whether you buy A or B your gains will be the same ($\$$100 - i.e. $1 for each share). Now suppose we think of a stock value as a random variable fluctuating over time, and we want to come up with a model that reflects generally how stocks behave. And let's say we want to use this model to maximize profit. We compute a probability distribution whose x-values are in units of 'share price', and y-values in probability of observing a given share price. We do this for stock A, and stock B. If you subscribe to the first scenario, where you have a fixed amount of principal you want to invest, then taking the log of these distributions will be informative. Why? What you care about is the shape of the distribution in relative space. 
Whether a stock goes from 1 to 10, or 10 to 100 doesn't matter to you, right? Both cases are a 10-fold relative gain. This appears naturally in a log-scale distribution in that unit gains correspond to fold gains directly. For two stocks whose mean value is different but whose relative change is identically distributed (they have the same distribution of daily percent changes), their log distributions will be identical in shape just shifted. Conversely, their linear distributions will not be identical in shape, with the higher valued distribution having a higher variance. If you were to look at these same distributions in linear, or absolute space, you would think that higher-valued share prices correspond to greater fluctuations. For your investing purposes though, where only relative gains matter, this is not necessarily true. Example 2. Chemical reactions. Suppose we have two molecules A and B that undergo a reversible reaction. $A\Leftrightarrow B$ which is defined by the individual rate constants ($k_{ab}$) $A\Rightarrow B$ ($k_{ba}$) $B\Rightarrow A$ Their equilibrium is defined by the relationship: $K=\frac{k_{ab}}{k_{ba}}=\frac{[A]}{[B]}$ Two points here. (1) This is a multiplicative relationship between the concentrations of $A$ and $B$. (2) This relationship isn't arbitrary, but rather arises directly from the fundamental physical-chemical properties that govern molecules bumping into each other and reacting. Now suppose we have some distribution of A or B's concentration. The appropriate scale of that distribution is in log-space, because the model of how either concentration changes is defined multiplicatively (the product of A's concentration with the inverse of B's concentration). In some alternate universe where $K^*=k_{ab}-k_{ba}=[A]-[B]$, we might look at this concentration distribution in absolute, linear space. That said, if you have a model, be it for stock market prediction or chemical kinetics, you can always interconvert 'losslessly' between linear and log space, so long as your range of values is $(0,\inf)$. Whether you choose to look at the linear or log-scale distribution depends on what you're trying to obtain from the data. EDIT. An interesting parallel that helped me build intuition is the example of arithmetic means vs geometric means. An arithmetic (vanilla) mean computes the average of numbers assuming a hidden model where absolute differences are what matter. Example. The arithmetic mean of 1 and 100 is 50.5. Suppose we're talking about concentrations though, where the chemical relationship between concentrations is multiplicative. Then the average concentration should really be computed on the log scale. This is called the geometric average. The geometric average of 1 and 100 is 10! In terms of relative differences, this makes sense: 10/1 = 10, and 100/10 = 10, ie., the relative change between the average and two values is the same. Additively we find the same thing; 50.5-1= 49.5, and 100-50.5 = 49.5.
When (and why) should you take the log of a distribution (of numbers)? Log-scale informs on relative changes (multiplicative), while linear-scale informs on absolute changes (additive). When do you use each? When you care about relative changes, use the log-scale; when y
568
When (and why) should you take the log of a distribution (of numbers)?
I wanted to give an answer in the simplest form. If exponents are shorthand for repeated multiplication, and the log is the inverse of exponentiation, then taking the log of something is a form of (repeated) division. Take the simplest function form, y = C. Let C be 100,000, so we have y = 100,000. If we do a log10() transform we have y = 5. If we had another function on the same plot with y = 1,000,000, it would be hard to graph the two together given the range on the y axis. But if we use log10() on both, we now have the functions y = 5 and y = 6. Extend this to the simple linear form y = mx + C and you can see how powerful this can be as the numbers involved get increasingly large. To use a one-sentence analogy: a log transform is equivalent to the scale on a map that says 1 inch = 1 mile. We don't want a map where 1 mile = 1 mile. Logarithms scale down when we need it; exponents scale up. We use both for normalizing data.
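A tiny R illustration of the same point (illustrative values only):

y1 <- 1e5; y2 <- 1e6
log10(y1); log10(y2)             # 5 and 6: comparable after the transform

curve(exp(x), 0, 15)             # early values are squashed flat
curve(exp(x), 0, 15, log = "y")  # a straight line on a log-scaled y-axis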
When (and why) should you take the log of a distribution (of numbers)?
I wanted to give an answer in the simplist form. If exponents are short hand for multiplication, and log is the inverse of exponentiation, the taking the log of something is a form of division. Take t
When (and why) should you take the log of a distribution (of numbers)? I wanted to give an answer in the simplist form. If exponents are short hand for multiplication, and log is the inverse of exponentiation, the taking the log of something is a form of division. Take the simplest function form y = C. Let C be 100,000 so we have y=100,000. If ws dona log() transform we have y=5. If we had another function on the same plot of y=1,000,000 it would be hard to graph those together given the range on the y axis. But if we use log() on both now we have functions y=5 and y= 6. Extend this to simple linear form of y = mx + C and you can see how powerful this can be as things get increasing poweful. To use a one senetence analogy log transform is equivalent to the scale on a map that says 1in = 1 mile. We dont want a map where 1 mile = 1 mile.. Logarithms scale down when we need it. Exponents scale up. We use both for normalizing data
When (and why) should you take the log of a distribution (of numbers)? I wanted to give an answer in the simplist form. If exponents are short hand for multiplication, and log is the inverse of exponentiation, the taking the log of something is a form of division. Take t
569
When (and why) should you take the log of a distribution (of numbers)?
A practical answer: Why use logs? 1. To avoid numerical underflow/overflow. In statistical inference or parameter learning, it is very common to accumulate the product of a series of probability densities. But sometimes the individual densities are so small (or so big) that a computer cannot store their product. For example, suppose we want to calculate a likelihood $L=p_1 \cdot p_2$ where $p_1=8\times 10^{-300}$ and $p_2=6\times 10^{-300}$; if you multiply them together in a computer you will get $L=0$, because the true result, $4.8\times 10^{-599}$, is smaller than the smallest positive number the computer can represent. Hence we always use log probabilities or log probability densities during computation. 2. To improve model learning efficiency by exploiting log-concave/convex/linear structure. Parameter learning is in essence an optimisation problem, and if a function is concave/convex/linear then its optimal value can be found easily. Most of the common distributions we see are log-concave or log-convex, and some are even log-linear, which means that the log of the density function is concave/convex/linear; finding its optimal values in log space can therefore be much more efficient. When to use logs? As explained above, it is recommended to use log densities/probabilities for all inference and model learning.
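A short R illustration of the underflow point (the numbers from the example above, plus a simulated log-likelihood):

set.seed(1)
x <- rnorm(1000)
prod(dnorm(x))                   # 0: underflow (the true value is around 1e-600)
sum(dnorm(x, log = TRUE))        # the log-likelihood is a perfectly usable number

8e-300 * 6e-300                  # 0 in double precision ...
log(8e-300) + log(6e-300)        # ... but about -1377.7 on the log scale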
570
How to interpret a QQ plot
If the values lie along a line, the distribution has the same shape (up to location and scale) as the theoretical distribution we have supposed.

Local behaviour: when looking at sorted sample values on the y-axis and (approximate) expected quantiles on the x-axis, we can identify how the values in some section of the plot differ locally from an overall linear trend by seeing whether the values are more or less concentrated than the theoretical distribution would suppose in that section of the plot. Less concentrated points increase more rapidly, and more concentrated points increase less rapidly, than an overall linear relation would suggest; the extreme cases correspond to a gap in the density of the sample (showing as a near-vertical jump) or a spike of constant values (values aligned horizontally). This allows us to spot a heavy tail or a light tail, and hence skewness greater or smaller than the theoretical distribution, and so on.

Overall appearance: here's what QQ-plots look like (for particular choices of distribution) on average, but randomness tends to obscure things, especially with small samples. Note that at $n=21$ the results may be much more variable than shown there - I generated several such sets of six plots and chose a 'nice' set where you could kind of see the shape in all six plots at the same time. Sometimes straight relationships look curved, curved relationships look straight, heavy tails just look skew, and so on - with such small samples, the situation may often be much less clear. It's possible to discern more features than those (such as discreteness, for one example), but with $n=21$ even such basic features may be hard to spot; we shouldn't try to 'over-interpret' every little wiggle. As sample sizes become larger, generally speaking the plots 'stabilize' and the features become more clearly interpretable rather than representing noise. [With some very heavy-tailed distributions, the rare large outlier might prevent the picture stabilizing nicely even at quite large sample sizes.] You may also find the suggestion here useful when trying to decide how much you should worry about a particular amount of curvature or wiggliness. A more suitable guide for interpretation in general would also include displays at smaller and larger sample sizes.
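A rough way to build this kind of intuition yourself (an illustrative base-R sketch; the distributions are arbitrary choices, not the ones used for the figures in this answer):

set.seed(1)
samples <- list(normal       = rnorm(200),
                right_skewed = rexp(200),       # bends upward at the right end
                heavy_tailed = rt(200, df = 3)) # S-shape, both tails deviate
par(mfrow = c(1, 3))
for (nm in names(samples)) {
  qqnorm(samples[[nm]], main = nm)
  qqline(samples[[nm]])
}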
571
How to interpret a QQ plot
I made a shiny app to help interpret normal QQ plot. Try this link. In this app, you can adjust the skewness, tailedness (kurtosis) and modality of data and you can see how the histogram and QQ plot change. Conversely, you can use it in a way that given the pattern of QQ plot, then check how the skewness etc should be. For further details, see the documentation therein. I realized that I don't have enough free space to provide this app online. As request, I will provide all three code chunks: sample.R, server.R and ui.R here. Those who are interested in running this app may just load these files into Rstudio then run it on your own PC. The sample.R file: # Compute the positive part of a real number x, # which is $\max(x, 0)$. positive_part <- function(x) {ifelse(x > 0, x, 0)} # This function generates n data points from some # unimodal population. # Input: ---------------------------------------------------- # n: sample size; # mu: the mode of the population, default value is 0. # skewness: the parameter that reflects the skewness of the # distribution, note it is not # the exact skewness defined in statistics textbook, # the default value is 0. # tailedness: the parameter that reflects the tailedness # of the distribution, note it is # not the exact kurtosis defined in textbook, # the default value is 0. # When all arguments take their default values, the data will # be generated from standard # normal distribution. random_sample <- function(n, mu = 0, skewness = 0, § tailedness = 0){ sigma = 1 # The sampling scheme resembles the rejection sampling. # For each step, an initial data point # was proposed, and it will be rejected or accepted based on # the weights determined by the # skewness and tailedness of input. reject_skewness <- function(x){ scale = 1 # if `skewness` > 0 (means data are right-skewed), # then small values of x will be rejected # with higher probability. l <- exp(-scale * skewness * x) l/(1 + l) } reject_tailedness <- function(x){ scale = 1 # if `tailedness` < 0 (means data are lightly-tailed), # then big values of x will be rejected with # higher probability. l <- exp(-scale * tailedness * abs(x)) l/(1 + l) } # w is another layer option to control the tailedness, the # higher the w is, the data will be # more heavily-tailed. w = positive_part((1 - exp(-0.5 * tailedness)))/(1 + exp(-0.5 * tailedness)) filter <- function(x){ # The proposed data points will be accepted only if it # satified the following condition, # in which way we controlled the skewness and tailedness of # data. (For example, the # proposed data point will be rejected more frequently if it # has higher skewness or # tailedness.) accept <- runif(length(x)) > reject_tailedness(x) * reject_skewness(x) x[accept] } result <- filter(mu + sigma * ((1 - w) * rnorm(n) + w * rt(n, 5))) # Keep generating data points until the length of data vector # reaches n. while (length(result) < n) { result <- c(result, filter(mu + sigma * ((1 - w) * rnorm(n) + w * rt(n, 5)))) } result[1:n] } multimodal <- function(n, Mu, skewness = 0, tailedness = 0) { # Deal with the bimodal case. mumu <- as.numeric(Mu %*% rmultinom(n, 1, rep(1, length(Mu)))) mumu + random_sample(n, skewness = skewness, tailedness = tailedness) } The server.R file: library(shiny) # Need 'ggplot2' package to get a better aesthetic effect. library(ggplot2) # The 'sample.R' source code is used to generate data to be # plotted, based on the input skewness, # tailedness and modality. For more information, see the source # code in 'sample.R' code. 
source("sample.R") shinyServer(function(input, output) { # We generate 10000 data points from the distribution which # reflects the specification of skewness, # tailedness and modality. n = 10000 # 'scale' is a parameter that controls the skewness and # tailedness. scale = 1000 # The `reactive` function is a trick to accelerate the app, # which enables us only generate the data # once to plot two plots. The generated sample was stored in # the `data` object to be called later. data <- reactive({ # For `Unimodal` choice, we fix the mode at 0. if (input$modality == "Unimodal") {mu = 0} # For `Bimodal` choice, we fix the two modes at -2 and 2. if (input$modality == "Bimodal") {mu = c(-2, 2)} # Details will be explained in `sample.R` file. sample1 <- multimodal(n, mu, skewness = scale * input$skewness, tailedness = scale * input$kurtosis) data.frame(x = sample1)}) output$histogram <- renderPlot({ # Plot the histogram. ggplot(data(), aes(x = x)) + geom_histogram(aes(y = ..density..), binwidth = .5, colour = "black", fill = "white") + xlim(-6, 6) + # Overlay the density curve. geom_density(alpha = .5, fill = "blue") + ggtitle("Histogram of Data") + theme(plot.title = element_text(lineheight = .8, face = "bold")) }) output$qqplot <- renderPlot({ # Plot the QQ plot. ggplot(data(), aes(sample = x)) + stat_qq() + ggtitle("QQplot of Data") + theme(plot.title = element_text(lineheight=.8, face = "bold")) }) }) Finally, the ui.R file: library(shiny) # Define UI for application that helps students interpret the # pattern of (normal) QQ plots. # By using this app, we can show students the different patterns # of QQ plots (and the histograms, # for completeness) for different type of data distributions. # For example, left skewed heavy tailed # data, etc. # This app can be (and is encouraged to be) used in a reversed # way, namely, show the QQ plot to the # students first, then tell them based on the pattern of the QQ # plot, the data is right skewed, bimodal, # heavy-tailed, etc. shinyUI(fluidPage( # Application title titlePanel("Interpreting Normal QQ Plots"), sidebarLayout( sidebarPanel( # The first slider can control the skewness of input data. # "-1" indicates the most left-skewed # case while "1" indicates the most right-skewed case. sliderInput("skewness", "Skewness", min = -1, max = 1, value = 0, step = 0.1, ticks = FALSE), # The second slider can control the skewness of input data. # "-1" indicates the most light tail # case while "1" indicates the most heavy tail case. sliderInput("kurtosis", "Tailedness", min = -1, max = 1, value = 0, step = 0.1, ticks = FALSE), # This selectbox allows user to choose the number of modes # of data, two options are provided: # "Unimodal" and "Bimodal". selectInput("modality", label = "Modality", choices = c("Unimodal" = "Unimodal", "Bimodal" = "Bimodal"), selected = "Unimodal"), br(), # The following helper information will be shown on the # user interface to give necessary # information to help users understand sliders. helpText(p("The skewness of data is controlled by moving the", strong("Skewness"), "slider,", "the left side means left skewed while the right side means right skewed."), p("The tailedness of data is controlled by moving the", strong("Tailedness"), "slider,", "the left side means light tailed while the right side means heavy tailed."), p("The modality of data is controlled by selecting the modality from", strong("Modality"), "select box.") ) ), # The main panel outputs two plots. 
One plot is the histogram # of data (with the non-parametric density # curve overlaid), to get a better visualization, we restricted # the range of x-axis to -6 to 6 so # that part of the data will not be shown when heavy-tailed # input is chosen. The other plot is the # QQ plot of data, as convention, the x-axis is the theoretical # quantiles for standard normal distri- # bution and the y-axis is the sample quantiles of data. mainPanel( plotOutput("histogram"), plotOutput("qqplot") ) ) ) )
572
How to interpret a QQ plot
A very helpful (and intuitive) explanation is given by Prof. Philippe Rigollet in the MIT MOOC course 18.650 Statistics for Applications, Fall 2016 - see the video at about 45 minutes: https://www.youtube.com/watch?v=vMaKx9fmJHE I have crudely copied his diagram, which I keep in my notes as I find it very useful. In example 1, in the top-left diagram, we see that in the right tail the empirical (or sample) quantile is less than the theoretical quantile: $Q_e < Q_t$. This can be interpreted using the probability density functions. For the same $\alpha$ value, the empirical quantile is to the left of the theoretical quantile, which means that the right tail of the empirical distribution is "lighter" than the right tail of the theoretical distribution, i.e. it falls faster towards values close to zero.
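A small numerical version of the same idea (an illustrative sketch; the uniform sample is just an arbitrary light-tailed example): standardise the sample and compare an upper sample quantile with the corresponding normal quantile.

set.seed(1)
x <- runif(1000)                # lighter right tail than the normal
z <- (x - mean(x)) / sd(x)
quantile(z, 0.99)               # empirical quantile, about 1.7
qnorm(0.99)                     # theoretical quantile, about 2.33
# Q_e < Q_t here, matching the "lighter right tail" reading described above.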
573
How to interpret a QQ plot
Since this thread has been deemed to be a definitive "how to interpret the normal q-q plot" StackExchange post, I would like to point readers to a nice, precise mathematical relationship between the normal q-q plot and the excess kurtosis statistic. Here it is: https://stats.stackexchange.com/a/354076/102879 A brief (and too simplified) summary is given as follows (see the link for more precise mathematical statements): You can actually see excess kurtosis in the normal q-q plot as the average distance between the data quantiles and the corresponding theoretical normal quantiles, weighted by distance from data to the mean. Thus, when the absolute values in the tails of the q-q plot generally deviate from the expected normal values greatly in the extreme directions, you have positive excess kurtosis. Because kurtosis is the average of these deviations weighted by distances from the mean, the values near the center of the q-q plot have little impact on kurtosis. Hence, excess kurtosis is not related to the center of the distribution, where the "peak" is. Rather, excess kurtosis is almost entirely determined by the comparison of the tails of the data distribution to the normal distribution.
574
How should I transform non-negative data including zeros?
It seems to me that the most appropriate choice of transformation is contingent on the model and the context. The '0' point can arise for several different reasons, each of which may have to be treated differently:

- Truncation (as in Robin's example): use appropriate models (e.g., mixtures, survival models, etc.)
- Missing data: impute data or drop observations if appropriate.
- Natural zero point (e.g., income levels; an unemployed person has zero income): transform as needed.
- Sensitivity of the measuring instrument: perhaps add a small amount to the data?

I am not really offering an answer, as I suspect there is no universal 'correct' transformation when you have zeros.
575
How should I transform non-negative data including zeros?
No-one mentioned the inverse hyperbolic sine transformation. So for completeness I'm adding it here. This is an alternative to the Box-Cox transformations and is defined by \begin{equation} f(y,\theta) = \text{sinh}^{-1}(\theta y)/\theta = \log[\theta y + (\theta^2y^2+1)^{1/2}]/\theta, \end{equation} where $\theta>0$. For any value of $\theta$, zero maps to zero. There is also a two parameter version allowing a shift, just as with the two-parameter BC transformation. Burbidge, Magee and Robb (1988) discuss the IHS transformation including estimation of $\theta$. The IHS transformation works with data defined on the whole real line including negative values and zeros. For large values of $y$ it behaves like a log transformation, regardless of the value of $\theta$ (except 0). The limiting case as $\theta\rightarrow0$ gives $f(y,\theta)\rightarrow y$. It looks to me like the IHS transformation should be a lot better known than it is.
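A minimal sketch in R (theta is taken as given here; in practice it would be estimated, e.g. as in Burbidge, Magee and Robb):

ihs     <- function(y, theta) asinh(theta * y) / theta   # the IHS transform
ihs_inv <- function(z, theta) sinh(theta * z) / theta    # its inverse

ihs(c(-1000, 0, 1, 1000), theta = 1)   # zero maps to zero; negatives are fine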
576
How should I transform non-negative data including zeros?
A useful approach when the variable is used as an independent factor in regression is to replace it by two variables: one is a binary indicator of whether it is zero and the other is the value of the original variable or a re-expression of it, such as its logarithm. This technique is discussed in Hosmer & Lemeshow's book on logistic regression (and in other places, I'm sure). Truncated probability plots of the positive part of the original variable are useful for identifying an appropriate re-expression. (See the analysis at https://stats.stackexchange.com/a/30749/919 for examples.) When the variable is the dependent one in a linear model, censored regression (like Tobit) can be useful, again obviating the need to produce a started logarithm. This technique is common among econometricians.
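A rough sketch of the "indicator plus log of the positive part" idea (the variable names and simulated data are purely illustrative, not taken from Hosmer & Lemeshow):

set.seed(1)
x  <- rgamma(200, shape = 2) * rbinom(200, 1, 0.7)  # a predictor with exact zeros
z  <- as.numeric(x == 0)                            # zero indicator
lx <- ifelse(x > 0, log(x), 0)                      # log of positive part; value is arbitrary where z == 1
y  <- 1 + 0.5 * z + 2 * lx + rnorm(200)
summary(lm(y ~ z + lx))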
577
How should I transform non-negative data including zeros?
The log transforms with shifts are special cases of the Box-Cox transformations: $y(\lambda_{1}, \lambda_{2}) = \begin{cases} \frac {(y+\lambda_{2})^{\lambda_1} - 1} {\lambda_{1}} & \mbox{when } \lambda_{1} \neq 0 \\ \log (y + \lambda_{2}) & \mbox{when } \lambda_{1} = 0 \end{cases}$ These are the extended form for negative values, but are also applicable to data containing zeros. Box and Cox (1964) present an algorithm to find appropriate values for the $\lambda$'s using maximum likelihood. This gives you the final transformation. A reason to prefer Box-Cox transformations is that they were developed to ensure the assumptions of the linear model. There is some work showing that even if your data cannot be transformed to normality, the estimated $\lambda$ still leads to a symmetric distribution. I'm not sure how well this addresses your data, since it could be that $\lambda = (0, 1)$, which is just the log transform you mentioned, but it may be worth estimating the required $\lambda$'s to see whether another transformation is appropriate. In R, the boxcoxfit function in the geoR package will compute the parameters for you.
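A minimal sketch of the estimation step (assuming the geoR package and toy data; note that jointly estimating both lambdas can fail for some data sets):

library(geoR)
y   <- c(0, 0, 0.5, 1, 2, 5, 10, 20, 50, 100)
lam <- boxcoxfit(y, lambda2 = TRUE)$lambda   # c(lambda1, lambda2)
z   <- if (abs(lam[1]) < 1e-6) log(y + lam[2]) else ((y + lam[2])^lam[1] - 1) / lam[1]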
578
How should I transform non-negative data including zeros?
I'm presuming that zero != missing data, as that's an entirely different question. When thinking about how to handle zeros in multiple linear regression, I tend to consider how many zeros we actually have.

Only a couple of zeros: if I have a single zero in a reasonably large data set, I tend to (1) remove the point, take logs and fit the model, and (2) add a small $c$ to the point, take logs and fit the model. Does the model fit change? What about the parameter values? If the model is fairly robust to the removal of the point, I'll go for the quick and dirty approach of adding $c$ (a rough sketch of this check is given below). You could make this procedure a bit less crude and use the Box-Cox method with shifts described in ars' answer.

Large number of zeros: if my data set contains a large number of zeros, this suggests that simple linear regression isn't the best tool for the job. Instead I would use something like mixture modelling (as suggested by Srikant and Robin).
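A rough sketch of the "only a couple of zeros" check above (toy data; the constant c = 0.5 and the variable names are illustrative only):

set.seed(1)
x <- runif(50, 1, 10)
y <- exp(0.5 + 0.3 * x + rnorm(50, sd = 0.2))
y[1] <- 0                                   # a single awkward zero

fit_drop  <- lm(log(y[-1]) ~ x[-1])         # remove the point, take logs
fit_shift <- lm(log(y + 0.5) ~ x)           # add a small c, take logs
coef(fit_drop)
coef(fit_shift)                             # do the estimates change much?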
579
How should I transform non-negative data including zeros?
If you want something quick and dirty, why not use the square root?
580
How should I transform non-negative data including zeros?
Comparing the answer provided by @RobHyndman to a log-plus-one transformation extended to negative values with the form: $$T(x) = \text{sign}(x) \cdot \log{\left(|x|+1\right)} $$ (As Nick Cox pointed out in the comments, this is known as the 'neglog' transformation.)

r = -1000:1000
l = sign(r)*log1p(abs(r))
l = l/max(l)
plot(r, l, type = "l", xlab = "Original", ylab = "Transformed",
     col = adjustcolor("red", alpha = 0.5), lwd = 3)
# We scale both to fit (-1, 1)
for(i in exp(seq(-10, 100, 10))){
  s = asinh(i*r)
  s = s / max(s)
  lines(r, s, col = adjustcolor("blue", alpha = 0.2), lwd = 3)
}
legend("topleft", c("asinh(x)", "sign(x) log(abs(x)+1)"),
       col = c("blue", "red"), lty = 1)

As you can see, as $\theta$ increases the transform looks more and more like a step function. With $\theta \approx 1$ it looks a lot like the log-plus-one transformation. And when $\theta \rightarrow 0$ it approaches a line. EDIT: Keep in mind the log transform can similarly be rescaled to an arbitrary scale, with similar results. I just wanted to show which values of $\theta$ give results similar to the previous answer. The biggest difference between the two approaches is the region near $x=0$, as we can see from their derivatives.
581
How should I transform non-negative data including zeros?
I assume you have continuous data. If the data include zeros, this means you have a spike at zero, which may be due to some particular aspect of your data. This appears for example in wind energy: wind below 2 m/s produces zero power (this is called the cut-in speed) and wind over (something around) 25 m/s also produces zero power (for safety reasons; this is called the cut-off speed). So while the distribution of produced wind energy seems continuous, there is a spike at zero. My solution: in this case, I suggest treating the zeros separately, by working with a mixture of the spike at zero and the model you planned to use for the part of the distribution that is continuous (w.r.t. Lebesgue measure).
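A minimal two-part ("spike at zero") sketch in R, not the exact wind-energy model: a logistic regression for whether output is zero, plus a Gamma GLM for the continuous positive part. All variable names and the simulated data are illustrative.

set.seed(1)
wind  <- runif(500, 0, 30)
power <- ifelse(wind < 2 | wind > 25, 0, (wind - 2)^1.5 + rexp(500, rate = 1/5))
d     <- data.frame(wind, power, positive = as.numeric(power > 0))

fit_zero <- glm(positive ~ wind + I(wind^2), family = binomial, data = d)   # the spike
fit_pos  <- glm(power ~ wind + I(wind^2), family = Gamma(link = "log"),
                data = subset(d, positive == 1))                            # the continuous part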
582
How should I transform non-negative data including zeros?
Since the two-parameter Box-Cox fit has been proposed, here's some R to fit the input data, run an arbitrary function on it (e.g. time series forecasting), and then return the inverted output:

# Two-parameter Box-Cox function
boxcox.f <- function(x, lambda1, lambda2) {
  if (lambda1 != 0) {
    return(((x + lambda2) ^ lambda1 - 1) / lambda1)
  } else {
    return(log(x + lambda2))
  }
}

# Two-parameter inverse Box-Cox function
boxcox.inv <- function(x, lambda1, lambda2) {
  if (lambda1 != 0) {
    return((lambda1 * x + 1) ^ (1 / lambda1) - lambda2)
  } else {
    return(exp(x) - lambda2)
  }
}

# Function to Box-Cox transform x, apply function g,
# and return inverted Box-Cox output y
boxcox.fit.apply <- function(x, g) {
  require(geoR)
  require(plyr)

  # Fit lambdas
  t <- try(lambda.pair <- boxcoxfit(x, lambda2 = T)$lambda)

  # Estimating both lambdas sometimes fails; if so, estimate lambda1 only
  if (inherits(t, "try-error")) {
    lambda1 <- boxcoxfit(x)$lambda
    lambda2 <- 0
  } else {
    lambda1 <- lambda.pair[1]
    lambda2 <- lambda.pair[2]
  }

  x.boxcox <- boxcox.f(x, lambda1, lambda2)

  # Apply function g to x.boxcox. This should return data similar to x (e.g. ts)
  y <- aaply(x.boxcox, 1, g)

  return(boxcox.inv(y, lambda1, lambda2))
}
583
How should I transform non-negative data including zeros?
The Yeo-Johnson power transformation discussed here has excellent properties designed to handle zeros and negatives while building on the strengths of the Box-Cox power transformation. This is what I typically go to when I am dealing with zeros or negative data. Here is a summary of transformations with pros/cons to illustrate why Yeo-Johnson is preferable.

Log
Pros: Does well with positive data.
Cons: Does not handle zeros.

> log(0)
[1] -Inf

Log Plus 1
Pros: The plus-1 offset adds the ability to handle zeros in addition to positive data.
Cons: Fails with negative data.

> log1p(-1)
[1] -Inf
> log1p(-2)
[1] NaN
Warning message:
In log1p(-2) : NaNs produced

Square Root
Pros: Uses a power transformation that can handle zeros and positive data.
Cons: Fails with negative data.

> sqrt(-1)
[1] NaN
Warning message:
In sqrt(-1) : NaNs produced

Box-Cox
R Code:

box_cox <- function(x, lambda) {
  eps <- 0.00001
  if (abs(lambda) < eps) log(x) else (x ^ lambda - 1) / lambda
}

Pros: Enables scaled power transformations.
Cons: Suffers from issues with zeros and negatives (i.e. it can only handle positive data).

> box_cox(0, lambda = 0)
[1] -Inf
> box_cox(0, lambda = -0.5)
[1] -Inf
> box_cox(-1, lambda = 0.5)
[1] NaN

Yeo-Johnson
R Code:

yeo_johnson <- function(x, lambda) {
  eps <- .000001
  not_neg <- which(x >= 0)
  is_neg  <- which(x < 0)

  not_neg_trans <- function(x, lambda) {
    if (abs(lambda) < eps) log(x + 1) else ((x + 1) ^ lambda - 1) / lambda
  }

  neg_trans <- function(x, lambda) {
    if (abs(lambda - 2) < eps) - log(-x + 1) else - ((-x + 1) ^ (2 - lambda) - 1) / (2 - lambda)
  }

  x[not_neg] <- not_neg_trans(x[not_neg], lambda)
  x[is_neg]  <- neg_trans(x[is_neg], lambda)
  return(x)
}

Pros: Can handle positive, zero, and negative data.
Cons: None that I can think of. Properties are very similar to Box-Cox, but it can handle zero and negative data.

> yeo_johnson(0, lambda = 0)
[1] 0
> yeo_johnson(0, lambda = -0.5)
[1] 0
> yeo_johnson(-1, lambda = 0.5)
[1] -1.218951
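A quick usage check of the yeo_johnson() function above on mixed-sign data (the input vector is arbitrary; in practice lambda would be chosen by maximum likelihood, just as with Box-Cox):

x <- c(-5, -1, 0, 1, 5, 50)
yeo_johnson(x, lambda = 0.5)   # finite values for negatives, zero and positives alike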
584
How should I transform non-negative data including zeros?
Suppose Y is the amount of money each American spends on a new car in a given year (total purchase price). Y will spike at 0; will have no values at all between 0 and about 12,000; and will take other values mostly in the teens, twenties and thirties of thousands. Predictors would be proxies for the level of need and/or interest in making such a purchase. Need or interest could hardly be said to be zero for individuals who made no purchase; on these scales non-purchasers would be much closer to purchasers than Y or even the log of Y would suggest. In a case much like this but in health care, I found that the most accurate predictions, judged by test-set/training-set crossvalidation, were obtained by, in increasing order, Logistic regression on a binary version of Y, OLS on Y, Ordinal regression (PLUM) on Y binned into 5 categories (so as to divide purchasers into 4 equal-size groups), Multinomial logistic regression on Y binned into 5 categories, OLS on the log(10) of Y (I didn't think of trying the cube root), and OLS on Y binned into 5 categories. Some will recoil at this categorization of a continuous dependent variable. But although it sacrifices some information, categorizing seems to help by restoring an important underlying aspect of the situation -- again, that the "zeroes" are much more similar to the rest than Y would indicate.
585
How should I transform non-negative data including zeros?
To clarify how to deal with the log of zero in regression models, we have written a pedagogical paper explaining the best solution and the common mistakes people make in practice. We also came out with a new solution to tackle this issue. You can find the paper by clicking here: https://ssrn.com/abstract=3444996

First, we think that one should ask why a log transformation is used. In regression models, a log-log relationship leads to the identification of an elasticity. Indeed, if $\log(y) = \beta \log(x) + \varepsilon$, then $\beta$ corresponds to the elasticity of $y$ to $x$. The log can also linearize a theoretical model. It can also be used to reduce heteroskedasticity. However, in practice, it often occurs that the variable taken in logs contains non-positive values.

A solution that is often proposed consists of adding a positive constant c to all observations $Y$ so that $Y + c > 0$. However, contrary to linear regressions, log-linear regressions are not robust to linear transformations of the dependent variable. This is due to the non-linear nature of the log function. The log transformation expands low values and squeezes high values. Therefore, adding a constant will distort the (linear) relationship between zeros and other observations in the data. The magnitude of the bias generated by the constant actually depends on the range of observations in the data. For that reason, adding the smallest possible constant is not necessarily the best solution. In our article, we actually provide an example where adding very small constants produces the highest bias, and we derive an expression for the bias.

Actually, Poisson Pseudo Maximum Likelihood (PPML) can be considered a good solution to this issue. One has to consider the following process: $y_i = a_i \exp(\alpha + x_i' \beta)$ with $E(a_i | x_i) = 1$. This process is motivated by several features. First, it provides the same interpretation of $\beta$ as a semi-log model. Second, this data generating process provides a logical rationalization of zero values in the dependent variable: this situation can arise when the multiplicative error term, $a_i$, is equal to zero. Third, estimating this model with PPML does not encounter the computational difficulty when $y_i = 0$. Under the assumption that $E(a_i|x_i) = 1$, we have $E( y_i - \exp(\alpha + x_i' \beta) | x_i) = 0$. We want to minimize the quadratic error of this moment, leading to the following first-order conditions: $\sum_{i=1}^N ( y_i - \exp(\alpha + x_i' \beta) )x_i' = 0$. These conditions are defined even when $y_i = 0$, and they are numerically equivalent to those of a Poisson model, so the model can be estimated with any standard statistical software.

Finally, we propose a new solution that is also easy to implement and that provides an unbiased estimator of $\beta$. One simply needs to estimate: $\log( y_i + \exp (\alpha + x_i' \beta)) = x_i' \beta + \eta_i $. We show that this estimator is unbiased and that it can simply be estimated with GMM with any standard statistical software. For instance, it can be estimated by executing just one line of code with Stata.

We hope that this article can help, and we'd love to get feedback from you.

Christophe Bellégo and Louis-Daniel Pape
CREST - Ecole Polytechnique - ENSAE
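For readers who want to try PPML directly, here is a minimal sketch in R (not the authors' code; the simulated data are illustrative). Because only the conditional mean is assumed, a quasi-Poisson GLM can be used even though y is continuous and contains zeros:

set.seed(1)
x <- rnorm(500)
a <- rgamma(500, shape = 2, rate = 2) * rbinom(500, size = 1, prob = 0.9) / 0.9  # mean-1 error with exact zeros
y <- a * exp(0.5 + 0.8 * x)

ppml <- glm(y ~ x, family = quasipoisson(link = "log"))
coef(ppml)   # estimates of alpha and beta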
586
How should I transform non-negative data including zeros?
I had the same problem with my data and no transformation would give a reasonable distribution, so I came up with the following idea. I would appreciate it if someone could judge whether it is worth using, as I am not a statistician.

We may adopt the assumption that an observed 0 is not a true 0: there is a hidden continuous value which we observe as zero because the low sensitivity of the measurement only produces values above 0 once a threshold is reached. What we observe then looks like a half-normal distribution, where the whole left side of the normal distribution is collapsed into one bar (x = 0) of the histogram. So maybe we can just perform the following steps (a rough sketch in code follows below):

1. Transform the variable into a dichotomous one (0 stays 0, and anything > 0 is coded as 1).
2. Find another continuous variable with a high Spearman correlation coefficient with our original variable.
3. Run a logistic regression predicting 1, with the dichotomous variable as the dependent variable and the highly correlated variable as the predictor.
4. Look at the predicted values for the observed zeros.
5. Recode the zeros in the original variable to those predicted values, leaving the original values above 0 intact (they must, however, be greater than 1).
6. Rank the original variable with the recoded zeros.
7. Normalize the ranked variable with Blom scores, vnormal((r - 3/8)/(n + 1/4); 0; 1), i.e. the standard normal quantile of $(r - 3/8)/(n + 1/4)$, where $r$ is the rank and $n$ the number of cases, or use a Tukey transformation.
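Purely to make the proposed steps concrete (a sketch, not an endorsement, and not the original poster's code), here is one way the procedure might look in Python; the auxiliary predictor `z` and the simulated data are invented for the example.

```python
# Rough sketch of the imputation-then-rank idea above, on made-up data.
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1_000
z = rng.normal(size=n)                        # auxiliary correlated variable
latent = 2 + z + rng.normal(size=n)           # hidden continuous value
y = np.where(latent > 0, latent + 1, 0.0)     # zeros below the threshold,
                                              # positive values all above 1

# Steps 1-4: dichotomize and predict P(y > 0) from the auxiliary variable.
is_pos = (y > 0).astype(int)
clf = LogisticRegression().fit(z.reshape(-1, 1), is_pos)
p_pos = clf.predict_proba(z.reshape(-1, 1))[:, 1]

# Step 5: replace zeros by their predicted probabilities (all in (0, 1),
# hence below every positive observation).
y_recoded = np.where(y == 0, p_pos, y)

# Steps 6-7: rank, then map ranks to normal scores with Blom's formula.
r = stats.rankdata(y_recoded)
y_normal = stats.norm.ppf((r - 3/8) / (n + 1/4))
```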
587
How should I transform non-negative data including zeros?
Depending on the problem's context, it may be useful to apply quantile transformations. The idea itself is simple*: given a sample $x_1, \dots, x_n$, compute for each $i \in \{1, \dots, n\}$ the empirical cumulative distribution function value $F(x_i) = c_i$, then map $c_i$ to another distribution via the quantile function $Q$ of that distribution, i.e., $Q(c_i)$.

*Assuming you don't apply any interpolation and bounding logic.

With the method out of the way, there are several caveats, features, and notes, which I list below (mostly caveats). Details can be found in the references at the end.

- These methods lack well-studied statistical properties.
- They do not necessarily maintain type 1 error, and can reduce statistical power.
- Interpretation is difficult.
- They require a large number of samples.
- They do not translate easily to multivariate data and are typically applied to marginal distributions. It may be tempting to think this transformation helps satisfy linear regression models' assumptions, but the normality assumption for linear regression concerns the conditional distribution.
- Correlations are not preserved.
- In the Gaussian case, the median of your data is transformed to zero.
- They are non-parametric.
- They are expensive to compute.
- The quantiles depend on your sample.
- The transformation is monotone and invertible.

Related questions and references:

- Quantile Transformation with Gaussian Distribution - Sklearn Implementation
- Quantile transform vs Power transformation to get normal distribution
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2921808/
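As a small illustration (not taken from the answer above), here is a Python sketch of the rank-based mapping to a normal distribution on simulated non-negative data with a spike at zero; scikit-learn's `QuantileTransformer` packages the same idea. The data and names are made up.

```python
# Sketch of a rank-based quantile transformation to the normal distribution.
import numpy as np
from scipy import stats
from sklearn.preprocessing import QuantileTransformer

rng = np.random.default_rng(2)
n = 500
y = rng.exponential(scale=3.0, size=n)
y[rng.random(n) < 0.2] = 0.0            # lump of exact zeros

# Manual version: empirical CDF values in (0, 1), then the normal quantile function.
ecdf = stats.rankdata(y, method="average") / (n + 1)
y_gauss = stats.norm.ppf(ecdf)

# Same idea as a reusable scikit-learn transformer.
qt = QuantileTransformer(output_distribution="normal", n_quantiles=n)
y_gauss_sk = qt.fit_transform(y.reshape(-1, 1)).ravel()
```

Note that all tied zeros are mapped to a single transformed value, which is exactly the "spike" caveat in the list above.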
588
Intuitive explanation for dividing by $n-1$ when calculating standard deviation?
The standard deviation calculated with a divisor of $n-1$ is a standard deviation calculated from the sample as an estimate of the standard deviation of the population from which the sample was drawn. Because the observed values fall, on average, closer to the sample mean than to the population mean, the standard deviation which is calculated using deviations from the sample mean underestimates the desired standard deviation of the population. Using $n-1$ instead of $n$ as the divisor corrects for that by making the result a little bit bigger.

Note that the correction has a larger proportional effect when $n$ is small than when it is large, which is what we want, because when $n$ is larger the sample mean is likely to be a good estimator of the population mean. When the sample is the whole population we use the standard deviation with $n$ as the divisor because the sample mean is the population mean.

(I note parenthetically that nothing that starts with "second moment recentered around a known, definite mean" is going to fulfil the questioner's request for an intuitive explanation.)
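A quick simulation, with arbitrary numbers of my own choosing, makes the underestimation and the effect of the correction visible:

```python
# Squared deviations around the sample mean are on average smaller than
# those around the population mean; the n/(n-1) factor undoes the shortfall.
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, n, reps = 10.0, 2.0, 5, 100_000
x = rng.normal(mu, sigma, size=(reps, n))

msd_sample_mean = ((x - x.mean(axis=1, keepdims=True)) ** 2).mean(axis=1)
msd_true_mean = ((x - mu) ** 2).mean(axis=1)

print(msd_true_mean.mean())                    # ~ sigma^2 = 4.0
print(msd_sample_mean.mean())                  # ~ sigma^2 * (n-1)/n = 3.2
print(msd_sample_mean.mean() * n / (n - 1))    # ~ 4.0 after the correction
```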
589
Intuitive explanation for dividing by $n-1$ when calculating standard deviation?
By definition, variance is calculated by taking the sum of squared differences from the mean and dividing by the size. We have the general formula $\sigma^2= \frac{\sum_{i=1}^{N}(X_i-\mu)^2}{N}$ where $\mu$ is the mean and $N$ is the size of the population.

According to this definition, the variance of a sample (say, sample $t$) must also be calculated in this way: $\sigma^2_t= \frac{\sum_{i=1}^{n}(X_i-\overline{X})^2}{n}$ where $\overline{X}$ is the mean and $n$ is the size of this small sample. However, by the sample variance $S^2$ we mean an estimator of the population variance $\sigma^2$. How can we estimate $\sigma^2$ using only the values from the sample?

According to the formulas above, the random variable $X$ deviates from the sample mean $\overline{X}$ with variance $\sigma^2_t$. The sample mean $\overline{X}$ in turn deviates from $\mu$ with variance $\frac{\sigma^2}{n}$, because the sample mean takes different values from sample to sample: it is a random variable with mean $\mu$ and variance $\frac{\sigma^2}{n}$. (This is easy to prove.) Therefore, roughly, $X$ should deviate from $\mu$ with a variance that involves both, so adding the two gives $\sigma^2=\sigma^2_t+\frac{\sigma^2}{n}$. Solving this, we get $\sigma^2=\sigma^2_t \times\frac{n}{n-1}$. Replacing $\sigma^2_t$ gives our estimator of the population variance: $S^2= \frac{\sum_{i=1}^{n}(X_i-\overline{X})^2}{n-1}$. One can also prove that $E[S^2]=\sigma^2$ holds exactly.
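The key fact quoted above, $\mathrm{Var}(\overline{X}) = \sigma^2/n$, is also easy to confirm numerically; the parameters below are arbitrary:

```python
# The sample mean is a random variable with variance sigma^2 / n.
import numpy as np

rng = np.random.default_rng(4)
mu, sigma, n, reps = 0.0, 3.0, 8, 200_000
xbar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)

print(xbar.var())   # ~ sigma^2 / n = 9 / 8 = 1.125
```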
590
Intuitive explanation for dividing by $n-1$ when calculating standard deviation?
A common one is that the definition of variance (of a distribution) is the second moment recentered around a known, definite mean, whereas the estimator uses an estimated mean. This loss of a degree of freedom (given the mean, you can reconstitute the dataset with knowledge of just $n-1$ of the data values) requires the use of $n-1$ rather than $n$ to "adjust" the result. Such an explanation is consistent with the estimated variances in ANOVA and variance components analysis. It's really just a special case. The need to make some adjustment that inflates the variance can, I think, be made intuitively clear with a valid argument that isn't just ex post facto hand-waving. (I recollect that Student may have made such an argument in his 1908 paper on the t-test.) Why the adjustment to the variance should be exactly a factor of $n/(n-1)$ is harder to justify, especially when you consider that the adjusted SD is not an unbiased estimator. (It is merely the square root of an unbiased estimator of the variance. Being unbiased usually does not survive a nonlinear transformation.) So, in fact, the correct adjustment to the SD to remove its bias is not a factor of $\sqrt{n/(n-1)}$ at all! Some introductory textbooks don't even bother introducing the adjusted sd: they teach one formula (divide by $n$). I first reacted negatively to that when teaching from such a book but grew to appreciate the wisdom: to focus on the concepts and applications, the authors strip out all inessential mathematical niceties. It turns out that nothing is hurt and nobody is misled.
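To make the point about the SD concrete: for normal samples the exact unbiasing constant is the classical $c_4(n) = \sqrt{2/(n-1)}\,\Gamma(n/2)/\Gamma((n-1)/2)$, not $\sqrt{n/(n-1)}$. That constant is a standard result rather than something stated in the answer above; a quick simulation, with arbitrary settings, shows both the bias and the correction:

```python
# The n-1 divisor does not make the *standard deviation* unbiased.
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(5)
sigma, n, reps = 1.0, 5, 200_000
s = rng.normal(0.0, sigma, size=(reps, n)).std(axis=1, ddof=1)

c4 = np.exp(0.5 * np.log(2.0 / (n - 1)) + gammaln(n / 2) - gammaln((n - 1) / 2))
print(s.mean())        # about 0.94, noticeably below sigma = 1
print(s.mean() / c4)   # ~ 1.0: for normal data, c4 removes the bias exactly
```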
591
Intuitive explanation for dividing by $n-1$ when calculating standard deviation?
This is pure intuition, but the simplest answer is that it is a correction made so that the standard deviation of a one-element sample is undefined rather than 0.
592
Intuitive explanation for dividing by $n-1$ when calculating standard deviation?
It is well-known (or easily proved) that the quadratic $\alpha z^2 + 2\beta z + \gamma$ has an extremum at $z = -\frac{\beta}{\alpha}$ which point is midway between the roots $\frac{-\beta - \sqrt{\beta^2-\alpha\gamma}}{\alpha}$ and $\frac{-\beta + \sqrt{\beta^2-\alpha\gamma}}{\alpha}$ of the quadratic. This shows that, for any given $n$ real numbers $x_1, x_2, \ldots, x_n$, the quantity $$G(a) = \sum_{i=1}^n (x_i-a)^2 = \left(\sum_{i=1}^n x_i^2\right) -2a\left(\sum_{i=1}^n x_i\right) + na^2,$$ has minimum value when $\displaystyle a = \frac 1n \sum_{i=1}^n x_i =\bar{x}$. Now, suppose that the $x_i$ are a sample of size $n$ from a distribution with unknown mean $\mu$ and unknown variance $\sigma^2$. We can estimate $\mu$ as $\frac 1n \sum_{i=1}^n x_i = \bar{x}$ which is easy enough to calculate, but an attempt to estimate $\sigma^2$ as $\frac 1n \sum_{i=1}^n (x_i-\mu)^2 = n^{-1}G(\mu)$ encounters the problem that we don't know $\mu$. We can, of course, readily compute $G(\bar{x})$ and we know that $G(\mu) \geq G(\bar{x})$, but how much larger is $G(\mu)$? The answer is that $G(\mu)$ is larger than $G(\bar{x})$ by a factor of approximately $\frac{n}{n-1}$, that is, $$G(\mu) \approx \frac{n}{n-1}G(\bar{x})\tag{1}$$ and so the estimate $\displaystyle n^{-1}G(\mu)= \frac 1n\sum_{i=1}^n(x_i-\mu)^2$ for the variance of the distribution can be approximated by $\displaystyle \frac{1}{n-1}G(\bar{x}) = \frac{1}{n-1}\sum_{i=1}^n (x_i-\bar{x})^2.$ So, what is an intuitive explanation of $(1)$? Well, we have that \begin{align} G(\mu) &= \sum_{i=1}^n (x_i-\mu)^2\\ &= \sum_{i=1}^n (x_i-\bar{x} + \bar{x}-\mu)^2\\ &= \sum_{i=1}^n \left((x_i-\bar{x})^2 + (\bar{x}-\mu)^2 + 2(x_i-\bar{x})(\bar{x}-\mu)\right)\\ &= G(\bar{x}) + n(\bar{x}-\mu)^2 + (\bar{x}-\mu)\sum_{i=1}^n(x_i-\bar{x})\\ &= G(\bar{x}) + n(\bar{x}-\mu)^2 \tag{2} \end{align} since $\sum_{i=1}^n (x_i-\bar{x}) = n\bar{x}-n\bar{x} = 0$. Now, \begin{align} n(\bar{x}-\mu)^2 &= n\frac{1}{n^2}\left(\sum_{i=1}^n(x_i-\mu)\right)^2\\ &= \frac 1n \sum_{i=1}^n(x_i-\mu)^2 + \frac 2n \sum_{i=1}^n\sum_{j=i+1}^n(x_i-\mu)(x_j-\mu)\\ &= \frac 1n G(\mu) + \frac 2n \sum_{i=1}^n\sum_{j=i+1}^n(x_i-\mu)(x_j-\mu)\tag{3} \end{align} Except when we have an extraordinarily unusual sample in which all the $x_i$ are larger than $\mu$ (or they are all smaller than $\mu$), the summands $(x_i-\mu)(x_j-\mu)$ in the double sum on the right side of $(3)$ take on positive as well as negative values and thus a lot of cancellations occur. Thus, the double sum can be expected to have small absolute value, and we simply ignore it in comparison to the $\frac 1nG(\mu)$ term on the right side of $(3)$. Thus, $(2)$ becomes $$G(\mu) \approx G(\bar{x}) + \frac 1nG(\mu) \Longrightarrow G(\mu) \approx \frac{n}{n-1}G(\bar{x})$$ as claimed in $(1)$.
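A numerical check of approximation $(1)$, with arbitrary simulation settings (not part of the answer above):

```python
# On average, G(mu) exceeds G(xbar) by a factor of about n / (n - 1).
import numpy as np

rng = np.random.default_rng(6)
mu, sigma, n, reps = 2.0, 1.5, 6, 200_000
x = rng.normal(mu, sigma, size=(reps, n))

G_mu = ((x - mu) ** 2).sum(axis=1)
G_xbar = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

print(G_mu.mean() / G_xbar.mean())   # ~ n / (n - 1) = 1.2
```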
593
Intuitive explanation for dividing by $n-1$ when calculating standard deviation?
Why divide by $n-1$ rather than $n$? Because it is customary, and results in an unbiased estimate of the variance. However, it results in a biased (low) estimate of the standard deviation, as can be seen by applying Jensen's inequality to the concave function, square root. So what's so great about having an unbiased estimator? It does not necessarily minimize mean square error. The MLE for a Normal distribution is to divide by $n$ rather than $n-1$. Teach your students to think, rather than to regurgitate and mindlessly apply antiquated notions from a century ago.
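To put a number on the MSE remark: for normal data the divisor that minimizes the mean squared error of the variance estimate turns out to be $n+1$, a known result (not stated in the answer above) that is easy to verify by simulation with arbitrary settings:

```python
# MSE comparison of the divisors n - 1 (unbiased), n (MLE), and n + 1.
import numpy as np

rng = np.random.default_rng(7)
sigma2, n, reps = 4.0, 5, 300_000
x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

for divisor in (n - 1, n, n + 1):
    mse = np.mean((ss / divisor - sigma2) ** 2)
    print(divisor, mse)   # the smallest MSE occurs at divisor n + 1
```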
594
Intuitive explanation for dividing by $n-1$ when calculating standard deviation?
You can gain a deeper understanding of the $n-1$ term through geometry alone (not just why it's not $n$, but why it takes exactly this form), but you may first need to build up your intuition to cope with $n$-dimensional geometry. From there, however, it's a small step to a deeper understanding of degrees of freedom in linear models (i.e. model df & residual df). I think there's little doubt that Fisher thought this way. Here's a book that builds it up gradually: Saville DJ, Wood GR. Statistical methods: the geometric approach. 3rd edition. New York: Springer-Verlag; 1991. 560 pages. ISBN 9780387975177. (Yes, 560 pages. I did say gradually.)
595
Intuitive explanation for dividing by $n-1$ when calculating standard deviation?
The estimator of the population variance that divides by n is biased when applied to a sample of the population. In order to adjust for that bias, one needs to divide by n-1 instead of n. One can show mathematically that the sample variance with divisor n-1 is an unbiased estimator of the population variance. A formal proof is provided here: https://economictheoryblog.com/2012/06/28/latexlatexs2/

Initially it was the mathematical correctness that led to the formula, I suppose. However, if one wants to add intuition to the formula, the suggestions already mentioned appear reasonable. First, observations of a sample are on average closer to the sample mean than to the population mean. The variance estimator makes use of the sample mean and as a consequence underestimates the true variance of the population. Dividing by n-1 instead of n corrects for that bias. Furthermore, dividing by n-1 makes the variance of a one-element sample undefined rather than zero.
596
Intuitive explanation for dividing by $n-1$ when calculating standard deviation?
At the suggestion of whuber, this answer has been copied over from another similar question.

Bessel's correction is adopted to correct for bias in using the sample variance as an estimator of the true variance. The bias in the uncorrected statistic occurs because the sample mean is closer to the middle of the observations than the true mean, and so the squared deviations around the sample mean systematically underestimate the squared deviations around the true mean.

To see this phenomenon algebraically, just derive the expected value of a sample variance without Bessel's correction and see what it looks like. Letting $S_*^2$ denote the uncorrected sample variance (using $n$ as the denominator) we have:

$$\begin{equation} \begin{aligned} S_*^2 &= \frac{1}{n} \sum_{i=1}^n (X_i - \bar{X})^2 \\[8pt] &= \frac{1}{n} \sum_{i=1}^n (X_i^2 - 2 \bar{X} X_i + \bar{X}^2) \\[8pt] &= \frac{1}{n} \Bigg( \sum_{i=1}^n X_i^2 - 2 \bar{X} \sum_{i=1}^n X_i + n \bar{X}^2 \Bigg) \\[8pt] &= \frac{1}{n} \Bigg( \sum_{i=1}^n X_i^2 - 2 n \bar{X}^2 + n \bar{X}^2 \Bigg) \\[8pt] &= \frac{1}{n} \Bigg( \sum_{i=1}^n X_i^2 - n \bar{X}^2 \Bigg) \\[8pt] &= \frac{1}{n} \sum_{i=1}^n X_i^2 - \bar{X}^2. \end{aligned} \end{equation}$$

Taking expectations yields:

$$\begin{equation} \begin{aligned} \mathbb{E}(S_*^2) &= \frac{1}{n} \sum_{i=1}^n \mathbb{E}(X_i^2) - \mathbb{E} (\bar{X}^2) \\[8pt] &= \frac{1}{n} \sum_{i=1}^n (\mu^2 + \sigma^2) - \Big(\mu^2 + \frac{\sigma^2}{n}\Big) \\[8pt] &= (\mu^2 + \sigma^2) - \Big(\mu^2 + \frac{\sigma^2}{n}\Big) \\[8pt] &= \sigma^2 - \frac{\sigma^2}{n} \\[8pt] &= \frac{n-1}{n} \cdot \sigma^2 \end{aligned} \end{equation}$$

So you can see that the uncorrected sample variance statistic underestimates the true variance $\sigma^2$. Bessel's correction replaces the denominator with $n-1$, which yields an unbiased estimator. In regression analysis this is extended to the more general case where the estimated mean is a linear function of multiple predictors, and in this latter case the denominator is reduced further, to reflect the lower number of degrees of freedom.
597
Intuitive explanation for dividing by $n-1$ when calculating standard deviation?
Sample variance can be thought of as the exact mean of the pairwise "energies" $(x_i-x_j)^2/2$ between all sample points. The definition of sample variance then becomes $$ s^2 = \frac{2}{n(n-1)}\sum_{i< j}\frac{(x_i-x_j)^2}{2} = \frac{1}{n-1}\sum_{i=1}^n(x_i-\bar{x})^2 .$$ This also agrees with defining the variance of a random variable as the expectation of the pairwise energy: let $X$ and $Y$ be independent random variables with the same distribution; then $$ V(X) = E\left(\frac{(X-Y)^2}{2}\right) = E((X-E(X))^2) . $$ Going from the random-variable definition of variance to the definition of sample variance is a matter of estimating an expectation by a mean, which can be justified by the philosophical principle of typicality: the sample is a typical representation of the distribution. (Note, this is related to, but not the same as, estimation by moments.)
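The first identity is easy to verify numerically on an arbitrary sample:

```python
# The mean of (x_i - x_j)^2 / 2 over all pairs i < j equals the n-1 sample variance.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(8)
x = rng.normal(size=10)

pairwise_mean = np.mean([(a - b) ** 2 / 2 for a, b in combinations(x, 2)])
print(pairwise_mean)    # identical (up to rounding) ...
print(x.var(ddof=1))    # ... to the n-1 sample variance
```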
598
Intuitive explanation for dividing by $n-1$ when calculating standard deviation?
The sample mean is defined as $\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$, which is quite intuitive. But the sample variance is $S^2 = \frac{1}{n-1}\sum_{i=1}^{n} (X_i - \bar{X})^2$. Where did the $n - 1$ come from?

To answer this question, we must go back to the definition of an unbiased estimator. An unbiased estimator is one whose expectation equals the quantity being estimated. The sample mean is an unbiased estimator. To see why: $$ E[\bar{X}] = \frac{1}{n}\sum_{i=1}^{n} E[X_i] = \frac{n}{n} \mu = \mu $$

Let us look at the expectation of the sample variance, $$ S^2 = \frac{1}{n-1} \left( \sum_{i=1}^{n} X_i^2 - n\bar{X}^2 \right) $$ $$ E[S^2] = \frac{1}{n-1} \left( n E[X_i^2] - nE[\bar{X}^2] \right). $$ Notice that $\bar{X}$ is a random variable and not a constant, so the expectation $E[\bar{X}^2]$ plays a role. This is the reason behind the $n-1$. $$E[S^2] = \frac{1}{n-1} \left( n (\mu^2 + \sigma^2) - n(\mu^2 + Var(\bar{X})) \right). $$ $$Var(\bar{X}) = Var\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) = \sum_{i=1}^{n} \frac{1}{n^2} Var(X_i) = \frac{\sigma^2}{n} $$ $$E[S^2] = \frac{1}{n-1} \left( n (\mu^2 + \sigma^2) - n(\mu^2 + \sigma^2/n) \right) = \frac{(n-1)\sigma^2}{n-1} = \sigma^2 $$

As you can see, if we had the denominator as $n$ instead of $n-1$, we would get a biased estimate of the variance! But with $n-1$ the estimator $S^2$ is an unbiased estimator.
599
Intuitive explanation for dividing by $n-1$ when calculating standard deviation?
Suppose that you have a random phenomenon. Suppose again that you only get $N=1$ sample (one realization), $x$. Without further assumptions, your "only" reasonable choice for a sample average is $\overline{m}=x$. If you do not subtract $1$ from your denominator, the (uncorrected) sample variance would be $$ V=\frac{\sum_{n=1}^N (x_n - \overline{m} )^2}{N}\,,$$ that is: $$\overline{V}=\frac{(x-\overline{m})^2}{1} = 0\,.$$ Oddly, the variance would be null with only one sample. And having a second sample $y$ would risk increasing your variance, if $x\neq y$. This makes no sense. Intuitively, an undefined (or infinite) variance would be a sounder result, and you can recover it only by "dividing by $N-1=0$".

Estimating a mean is fitting a polynomial of degree $0$ to the data, which has one degree of freedom (dof). This Bessel's correction applies to models with more degrees of freedom too: of course you can fit $d+1$ points perfectly with a degree-$d$ polynomial, which has $d+1$ dofs. The illusion of a zero squared error can only be counterbalanced by dividing by the number of points minus the number of dofs. This issue is particularly sensitive when dealing with very small experimental datasets.
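A tiny numerical illustration of the degrees-of-freedom point; the data are arbitrary and `numpy.polyfit` is used only for convenience:

```python
# A degree-d polynomial fits d + 1 points exactly, so the raw squared error is
# an illusory zero and the honest denominator n - (d + 1) is zero as well.
import numpy as np

rng = np.random.default_rng(9)
d = 2
x = np.arange(d + 1, dtype=float)            # n = d + 1 = 3 points
y = rng.normal(size=d + 1)

coeffs = np.polyfit(x, y, deg=d)
residuals = y - np.polyval(coeffs, x)
print(np.round(residuals, 12))               # all (numerically) zero
print(len(x) - (d + 1))                      # residual degrees of freedom: 0
```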
600
Intuitive explanation for dividing by $n-1$ when calculating standard deviation?
The intuitive reason for the $n-1$ is that the $n$ deviations in the calculation of the standard deviation are not independent: there is one constraint, namely that the sum of the deviations is zero. When we take that into account we are effectively dealing with $n-1$ quantities rather than $n$. (Geometrically, the deviation vector $x-\bar{x}$ is the projection of $x$ onto the space orthogonal to the span of the vector of all ones, and that space has dimension $n-1$.)