201
Difference between logit and probit models
What I am going to say in no way invalidates what has been said thus far. I just want to point out that probit models do not suffer from the IIA (Independence of Irrelevant Alternatives) assumption, while the logit model does. To use an example from Train's excellent book: if I have a logit that predicts whether I am going to ride the blue bus or drive my car, adding a red bus would draw from both the car and the blue bus proportionally. But using a probit model you can avoid this problem. In essence, instead of drawing from both proportionally, you may draw more from the blue bus because the two buses are closer substitutes. The sacrifice you make is that there are no closed-form solutions, as pointed out above. Probit tends to be my go-to when I am worried about IIA issues. That's not to say that there aren't ways to get around IIA in a logit framework (GEV distributions), but I've always looked at these sorts of models as a clunky way around the problem. With the computational speeds that you can get today, I would go with probit.
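For concreteness, here is a minimal numerical sketch of that proportional-substitution (IIA) property, using nothing but the standard multinomial-logit (softmax) formula; the utility values are made up:

    import numpy as np

    def mnl_probs(utilities):
        """Multinomial-logit choice probabilities (softmax of utilities)."""
        expu = np.exp(utilities - np.max(utilities))  # subtract max for numerical stability
        return expu / expu.sum()

    # Hypothetical systematic utilities: car and blue bus.
    u_car, u_blue = 1.0, 0.5

    p_before = mnl_probs(np.array([u_car, u_blue]))
    print(dict(zip(["car", "blue bus"], p_before.round(3))))

    # Add a red bus that is an exact duplicate of the blue bus.
    p_after = mnl_probs(np.array([u_car, u_blue, u_blue]))
    print(dict(zip(["car", "blue bus", "red bus"], p_after.round(3))))

    # Under IIA the ratio P(car)/P(blue bus) is unchanged, so the new alternative
    # takes share from the car and the blue bus proportionally -- even though
    # intuitively it should cannibalize the blue bus almost entirely.
    print("odds car vs blue, before:", p_before[0] / p_before[1])
    print("odds car vs blue, after: ", p_after[0] / p_after[1])

A multinomial probit with correlated errors (or a nested/GEV logit) is exactly what lets the red bus draw disproportionately from the blue bus instead.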
202
Difference between logit and probit models
I offer a practical answer to the question, focusing only on "when to use logistic regression and when to use probit", without getting into statistical details, but rather focusing on decisions based on statistics. The answer depends on two main things: do you have a disciplinary preference, and do you only care about which model better fits your data?

Basic difference: Both logit and probit models give the probability that a dependent response variable is 0 or 1. They are very similar and often give practically identical results, but because they use different functions to calculate the probabilities, their results are sometimes slightly different.

Disciplinary preference: Some academic disciplines generally prefer one or the other. If you are going to publish or present your results to an academic discipline with a specific traditional preference, let that dictate your choice so that your findings are more readily accepted. For example (from Methods Consultants), logit – also known as logistic regression – is more popular in health sciences like epidemiology, partly because coefficients can be interpreted in terms of odds ratios. Probit models can be generalized to account for non-constant error variances in more advanced econometric settings (known as heteroskedastic probit models) and hence are used in some contexts by economists and political scientists. The point is that the differences in results are so minor that the ability of your general audience to understand your results outweighs the minor differences between the two approaches.

If all you care about is better fit: If your research is in a discipline that does not prefer one or the other, then my study of this question (which is better, logit or probit) has led me to conclude that it is generally better to use probit, since it almost always gives a statistical fit to the data that is equal or superior to that of the logit model. The most notable exception, where logit models give a better fit, is the case of "extreme independent variables" (which I explain below). My conclusion is based almost entirely (after searching numerous other sources) on Hahn, E.D. & Soyer, R., 2005, "Probit and logit models: Differences in the multivariate realm", available at http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.329.4866&rep=rep1&type=pdf. Here is my summary of the practical conclusions from that article on whether logit or probit multivariate models provide a better fit to the data (these conclusions also apply to univariate models, but the authors only simulated effects for two independent variables):

In most scenarios, the logit and probit models fit the data equally well, with the following two exceptions.

Logit is definitely better in the case of "extreme independent variables". These are independent variables where one particularly large or small value will overwhelmingly often determine whether the dependent variable is a 0 or a 1, overriding the effects of most other variables. Hahn and Soyer formally define it thus (p. 4): "An extreme independent variable level involves the confluence of three events. First, an extreme independent variable level occurs at the upper or lower extreme of an independent variable. For example, say the independent variable x were to take on the values 1, 2, and 3.2. The extreme independent variable level would involve the values at x = 3.2 (or x = 1). Second, a substantial proportion (e.g., 60%) of the total n must be at this level. Third, the probability of success at this level should itself be extreme (e.g., greater than 99%)."

Probit is better in the case of "random effects models" with moderate or large sample sizes (it is equal to logit for small sample sizes). For fixed effects models, probit and logit are equally good. I don't really understand what Hahn and Soyer mean by "random effects models" in their article. Although many definitions are offered (as in this Stack Exchange question), the definition of the term is in fact ambiguous and inconsistent. But since logit is never superior to probit in this regard, the point is rendered moot by simply choosing probit.

Based on Hahn and Soyer's analysis, my conclusion is to always use probit models except in the case of extreme independent variables, in which case logit should be chosen. Extreme independent variables are not all that common and should be quite easy to recognize. With this rule of thumb, it doesn't matter whether the model is a random effects model or not. In cases where a model is a random effects model (where probit is preferred) but there are extreme independent variables (where logit is preferred), although Hahn and Soyer didn't comment on this, my impression from their article is that the effect of extreme independent variables is more dominant, and so logit would be preferred.
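If you want to check on your own data whether the two links fit differently, comparing log-likelihoods or AIC is straightforward with statsmodels; a minimal sketch on simulated data (the variable names and numbers are made up -- with real data, replace X and y):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 2000
    x = rng.normal(size=n)
    X = sm.add_constant(x)                       # intercept + one predictor
    p = 1.0 / (1.0 + np.exp(-(0.5 + 1.2 * x)))   # the true model here happens to be logistic
    y = rng.binomial(1, p)

    logit_fit = sm.Logit(y, X).fit(disp=0)
    probit_fit = sm.Probit(y, X).fit(disp=0)

    # Same number of parameters, so comparing log-likelihood or AIC is a fair comparison.
    print("logit  llf / AIC:", logit_fit.llf, logit_fit.aic)
    print("probit llf / AIC:", probit_fit.llf, probit_fit.aic)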
203
Difference between logit and probit models
One of the most well-known differences between logit and probit is the (theoretical) distribution of the regression residuals: normal for probit, logistic for logit (see Koop, G., An Introduction to Econometrics, Chichester, Wiley, 2008, p. 280).
204
Difference between logit and probit models
Below, I explain an estimator that nests probit and logit as special cases and that lets one test which is more appropriate. Both probit and logit can be nested in a latent variable model, $$ y_i^* = x_i \beta + \varepsilon_i,\quad \varepsilon_i \sim G(\cdot), $$ where the observed component is $$ y_i = \mathbb{1}(y_i^* > 0). $$ If you choose $G$ to be the normal cdf, you get probit; if you choose the logistic cdf, you get logit. Either way, the log-likelihood function takes the form $$ \ell(\beta) = \sum_{i=1}^N \big\{ y_i \log G(x_i\beta) + (1-y_i) \log[1-G(x_i\beta)] \big\}.$$ However, if you are concerned about which assumption you have made, you can use the Klein & Spady (1993, Econometrica) estimator. This estimator allows you to be fully flexible in your specification of the cdf $G$, and you could then even subsequently test the validity of the normality or logistic assumption. In Klein & Spady, the criterion function is instead $$ \ell(\beta) = \sum_{i=1}^N \big\{ y_i \log \hat{G}(x_i\beta) + (1-y_i) \log[1-\hat{G}(x_i\beta)] \big\},$$ where $\hat{G}(\cdot)$ is a nonparametric estimate of the cdf, for example obtained using a Nadaraya-Watson kernel regression estimator, $$ \hat{G}(z) = \sum_{i=1}^N y_i \frac{ K\left( \frac{z - x_i\beta}{h} \right)}{\sum_{j=1}^N K\left( \frac{z - x_j\beta}{h} \right)}, $$ where $K$ is called the "kernel" (typically the Gaussian cdf or a triangular kernel is chosen) and $h$ is a "bandwidth". There are plug-in values for the latter, but it can be a lot more complicated, and it can make the outer optimization over $\beta$ more complicated if $h$ changes at every step ($h$ balances the so-called bias-variance tradeoff). Improvements: Ichimura has suggested that the kernel regression $\hat{G}$ should leave out the $i$th observation; otherwise, the choice of $h$ may be complicated by a problem of over-fitting in sample (too high variance). Discussion: One drawback of the Klein-Spady estimator is that it may get stuck in local minima, because the cdf $\hat{G}$ adapts to the given $\beta$ parameters. I know of several students who have tried implementing it and have had problems achieving convergence and avoiding numerical issues. Hence, it is not an easy estimator to work with. Moreover, inference on the estimated parameters is complicated by the semi-parametric specification for $G$.
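For illustration, here is a rough Python sketch of the Klein & Spady idea under strong simplifications (Gaussian kernel, a fixed bandwidth, first index coefficient normalised to 1 for identification, only a simple clip instead of proper trimming); a serious implementation needs careful bandwidth selection and good starting values, for exactly the numerical reasons discussed above:

    import numpy as np
    from scipy.optimize import minimize

    def loo_nw_cdf(index, y, h):
        """Leave-one-out Nadaraya-Watson estimate of G(index_i) = E[y | x'beta = index_i]."""
        diff = (index[:, None] - index[None, :]) / h   # pairwise (i, j) differences
        K = np.exp(-0.5 * diff**2)                     # Gaussian kernel (normalising constant cancels)
        np.fill_diagonal(K, 0.0)                       # leave out the i-th observation (Ichimura)
        return (K @ y) / K.sum(axis=1)

    def neg_quasi_loglik(theta, X, y, h=0.3):
        # Scale normalisation: fix the first index coefficient to 1, estimate the rest.
        beta = np.concatenate(([1.0], theta))
        index = X @ beta
        G = np.clip(loo_nw_cdf(index, y, h), 1e-6, 1 - 1e-6)   # clip to avoid log(0)
        return -np.sum(y * np.log(G) + (1 - y) * np.log(1 - G))

    # Simulated single-index data (the true link here happens to be probit).
    rng = np.random.default_rng(1)
    n = 500
    X = rng.normal(size=(n, 2))
    true_beta = np.array([1.0, -0.8])
    y = (X @ true_beta + rng.normal(size=n) > 0).astype(float)

    res = minimize(neg_quasi_loglik, x0=np.array([0.0]), args=(X, y), method="Nelder-Mead")
    print("estimated beta (first coefficient normalised to 1):", np.concatenate(([1.0], res.x)))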
205
Difference between logit and probit models
They are very similar. In both models, the probability that $Y=1$ given $X$ can be seen as the probability that a random hidden variable $S$ (with a certain fixed distribution) is below a certain threshold that depends linearly on $X$: $$P(Y=1|X)=P(S<\beta X)$$ or, equivalently, $$P(Y=1|X)=P(\beta X-S>0).$$ Then it is all a matter of what you choose for the distribution of $S$: in logistic regression, $S$ has a logistic distribution; in probit regression, $S$ has a normal distribution. The variance is unimportant, since it is automatically compensated for by multiplying $\beta$ by a constant; the mean is unimportant as well if you use an intercept. This can be seen as a threshold effect: some invisible outcome $E=\beta X-S$ is a linear function of $X$ with some noise $-S$ added, as in linear regression, and we get a 0/1 outcome by saying that when $E>0$ the outcome is $Y=1$, and when $E<0$ the outcome is $Y=0$. The difference between logistic and probit regression lies in the difference between the logistic and the normal distributions, and there is not much of one: once adjusted to the same location and scale, the two distributions look very similar. The logistic distribution has heavier tails, which may slightly affect how events of small (<1%) or high (>99%) probability are fitted. Practically, the difference is not even noticeable in most situations: logit and probit predict essentially the same thing. See http://scholarworks.rit.edu/cgi/viewcontent.cgi?article=2237&context=article. "Philosophically", logistic regression can be justified by its equivalence to the principle of maximum entropy: http://www.win-vector.com/blog/2011/09/the-equivalence-of-logistic-regression-and-maximum-entropy-models/. In terms of computation, logistic is simpler, since the cumulative distribution function of the logistic distribution has a closed formula, unlike that of the normal distribution. However, normal distributions have good properties when you move to multi-dimensional problems, which is why probit is often preferred in advanced cases.
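A small simulation of this latent-variable view (assuming statsmodels is available): generate $E=\beta X - S$ with logistic $S$, threshold it at zero, and fit both models; the coefficient vectors differ by a roughly constant scale factor while the predicted probabilities nearly coincide:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 5000
    x = rng.normal(size=n)
    X = sm.add_constant(x)

    S = rng.logistic(size=n)                    # latent noise with a logistic distribution
    y = (1.0 + 2.0 * x - S > 0).astype(int)     # threshold rule: Y = 1 when E > 0

    logit_fit = sm.Logit(y, X).fit(disp=0)
    probit_fit = sm.Probit(y, X).fit(disp=0)

    print("logit coefficients :", logit_fit.params)
    print("probit coefficients:", probit_fit.params)
    print("ratio (roughly a constant scale):", logit_fit.params / probit_fit.params)

    # Predicted probabilities are almost identical despite the different links.
    print("max |p_logit - p_probit|:",
          np.max(np.abs(logit_fit.predict(X) - probit_fit.predict(X))))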
206
Difference between logit and probit models
@Benoit Sanchez's and @gung's graphs emphasize how little there is to distinguish the link functions, except with very large numbers of observations and/or in the extreme tails. The conversion ${\rm logit}(p) \approx 1.77\ {\rm probit}(p)$ never has an error of more than $0.1$ over the range $0.1 \le p \le 0.9$, and over $0.01 \le p \le 0.99$ it is still a good approximation. Using such a conversion factor, one can get a good approximation to the odds ratio from a model that has a probit link. The graph uses the following R code:

    library(lattice)
    library(latticeExtra)
    trellis.par.set(clip = list(panel = 'off'))

    p <- (1:99)/100
    logit <- make.link('logit')$linkfun
    probit <- make.link('probit')$linkfun

    ## Plot logit(p) against probit(p), with a fitted line through the origin
    gph <- lattice::xyplot(logit(p) ~ probit(p), type = c('p', 'r'),
                           scales = list(x = list(alternating = 1, tck = c(0.6, 0))))
    b <- coef(lm(logit(p) ~ 0 + probit(p)))   ## slope of the line (about 1.77)
    gph1 <- update(gph, main = list(paste("Slope of line is", round(b, 2)), cex = 1))

    ## Add a top axis labelled with the corresponding probabilities
    probs <- c(.01, .04, .1, .25, .5, .75, .9, .96, .99)
    gph1 + latticeExtra::layer(panel.axis(side = 'top', at = probit(probs),
                                          labels = paste(probs), outside = TRUE, rot = 0))
207
Difference between logit and probit models
Directly from 'Discrete Choice Methods with Simulation' by Kenneth Train, in the context of discrete choice models: The logit model is limited in three important ways. It cannot represent random taste variation. It exhibits restrictive substitution patterns due to the IIA property. And it cannot be used with panel data when unobserved factors are correlated over time for each decision maker. GEV models relax the second of these restrictions, but not the other two. Probit models deal with all three. They can handle random taste variation, they allow any pattern of substitution, and they are applicable to panel data with temporally correlated errors. The only limitation of probit models is that they require normal distributions for all unobserved components of utility. In many, perhaps most situations, normal distributions provide an adequate representation of the random components. However, in some situations, normal distributions are inappropriate and can lead to perverse forecasts.
208
Python as a statistics workbench
It's hard to ignore the wealth of statistical packages available in R/CRAN. That said, I spend a lot of time in Python land and would never dissuade anyone from having as much fun as I do. :) Here are some libraries/links you might find useful for statistical work.

NumPy/SciPy: You probably know about these already. But let me point out the Cookbook, where you can read about many statistical facilities already available, and the Example List, which is a great reference for functions (including data manipulation and other operations). Another handy reference is John Cook's Distributions in SciPy.

pandas: A really nice library for working with statistical data -- tabular data, time series, panel data. Includes many built-in functions for data summaries, grouping/aggregation, and pivoting. Also has a statistics/econometrics library.

larry: A labeled array that plays nicely with NumPy. Provides statistical functions not present in NumPy and is good for data manipulation.

python-statlib: A fairly recent effort which combined a number of scattered statistics libraries. Useful for basic and descriptive statistics if you're not using NumPy or pandas.

statsmodels: Statistical modeling: linear models, GLMs, among others.

scikits: Statistical and scientific computing packages -- notably smoothing, optimization, and machine learning.

PyMC: For your Bayesian/MCMC/hierarchical modeling needs. Highly recommended.

PyMix: Mixture models.

Biopython: Useful for loading your biological data into Python, and provides some rudimentary statistical/machine learning tools for analysis.

If speed becomes a problem, consider Theano -- used with good success by the deep learning people. There's plenty of other stuff out there, but this is what I find the most useful along the lines you mentioned.
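To give a flavour of the everyday work these libraries cover, here is a tiny sketch using scipy.stats for distributions/tests and pandas for grouped summaries (the column names are made up):

    import numpy as np
    import pandas as pd
    from scipy import stats

    # Distributions: random variates, CDF values, and a one-sample t-test from scipy.stats.
    x = stats.norm.rvs(loc=0, scale=1, size=200, random_state=0)
    print(stats.norm.cdf(1.96), stats.ttest_1samp(x, popmean=0))

    # Tabular data: grouped descriptive statistics with pandas (hypothetical columns).
    df = pd.DataFrame({"group": np.repeat(["a", "b"], 100), "value": x})
    print(df.groupby("group")["value"].describe())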
209
Python as a statistics workbench
As a numerical platform and a substitute for MATLAB, Python reached maturity at least 2-3 years ago, and is now much better than MATLAB in many respects. I tried to switch to Python from R around that time, and failed miserably. There are just too many R packages I use on a daily basis that have no Python equivalent. The absence of ggplot2 is enough to be a showstopper, but there are many more. In addition to this, R has a better syntax for data analysis. Consider the following basic example:

Python:

    results = sm.OLS(y, X).fit()

R:

    results <- lm(y ~ x1 + x2 + x3, data=A)

Which do you consider more expressive? In R, you can think in terms of variables, and can easily extend a model to, say,

    lm(y ~ x1 + x2 + x3 + x2:x3, data=A)

Compared to R, Python is a low-level language for model building. If I had fewer requirements for advanced statistical functions and were already coding Python on a larger project, I would consider Python a good candidate. I would also consider it when a bare-bones approach is needed, either because of speed limitations or because R packages don't provide an edge. For those doing relatively advanced statistics right now, the answer is a no-brainer, and it is no. In fact, I believe Python will limit the way you think about data analysis. It will take a few years and many man-years of effort to produce module replacements for the 100 essential R packages, and even then, Python will feel like a language onto which data analysis capabilities have been bolted. Since R has already captured the largest relative share of applied statisticians across several fields, I don't see this happening any time soon. Having said that, it's a free country, and I know people doing statistics in APL and C.
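As an aside on this particular point: more recent versions of statsmodels add an R-style formula interface (via patsy), which narrows this specific gap. A minimal sketch with a hypothetical data frame A standing in for the R data frame above:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data frame standing in for the R data frame `A`.
    rng = np.random.default_rng(0)
    A = pd.DataFrame(rng.normal(size=(200, 3)), columns=["x1", "x2", "x3"])
    A["y"] = 1 + A.x1 + 0.5 * A.x2 - A.x3 + 0.3 * A.x2 * A.x3 + rng.normal(size=200)

    # R-style formulas, including the x2:x3 interaction, work directly.
    results = smf.ols("y ~ x1 + x2 + x3 + x2:x3", data=A).fit()
    print(results.summary())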
210
Python as a statistics workbench
First, let me say I agree with John D. Cook's answer: Python is not a domain-specific language like R, and accordingly, there is a lot more you'll be able to do with it further down the road. Of course, R being a DSL means that the latest algorithms published in JASA will almost certainly be in R. If you are doing mostly ad hoc work and want to experiment with the latest lasso regression technique, say, R is hard to beat. If you are doing more production analytical work, integrating with existing software and environments, and are concerned about speed, extensibility, and maintainability, Python will serve you much better. Second, ars gave a great answer with good links. Here are a few more packages that I view as essential to analytical work in Python:

matplotlib for beautiful, publication-quality graphics.

IPython for an enhanced, interactive Python console. Importantly, IPython provides a powerful framework for interactive, parallel computing in Python.

Cython for easily writing C extensions in Python. This package lets you take a chunk of computationally intensive Python code and easily convert it to a C extension. You'll then be able to load the C extension like any other Python module, but the code will run very fast since it is in C.

PyIMSL Studio for a collection of hundreds of mathematical and statistical algorithms that are thoroughly documented and supported. You can call the exact same algorithms from Python and C, with nearly the same API, and you'll get the same results. Full disclosure: I work on this product, but I also use it a lot.

xlrd for reading in Excel files easily.

If you want a more MATLAB-like interactive IDE/console, check out Spyder, or the PyDev plugin for Eclipse.
211
Python as a statistics workbench
I don't think there's any argument that the range of statistical packages on CRAN and Bioconductor far exceeds anything on offer from other languages; however, that isn't the only thing to consider. In my research, I use R when I can, but sometimes R is just too slow -- for example, a large MCMC run. Recently, I combined Python and C to tackle this problem. Brief summary: fitting a large stochastic population model with ~60 parameters and inferring around 150 latent states using MCMC.

1. Read in the data in Python.
2. Construct the C data structures in Python using ctypes.
3. Using a Python for loop, call the C functions that update the parameters and calculate the likelihood.

A quick calculation showed that the programme spent 95% of its time in the C functions. However, I didn't have to write painful C code to read in data or construct C data structures. I know there's also rpy, with which Python can call R functions. This can be useful, but if you're "just" doing statistics, then I would use R.
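For anyone curious what that pattern looks like, here is a skeletal ctypes sketch; the shared-library name, function name, and signature are hypothetical stand-ins for whatever your own C code exposes:

    import ctypes
    import numpy as np

    # Load the compiled C library (hypothetical name and path).
    lib = ctypes.CDLL("./libmcmc.so")

    # Declare the (hypothetical) C signature:
    #   double log_likelihood(const double *params, const double *data, int n);
    lib.log_likelihood.restype = ctypes.c_double
    lib.log_likelihood.argtypes = [ctypes.POINTER(ctypes.c_double),
                                   ctypes.POINTER(ctypes.c_double),
                                   ctypes.c_int]

    data = np.loadtxt("observations.txt")   # read the data in Python (hypothetical file)
    params = np.zeros(60)                   # ~60 model parameters

    def c_loglik(theta):
        # Hand NumPy arrays to C without copying, via ctypes pointers.
        p = theta.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
        d = data.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
        return lib.log_likelihood(p, d, len(data))

    # The Python-side MCMC loop then simply calls c_loglik(candidate) at each step.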
212
Python as a statistics workbench
The following StackOverflow discussions might be useful:

- R versus Python
- SciPy versus R
- Psychology researcher choosing between R, Python, and Matlab
213
Python as a statistics workbench
I haven't seen scikit-learn explicitly mentioned in the answers above. It's a Python package for machine learning. It's fairly young but growing extremely rapidly (disclaimer: I am a scikit-learn developer). Its goals are to provide standard machine learning algorithmic tools in a unified interface, with a focus on speed and usability. As far as I know, you cannot find anything similar in Matlab. Its strong points are:

Detailed documentation, with many examples.

High-quality standard supervised learning (regression/classification) tools. Specifically: a very versatile SVM (based on libsvm, but with integration of external patches and a lot of work on the Python binding) and penalized linear models (lasso, sparse logistic regression, ...) with efficient implementations.

The ability to perform model selection by cross-validation using multiple CPUs.

Unsupervised learning to explore the data or do a first dimensionality reduction, which can easily be chained to supervised learning.

Open source, BSD licensed. If you are not in a purely academic environment (I am in what would be a national lab in the States), this matters a lot, as Matlab costs are then very high, and you might be thinking of deriving products from your work.

Matlab is a great tool, but in my own work scipy + scikit-learn is starting to give me an edge on Matlab, because Python does a better job with memory due to its view mechanism (and I have big data), and because scikit-learn enables me to very easily compare different approaches.
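As a small, hedged sketch of the cross-validated model comparison mentioned above (using current scikit-learn import paths; very old versions used sklearn.cross_validation instead of sklearn.model_selection):

    from sklearn.datasets import load_breast_cancer
    from sklearn.svm import SVC
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = load_breast_cancer(return_X_y=True)

    # The same interface works for very different estimators;
    # n_jobs=-1 spreads the cross-validation folds over all CPUs.
    for model in (SVC(kernel="rbf", C=1.0),
                  LogisticRegression(penalty="l1", C=0.5, solver="liblinear")):
        scores = cross_val_score(model, X, y, cv=5, n_jobs=-1)
        print(type(model).__name__, scores.mean().round(3))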
214
Python as a statistics workbench
One benefit of moving to Python is the possibility of doing more work in one language. Python is a reasonable choice for number crunching, writing web sites, administrative scripting, etc. So if you do your statistics in Python, you won't have to switch languages to do other programming tasks. Update: On January 26, 2011, Microsoft Research announced Sho, a new Python-based environment for data analysis. I haven't had a chance to try it yet, but it sounds like an interesting possibility if you want to run Python and also interact with .NET libraries.
215
Python as a statistics workbench
I am a biostatistician in what is essentially an R shop (~80% of folks use R as their primary tool). Still, I spend approximately 3/4 of my time working in Python. I attribute this primarily to the fact that my work involves Bayesian and machine learning approaches to statistical modeling. Python hits much closer to the performance/productivity sweet spot than R does, at least for statistical methods that are iterative or simulation-based. If I were performing ANOVAs, regressions, and statistical tests, I'm sure I would primarily use R. Most of what I need, however, is not available as a canned R package.
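As an example of the iterative, simulation-based work meant here, a minimal random-walk Metropolis sampler for a normal mean needs nothing beyond NumPy:

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(loc=2.0, scale=1.0, size=100)

    def log_post(mu):
        # Flat prior on mu; Gaussian likelihood with known unit variance.
        return -0.5 * np.sum((data - mu) ** 2)

    mu, draws = 0.0, []
    for _ in range(10_000):
        proposal = mu + rng.normal(scale=0.3)      # random-walk proposal
        if np.log(rng.uniform()) < log_post(proposal) - log_post(mu):
            mu = proposal                          # accept
        draws.append(mu)

    print("posterior mean ~", np.mean(draws[2000:]))   # discard burn-in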
216
Python as a statistics workbench
Perhaps this answer is cheating, but it seems strange that no one has mentioned the rpy project, which provides an interface between R and Python. You get a Pythonic API to most of R's functionality while retaining the (I would argue nicer) syntax, data processing, and in some cases speed of Python. It's unlikely that Python will ever have as many bleeding-edge stats tools as R, just because R is a DSL and the stats community is more invested in R than in possibly any other language. I see this as analogous to using an ORM to leverage the advantages of SQL, while letting Python be Python and SQL be SQL. Other useful packages specifically for data structures include:

pydataframe replicates a data.frame and can be used with rpy. It allows you to use R-like filtering and operations.

pyTables uses the fast HDF5 data type underneath and has been around for ages.

h5py is also HDF5, but specifically aimed at interoperating with NumPy.

pandas is another project that manages data.frame-like data; it works with rpy, pyTables, and NumPy.
217
Python as a statistics workbench
Speaking as someone who relies heavily on linear models for statistical work and loves Python for other aspects of the job, I have been highly disappointed in Python as a platform for doing anything but fairly basic statistics. I find that R has much better support from the statistical community and much better implementations of linear models, and, to be frank, from the statistics side of things, even with excellent distributions like Enthought, Python feels a bit like the Wild West. And unless you're working solo, the odds of having collaborators who use Python for statistics are, at this point, pretty slim.
218
Python as a statistics workbench
There's really no need to give up R for Python anyway. If you use IPython with a full stack, you have R, Octave and Cython extensions, so you can easily and cleanly use those languages within your IPython notebooks. You also have support for passing values between them and your Python namespace. You can output your data as plots, using matplotlib, and as properly rendered mathematical expressions. There are tons of other features, and you can do all this in your browser. IPython has come a long way :)
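For concreteness, this is roughly what the R integration looks like in a notebook; the extension is shipped with rpy2 (older IPython versions loaded it as rmagic), so treat the exact incantation as version-dependent:

    # In one notebook cell: load the R extension (provided by rpy2).
    %load_ext rpy2.ipython

    # In a Python cell: create some data in the Python namespace.
    import numpy as np
    x = np.random.normal(size=100)

    # In a separate R cell: pass `x` in (-i), fit a model in R, and pull `coefs` back out (-o).
    %%R -i x -o coefs
    y <- 2 * x + rnorm(length(x))
    coefs <- coef(lm(y ~ x))

Back in Python, coefs is then available in the notebook namespace as an ordinary array.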
219
Python as a statistics workbench
What you are looking for is called Sage: http://www.sagemath.org/ It is an excellent online interface to a well-built combination of Python tools for mathematics.
220
Python as a statistics workbench
Rpy2 - play with R, stay in Python. Further elaboration per Gung's request: Rpy2 documentation can be found at http://rpy.sourceforge.net/rpy2/doc-dev/html/introduction.html From the documentation: The high-level interface in rpy2 is designed to facilitate the use of R by Python programmers. R objects are exposed as instances of Python-implemented classes, with R functions as bound methods to those objects in a number of cases. This section also contains an introduction to graphics with R: trellis (lattice) plots as well as the grammar of graphics implemented in ggplot2 let one make complex and informative plots with little code written, while the underlying grid graphics allow all possible customization. Why I like it: I can process my data using the flexibility of Python, turn it into a matrix using numpy or pandas, do the computation in R, and get R objects back for post-processing. I work in econometrics, and Python simply will not have the bleeding-edge stats tools of R, while R is unlikely to ever be as flexible as Python. This does require you to understand R; fortunately, it has a nice developer community. Rpy2 itself is well supported, and the gentleman supporting it frequents the SO forums. Windows installation may be a slight pain - https://stackoverflow.com/questions/5068760/bizzarre-issue-trying-to-make-rpy2-2-1-9-work-with-r-2-12-1-using-python-2-6-un?rq=1 might help.
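To make the round trip concrete, here is a minimal sketch of the workflow described above. It assumes rpy2 and R are installed; the vectors, the lm(y ~ x) model, and the coefs variable are illustrative, and conversion helpers differ a little between rpy2 versions:

```python
import rpy2.robjects as ro

# Push plain Python data into the embedded R session.
ro.globalenv["x"] = ro.FloatVector([1.0, 2.0, 3.0, 4.0, 5.0])
ro.globalenv["y"] = ro.FloatVector([2.1, 3.9, 6.2, 8.1, 9.8])

# Do the computation in R...
fit = ro.r("lm(y ~ x)")

# ...and pull the results back as Python-side objects for post-processing.
coefs = list(ro.r("coef")(fit))
print(coefs)  # [intercept, slope]
```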
221
Python as a statistics workbench
I use Python for statistical analysis and forecasting. As mentioned by others above, Numpy and Matplotlib are good workhorses. I also use ReportLab for producing PDF output. I'm currently looking at both Resolver and Pyspread, which are Excel-like spreadsheet applications based on Python. Resolver is a commercial product, but Pyspread is still open-source. (Apologies, I'm limited to only one link)
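For readers who haven't seen ReportLab, a minimal sketch of the PDF-output step looks roughly like this (the file name and text are placeholders, not part of the original answer):

```python
from reportlab.pdfgen import canvas

# Write a one-page PDF report; in practice you would drop in the tables and
# charts produced by your analysis rather than hard-coded strings.
c = canvas.Canvas("forecast_report.pdf")
c.drawString(72, 750, "Monthly forecast summary")
c.drawString(72, 730, "Point forecast: 1234.5 (illustrative number)")
c.save()
```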
222
Python as a statistics workbench
Great overview so far. I've been using Python (specifically scipy + matplotlib) as a Matlab replacement for the past three years of working at a university. I sometimes still go back because I'm familiar with specific libraries, e.g. the Matlab wavelet package is purely awesome. I like the http://enthought.com/ Python distribution. It's commercial, yet free for academic purposes and, as far as I know, completely open-source. As I work with a lot of students, it was sometimes troublesome for them to install numpy, scipy, ipython etc. before we used Enthought; Enthought provides an installer for Windows, Linux and Mac. Two other packages worth mentioning: ipython (already included with Enthought), a great advanced shell - a good intro is on showmedo http://showmedo.com/videotutorials/series?name=PythonIPythonSeries; and nltk, the natural language toolkit http://www.nltk.org/, a great package in case you want to do some statistics / machine learning on any corpus.
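For anyone new to the scipy + matplotlib combination, a typical "workbench" session might look roughly like the sketch below (the simulated groups and the t-test are just an illustration):

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, scale=1.0, size=200)   # simulated control group
b = rng.normal(loc=0.3, scale=1.0, size=200)   # simulated treatment group

# Classical two-sample t-test with scipy.stats.
t_stat, p_value = stats.ttest_ind(a, b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Quick visual check with matplotlib.
plt.hist([a, b], bins=20, label=["a", "b"])
plt.legend()
plt.show()
```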
223
Python as a statistics workbench
This is an interesting question, with some great answers. You might find some useful discussion in a paper that I wrote with Roseline Bilina. The final version is here: http://www.enac.fr/recherche/leea/Steve%20Lawford/papers/python_paper_revised.pdf (it has since appeared, in almost this form, as "Python for Unified Research in Econometrics and Statistics", in Econometric Reviews (2012), 31(5), 558-591).
224
Python as a statistics workbench
Perhaps not directly related, but R has a nice GUI environment for interactive sessions (edit: on Mac/Windows). IPython is very good but for an environment closer to Matlab's you might try Spyder or IEP. I've had better luck of late using IEP, but Spyder looks more promising. IEP: http://code.google.com/p/iep/ Spyder: http://packages.python.org/spyder/ And the IEP site includes a brief comparison of related software: http://code.google.com/p/iep/wiki/Alternatives
225
Python as a statistics workbench
No one has mentioned Orange before: Data mining through visual programming or Python scripting. Components for machine learning. Add-ons for bioinformatics and text mining. Packed with features for data analytics. I don't use it on a daily basis, but it's a must-see for anyone who prefers a GUI over a command-line interface. Even if you prefer the latter, Orange is a good thing to be familiar with, since you can easily import pieces of Orange into your Python scripts in case you need some of its functionality.
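As a rough sketch of what importing pieces of Orange looks like (assuming a recent Orange 3 release; the bundled iris dataset and the logistic regression learner are just illustrative choices):

```python
import Orange

# Load a bundled dataset and fit a classifier through Orange's Python API.
data = Orange.data.Table("iris")
learner = Orange.classification.LogisticRegressionLearner()
model = learner(data)

# Predict the class of the first few rows.
print(model(data[:5]))
```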
226
Python as a statistics workbench
I should add a shout-out for Sho, the numerical computing environment built on IronPython. I'm using it right now for the Stanford machine learning class and it's been really helpful. It has built-in linear algebra packages and charting capabilities. Being .NET, it's easy to extend with C# or any other .NET language. I've found it much easier to get started with, as a Windows user, than straight Python and NumPy.
227
Python as a statistics workbench
Note that SPSS Statistics has an integrated Python interface (also R). So you can write Python programs that use Statistics procedures and either produce the usual nicely formatted Statistics output or return results to your program for further processing. Or you can run Python programs in the Statistics command stream. You do still have to know the Statistics command language, but you can take advantage of all the data management, presentation output, etc. that Statistics provides, as well as the procedures.
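From the Python side, the integration looks roughly like the sketch below (this assumes the SPSS Statistics Python plug-in is installed; the .sav file path and the command syntax are illustrative):

```python
import spss  # available once the SPSS Statistics Python integration is installed

# Run ordinary Statistics command syntax from Python; output appears in the
# usual Statistics Viewer unless you redirect or capture it.
spss.Submit("""
GET FILE='C:/data/survey.sav'.
FREQUENCIES VARIABLES=age gender.
""")
```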
228
Python as a statistics workbench
I found a great intro to pandas that I suggest checking out: http://manishamde.github.com/blog/2013/03/07/pandas-and-python-top-10/ - it introduces pandas from the perspective of a complete beginner. Pandas is an amazing toolset that provides the high-level data analysis capabilities of R with the extensive libraries and production quality of Python.
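To give a flavour of that data.frame-like feel, here is a minimal pandas sketch (the toy data are made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    "group": ["a", "a", "b", "b", "b"],
    "value": [1.0, 2.5, 3.1, 0.7, 4.2],
})

# R-like row filtering and column selection...
subset = df[df["value"] > 1.0][["group", "value"]]
print(subset)

# ...and split-apply-combine, similar to aggregate()/tapply() in R.
print(df.groupby("group")["value"].mean())
```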
229
Python as a statistics workbench
Python has a long way to go before it can be compared to R. It has significantly fewer packages than R, and of lower quality. People who stick to the basics or rely only on their custom libraries could probably do their job exclusively in Python, but if you need more advanced quantitative solutions, I dare say nothing out there comes close to R. It should also be noted that, to date, Python has no proper scientific Matlab-style IDE comparable to RStudio (please don't say Spyder), so you need to work out everything on the console. Generally speaking, the whole Python experience requires a good amount of "geekness" that most people lack and don't care about. Don't get me wrong, I love Python; it's actually my favourite language and, unlike R, a real programming language. Still, when it comes to pure data analysis I am dependent on R, which is by far the most specialised and developed solution to date. I use Python when I need to combine data analysis with software engineering, e.g. creating a tool that automates the methods I first prototyped in a quick-and-dirty R script. On many occasions I use rpy2 to call R from Python, because in the vast majority of cases R packages are so much better (or don't exist in Python at all). This way I try to get the best of both worlds. I still use some Matlab for pure algorithm development, since I love its mathematical-style syntax and speed.
230
Python as a statistics workbench
A recent comparison from DataCamp provides a clear picture of how R and Python are used in the data analysis field. Python is generally used when data analysis tasks need to be integrated with web apps or when statistics code needs to be incorporated into a production database. R is mainly used when the data analysis tasks require standalone computing or analysis on individual servers. I found the blog useful and hope it helps others understand recent trends in both of these languages. Julia is also coming up in the area. Hope this helps!
231
Python as a statistics workbench
I believe Python is a superior workbench in my field. I do a lot of scraping, data wrangling, large data work, network analysis, Bayesian modeling, and simulations. All of these things typically need speed and flexibility, so I find Python to work better than R in these cases. Here are a few things about Python that I like (some are mentioned above, other points are not):
- Cleaner syntax; more readable code. I believe Python to be a more modern and syntactically consistent language.
- Python has the IPython Notebook and other amazing tools for code sharing, collaboration, and publishing.
- IPython's notebook enables one to use R in one's Python code, so it is always possible to go back to R.
- Substantially faster without recourse to C. Using Cython, Numba, and other methods of C integration will bring your code to speeds comparable to pure C. This, as far as I am aware, cannot be achieved in R.
- Pandas, Numpy, and Scipy blow standard R out of the water. Yes, there are a few things that R can do in a single line that take Pandas 3 or 4. In general, however, Pandas can handle larger data sets, is easier to use, and provides incredible flexibility in regard to integration with other Python packages and methods.
- Python is more stable. Try loading a 2 GB dataset into RStudio.
- One neat package that doesn't seem to be mentioned above is PyMC3 - a great general package for most of your Bayesian modeling.
- Some above mention ggplot2 and grumble about its absence from Python. If you have ever used Matlab's graphing functionality and/or matplotlib in Python, then you'll know that the latter options are generally much more capable than ggplot2.
However, perhaps R is easier to learn, and I do frequently use it in cases where I am not yet too familiar with the modeling procedures. In that case, the depth of R's off-the-shelf statistical libraries is unbeatable. Ideally, I would know both well enough to use whichever the need calls for.
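As a small, hedged illustration of the Cython/Numba point above (it assumes the numba package is installed; the toy function is only a demonstration, and real speedups depend heavily on the workload):

```python
import numpy as np
from numba import njit

@njit  # compiled to machine code by Numba on the first call
def sum_of_squares(x):
    total = 0.0
    for i in range(x.shape[0]):
        total += x[i] * x[i]
    return total

x = np.random.rand(1_000_000)
print(sum_of_squares(x))  # later calls run at roughly C speed
```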
232
Python as a statistics workbench
For those who have to work under Windows, Anaconda (https://store.continuum.io/cshop/anaconda/) really helps a lot. Installing packages under Windows was a headache. With Anaconda installed, you can set up a ready-to-use development environment with a one-liner. For example, with conda create -n stats_env python pip numpy scipy matplotlib pandas all these packages will be fetched and installed automatically.
233
Python as a statistics workbench
I thought I'd add a more up-to-date answer than those given. I'm a Python guy, through and through, and here's why:
1. Python easily has the most intuitive syntax of any programming language I've ever used, except possibly LabVIEW. I can't count the number of times I've simply tried 20-30 lines of code in Python, and they've worked. That's certainly more than can be said for any other language, even LabVIEW. This makes for extremely fast development time.
2. Python is performant. This has been mentioned in other answers, but it bears repeating. I find Python opens large datasets reliably.
3. The packages in Python are fast catching up to R's packages. Certainly, Python use has considerably outstripped R use in the last few years, although technically this argument is, of course, an argumentum ad populum.
4. More and more, I find readability to be among the most important qualities good code can possess, and Python is the most readable language ever (assuming you follow reasonably good coding practices, of course). Some of the previous answers have tried to argue that R is more readable, but the examples they've shown all prove the opposite for me: Python is more readable than R, and it's also much quicker to learn. I learned basic Python in one week!
5. The Lambda Labs Stack is a newer tool than Anaconda, and one-ups it, in my opinion. The downside: you can only install it on Ubuntu 16.04, 18.04, and 20.04, and the Ubuntu derivatives of those versions. The upside: you get all the standard GPU-accelerated packages managed for you, all the way down to the hardware drivers. Anaconda doesn't do that. The Lambda Labs Stack maintains compatible version numbers all the way from your Theano or Keras version to the NVIDIA GPU driver version. As you are probably aware, this is no trivial task.
6. When it comes to machine learning, Python is king, hands down. And GPU acceleration is something most data professionals find they can't do without.
7. Python now has an extremely well-thought-out IDE: PyCharm. In my opinion, this is what serious Python developers should be using - definitely NOT Jupyter notebooks. While many people use Visual Studio Code, I find PyCharm to be the best IDE for Python. You get everything you could practically want - IPython, a terminal, advanced debugging tools including an in-memory calculator, and source control integration.
8. Many people have said that Python's stats packages aren't as complete as R's. No doubt that's still somewhat true (although see point 3 above). On the other hand, I, for one, have not needed those incredibly advanced stats packages. I prefer to hold off on advanced statistical analysis until I fully understand the business question being asked. Oftentimes, a relatively straightforward algorithm for computing a metric is the solution to the problem, in which case Python makes a superb tool for calculating metrics.
9. A lot of people like to tout R's ability to do powerful things in only one line of code. In my opinion, that isn't a terribly great argument. Is that one line of code readable? Typical code is written one time and read ten times! Readability counts, as the Zen of Python puts it. I'd rather have several lines of readable Python code than one cryptic line of R code (not that those are the only choices, of course; I just want to make the point that fewer lines of code does not equal greater readability).
Incidentally, I just can't resist making a comment about SAS, even though it's slightly off-topic. As I said in an earlier point, I learned basic Python in one week. Then I tried SAS. I worked my way through about three chapters of an 11-chapter book on SAS, and it took me two months! Moreover, whenever I tried something, it never worked the first time. I would STRONGLY urge you to abandon SAS as soon as you can. It has an extremely convoluted syntax and is extraordinarily unforgiving. About the only good thing that can be said about it is that its statistical capabilities are as complete as R's. Whoopty do. So, there you have it. Python all the way!
234
What is your favorite "data analysis" cartoon?
Was XKCD, so time for Dilbert: Source: http://dilbert.com/strip/2001-10-25
235
What is your favorite "data analysis" cartoon?
Another from XKCD: Mentioned here and here.
236
What is your favorite "data analysis" cartoon?
My favourite Dilbert cartoon: Source: http://dilbert.com/strip/2008-05-07
237
What is your favorite "data analysis" cartoon?
One more Dilbert cartoon: ...
238
What is your favorite "data analysis" cartoon?
One of my favorites from xkcd: Random Number RFC 1149.5 specifies 4 as the standard IEEE-vetted random number.
239
What is your favorite "data analysis" cartoon?
From: A visual comparison of normal and paranormal distributions, Matthew Freeman, J Epidemiol Community Health 2006;60:6. Lower caption says 'Paranormal Distribution' - no idea why the graphical artifact is occurring.
240
What is your favorite "data analysis" cartoon?
'So, uh, we did the green study again and got no link. It was probably a--' 'RESEARCH CONFLICTED ON GREEN JELLY BEAN/ACNE LINK; MORE STUDY RECOMMENDED!' xkcd: significant
241
What is your favorite "data analysis" cartoon?
I just came across this and loved it: (http://xkcd.com/795/).
243
What is your favorite "data analysis" cartoon?
Another from xkcd #833: And if you labeled your axes, I could tell you exactly how MUCH better.
244
What is your favorite "data analysis" cartoon?
By the third trimester, there will be hundreds of babies inside you. Also from XKCD
245
What is your favorite "data analysis" cartoon?
This isn't technically a cartoon, but close enough:
246
What is your favorite "data analysis" cartoon?
Nice. The importance of variance when thinking about a population. Saturday Morning Breakfast Cereal
247
What is your favorite "data analysis" cartoon?
this too:
248
What is your favorite "data analysis" cartoon?
There is this one on Bayesian learning:
249
What is your favorite "data analysis" cartoon?
And another one from xkcd. Title: Self-Description The mouseover text: The contents of any one panel are dependent on the contents of every panel including itself. The graph of panel dependencies is complete and bidirectional, and each node has a loop. The mouseover text has two hundred and forty-two characters.
250
What is your favorite "data analysis" cartoon?
Here is a nice one (on the inadequacy of average ratings).
251
What is your favorite "data analysis" cartoon?
Another one from xkcd: Alt-text: Hell, my eighth grade science class managed to conclusively reject it just based on a classroom experiment. It's pretty sad to hear about million-dollar research teams who can't even manage that.
252
What is your favorite "data analysis" cartoon?
Here's another one from Dilbert:
253
What is your favorite "data analysis" cartoon?
http://andrewgelman.com/2011/12/suspicious-histograms/
254
What is your favorite "data analysis" cartoon?
More about design and power than analysis, but I like this one
255
What is your favorite "data analysis" cartoon?
I liked this one: This is probably fun to show in class as well...
256
What is your favorite "data analysis" cartoon?
A classic...
257
What is your favorite "data analysis" cartoon?
Source: unknown. Posted on flowingdata.com.
258
What is your favorite "data analysis" cartoon?
Saturday Morning Breakfast Cereal
259
What is your favorite "data analysis" cartoon?
Found this one in the comments on Andrew Gelman's blog.
260
What is your favorite "data analysis" cartoon?
I found this from a NoSQL presentation, but the cartoon can be found directly at http://browsertoolkit.com/fault-tolerance.png
261
What is your favorite "data analysis" cartoon?
Alright, I think this one is hilarious - but let's see if it passes the Statistical Analysis Miller test: Fermirotica. I love how Google handles dimensional analysis. Stats are ballpark and vary wildly by time of day and whether your mom is in town.
262
What is your favorite "data analysis" cartoon?
From xkcd: This is data analysis in the form of a cartoon, and I find it particularly poignant. The universe is probably littered with the one-planet graves of cultures which made the sensible economic decision that there's no good reason to go into space--each discovered, studied, and remembered by the ones who made the irrational decision.
263
What is your favorite "data analysis" cartoon?
Another one from xkcd:
264
What is the trade-off between batch size and number of iterations to train a neural network?
From Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. https://arxiv.org/abs/1609.04836 :

The stochastic gradient descent method and its variants are algorithms of choice for many Deep Learning tasks. These methods operate in a small-batch regime wherein a fraction of the training data, usually 32--512 data points, is sampled to compute an approximation to the gradient. It has been observed in practice that when using a larger batch there is a significant degradation in the quality of the model, as measured by its ability to generalize. There have been some attempts to investigate the cause for this generalization drop in the large-batch regime, however the precise answer for this phenomenon is, hitherto unknown. In this paper, we present ample numerical evidence that supports the view that large-batch methods tend to converge to sharp minimizers of the training and testing functions -- and that sharp minima lead to poorer generalization. In contrast, small-batch methods consistently converge to flat minimizers, and our experiments support a commonly held view that this is due to the inherent noise in the gradient estimation. We also discuss several empirical strategies that help large-batch methods eliminate the generalization gap and conclude with a set of future research ideas and open questions.

[…] The lack of generalization ability is due to the fact that large-batch methods tend to converge to sharp minimizers of the training function. These minimizers are characterized by large positive eigenvalues in $\nabla^2 f(x)$ and tend to generalize less well. In contrast, small-batch methods converge to flat minimizers characterized by small positive eigenvalues of $\nabla^2 f(x)$. We have observed that the loss function landscape of deep neural networks is such that large-batch methods are almost invariably attracted to regions with sharp minima and that, unlike small batch methods, are unable to escape basins of these minimizers. […]

Also, some good insights from Ian Goodfellow, answering on Quora why one should not use the whole training set to compute the gradient:

The size of the learning rate is limited mostly by factors like how curved the cost function is. You can think of gradient descent as making a linear approximation to the cost function, then moving downhill along that approximate cost. If the cost function is highly non-linear (highly curved) then the approximation will not be very good for very far, so only small step sizes are safe. You can read more about this in Chapter 4 of the deep learning textbook, on numerical computation: http://www.deeplearningbook.org/contents/numerical.html

When you put m examples in a minibatch, you need to do O(m) computation and use O(m) memory, but you reduce the amount of uncertainty in the gradient by a factor of only O(sqrt(m)). In other words, there are diminishing marginal returns to putting more examples in the minibatch. You can read more about this in Chapter 8 of the deep learning textbook, on optimization algorithms for deep learning: http://www.deeplearningbook.org/contents/optimization.html

Also, if you think about it, even using the entire training set doesn't really give you the true gradient. The true gradient would be the expected gradient with the expectation taken over all possible examples, weighted by the data generating distribution. Using the entire training set is just using a very large minibatch size, where the size of your minibatch is limited by the amount you spend on data collection, rather than the amount you spend on computation.

Related: Batch gradient descent versus stochastic gradient descent
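To make the O(sqrt(m)) point concrete, here is a minimal base-R sketch (my own toy illustration, not code from the quoted sources): it estimates the gradient of a one-parameter least-squares loss from mini-batches of different sizes and shows that the spread of the estimate shrinks roughly like 1/sqrt(m), i.e. with diminishing returns.

set.seed(1)
n <- 100000
x <- rnorm(n)
y <- 2 * x + rnorm(n)        # toy regression data, true slope = 2
w <- 0                       # current parameter value
grad_batch <- function(idx) mean(-2 * x[idx] * (y[idx] - w * x[idx]))  # mini-batch gradient of the squared error

sizes <- c(1, 10, 100, 1000, 10000)
noise <- sapply(sizes, function(m) sd(replicate(500, grad_batch(sample(n, m)))))
round(rbind(batch_size = sizes, gradient_sd = noise, sd_x_sqrt_m = noise * sqrt(sizes)), 2)
# gradient_sd shrinks roughly like 1/sqrt(m); sd_x_sqrt_m stays roughly constant

Under these toy assumptions, quadrupling the batch size costs four times the computation per step but only halves the gradient noise, which is exactly the diminishing-returns argument above.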
265
What is the trade-off between batch size and number of iterations to train a neural network?
I assume you're talking about reducing the batch size in a mini-batch stochastic gradient descent algorithm and comparing that to larger batch sizes requiring fewer iterations. Andrew Ng provides a good discussion of this and some visuals in his online Coursera class on ML and neural networks. So the rest of this post is mostly a regurgitation of his teachings from that class.

Let's take the two extremes: on one side, each gradient descent step uses the entire dataset. You're computing the gradients for every sample. In this case you know exactly the best direction towards a local minimum. You don't waste time going the wrong direction. So in terms of the number of gradient descent steps, you'll get there in the fewest. Of course computing the gradient over the entire dataset is expensive.

So now we go to the other extreme: a batch size of just 1 sample. In this case the gradient of that sample may take you completely the wrong direction. But hey, the cost of computing the one gradient was quite trivial. As you take steps with regard to just one sample you "wander" around a bit, but on the average you head towards an equally reasonable local minimum as in full batch gradient descent. This might be a moment to point out that I have seen some literature suggesting that perhaps this bouncing around that 1-sample stochastic gradient descent does might help you bounce out of a local minimum that full batch mode wouldn't avoid, but that's debatable. Some other good answers here address this question more directly than I have.

In terms of computational power, while the single-sample stochastic GD process takes many many more iterations, you end up getting there for less cost than the full batch mode, "typically." This is how Andrew Ng puts it.

Now let's find the middle ground you asked about. We might realize that modern BLAS libraries make computing vector math quite efficient, so computing 10 or 100 samples at once, presuming you've vectorized your code properly, will be barely more work than computing 1 sample (you gain memory call efficiencies as well as computational tricks built into most efficient math libraries). And averaging over a batch of 10, 100, 1000 samples is going to produce a gradient that is a more reasonable approximation of the true, full batch-mode gradient. So our steps are now more accurate, meaning we need fewer of them to converge, and at a cost that is only marginally higher than single-sample GD.

Optimizing the exact size of the mini-batch you should use is generally left to trial and error. Run some tests on a sample of the dataset with numbers ranging from say tens to a few thousand and see which converges fastest, then go with that. Batch sizes in those ranges seem quite common across the literature. And if your data truly is IID, then the central limit theorem on variation of random processes would also suggest that those ranges are a reasonable approximation of the full gradient.

Deciding exactly when to stop iterating is typically done by monitoring your generalization error against a held-out validation set (one the model was not trained on) and choosing the point at which validation error is at its lowest. Training for too many iterations will eventually lead to overfitting, at which point your error on your validation set will start to climb. When you see this happening, back up and stop at the optimal point.
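As a rough illustration of the trial-and-error procedure described above, here is a toy sketch in base R (my own; it uses plain one-parameter regression instead of a neural network, and the learning rate, tolerance, and batch sizes are arbitrary): it counts how many parameter updates mini-batch SGD needs, for several batch sizes, before the training loss drops below a threshold.

set.seed(42)
n <- 10000
x <- rnorm(n)
y <- 3 * x + rnorm(n, sd = 0.5)                  # toy data, true slope = 3
loss <- function(w) mean((y - w * x)^2)          # full training loss

updates_needed <- function(m, lr = 0.05, tol = 0.3, max_updates = 20000) {
  w <- 0
  for (k in 1:max_updates) {
    idx <- sample(n, m)                          # draw one mini-batch
    g   <- mean(-2 * x[idx] * (y[idx] - w * x[idx]))
    w   <- w - lr * g
    if (loss(w) < tol) return(k)                 # number of updates to reach the target loss
  }
  NA
}

setNames(sapply(c(1, 32, 256, n), updates_needed),
         c("batch 1", "batch 32", "batch 256", "full batch"))
# larger batches need fewer (but individually more expensive) updates

This reproduces the qualitative trade-off in the answer: the full batch needs the fewest updates, a batch of 1 needs by far the most, and moderate mini-batches sit in between.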
266
What is the trade-off between batch size and number of iterations to train a neural network?
TL;DR: Too large a mini-batch size usually leads to a lower accuracy! For those interested, here's an explanation.

There are two notions of speed:

1. Computational speed
2. Speed of convergence of an algorithm

Computational speed is simply the speed of performing numerical calculations in hardware. As you said, it is usually higher with a larger mini-batch size. That's because linear algebra libraries use vectorization for vector and matrix operations to speed them up, at the expense of using more memory. Gains can be significant up to a point. From my experience, there is a point after which there are only marginal gains in speed, if any. The point depends on the data set, hardware, and the library that's used for numerical computations (under the hood).

But, let's not forget that there is also the other notion of speed, which tells us how quickly our algorithm converges. Firstly, what does it mean for our algorithm to converge? Well, it's up to us to define and decide when we are satisfied with an accuracy, or an error, that we get, calculated on the validation set. We can either define it in advance and wait for the algorithm to come to that point, or we can monitor the training process and decide to stop it when the validation error starts to rise significantly (the model starts to overfit the data set). We really shouldn't stop it right away, the first moment the error starts to rise, if we work with mini-batches, because we use Stochastic Gradient Descent, SGD. In case of (full batch) Gradient Descent, after each epoch, the algorithm will settle in a minimum, be it a local or the global one. SGD never really settles in a minimum. It keeps oscillating around it. It could go on indefinitely, but it doesn't matter much, because it's close to it anyway, so the chosen values of the parameters are okay, and lead to an error not far away from the one found at the minimum.

Now, after all that theory, there's a "catch" that we need to pay attention to. When using a smaller batch size, calculation of the error has more noise than when we use a larger batch size. One would say, well, that's bad, isn't it? The thing is, that noise can help the algorithm jump out of a bad local minimum and have more chance of finding either a better local minimum, or hopefully the global minimum. Thus, if we can find a better solution more quickly by using a smaller batch size instead of a larger one, just with the help of the "unwanted" noise, we can trade off between the total time it takes our algorithm to find a satisfactory solution and the accuracy it reaches.

What I want to say is, for a given accuracy (or error), a smaller batch size may lead to a shorter total training time, not longer, as many believe. Or, if we decide to keep the same training time as before, we might get a slightly higher accuracy with a smaller batch size, and we most probably will, especially if we have chosen our learning rate appropriately.

If you have time, check out this paper: Systematic evaluation of CNN advances on the ImageNet. Especially, check out "3.7. Batch size and learning rate", and Figure 8. You will see that large mini-batch sizes lead to a worse accuracy, even when the learning rate is adjusted heuristically. In general, a batch size of 32 is a good starting point, and you should also try 64, 128, and 256. Other values (lower or higher) may be fine for some data sets, but the given range is generally the best to start experimenting with. Though, under 32, it might get too slow because of significantly lower computational speed, because of not exploiting vectorization to the full extent. If you get an "out of memory" error, you should try reducing the mini-batch size anyway. So, it's not simply about using the largest possible mini-batch size that fits into memory.

To conclude, and answer your question, a smaller mini-batch size (not too small) usually leads not only to fewer iterations of the training algorithm than a large batch size, but also to a higher accuracy overall, i.e., a neural network that performs better, in the same amount of training time, or less. Don't forget that the higher noise can help it jump out of a bad local minimum, rather than leaving it stuck in it.
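A hedged sketch of the "monitor the validation error and stop" rule described above (my own base-R toy, again with a one-parameter regression rather than a network; the batch size, learning rate, patience, and improvement threshold are all made up for illustration): training stops once the validation loss has not improved for a few consecutive epochs.

set.seed(7)
n  <- 2000
x  <- rnorm(n)
y  <- sin(x) + rnorm(n, sd = 0.3)
tr <- sample(n, 1500)                    # training indices
va <- setdiff(seq_len(n), tr)            # held-out validation indices
val_loss <- function(w) mean((y[va] - w * x[va])^2)

w <- 0; lr <- 0.05; batch <- 32; patience <- 5
best <- Inf; bad <- 0
for (epoch in 1:200) {
  for (i in seq(1, length(tr), by = batch)) {          # one pass over the training data
    idx <- tr[i:min(i + batch - 1, length(tr))]
    g   <- mean(-2 * x[idx] * (y[idx] - w * x[idx]))
    w   <- w - lr * g
  }
  v <- val_loss(w)
  if (v < best - 1e-3) { best <- v; bad <- 0 } else bad <- bad + 1
  if (bad >= patience) {                                # validation error has stopped improving
    cat("stopping at epoch", epoch, "with validation loss", round(v, 3), "\n")
    break
  }
}

The patience counter plays the role of not stopping "the first moment the error starts to rise": a few noisy epochs without improvement are tolerated before training is halted.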
267
What is the trade-off between batch size and number of iterations to train a neural network?
I'm adding another answer to this question to reference a new (2018) ICLR conference paper from Google which almost directly addresses this question. Title: Don't Decay the Learning Rate, Increase the Batch Size https://arxiv.org/abs/1711.00489 The abstract from the above paper is copied here: It is common practice to decay the learning rate. Here we show one can usually obtain the same learning curve on both training and test sets by instead increasing the batch size during training. This procedure is successful for stochastic gradient descent (SGD), SGD with momentum, Nesterov momentum, and Adam. It reaches equivalent test accuracies after the same number of training epochs, but with fewer parameter updates, leading to greater parallelism and shorter training times. We can further reduce the number of parameter updates by increasing the learning rate ϵ and scaling the batch size B∝ϵ. Finally, one can increase the momentum coefficient m and scale B∝1/(1−m), although this tends to slightly reduce the test accuracy. Crucially, our techniques allow us to repurpose existing training schedules for large batch training with no hyper-parameter tuning. We train ResNet-50 on ImageNet to 76.1% validation accuracy in under 30 minutes.
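A sketch of the kind of schedule the paper describes (this is not the authors' code; the epochs, factor, and sizes below are invented for illustration): wherever a step schedule would divide the learning rate by a factor, the batch size is multiplied by that factor instead, so the ratio of learning rate to batch size decays in the same way.

n_train    <- 50000
base_lr    <- 0.1
base_batch <- 128
decay_at   <- c(30, 60, 80)      # epochs at which a step schedule would act
fct        <- 5                  # decay / growth factor

sched <- data.frame(epoch = 1:90)
k <- findInterval(sched$epoch, decay_at)                       # decay points reached so far
sched$classic_lr    <- base_lr / fct^k                         # usual schedule: decay the learning rate
sched$classic_batch <- base_batch
sched$paper_lr      <- base_lr                                 # paper's idea: keep the learning rate...
sched$paper_batch   <- pmin(base_batch * fct^k, n_train)       # ...and grow the batch size instead
sched[c(1, 29, 30, 59, 60, 79, 80, 90), ]
# in both columns the SGD noise scale ~ lr * n_train / batch decays identically,
# but the batch-size schedule performs far fewer parameter updates in the late epochs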
268
What is the trade-off between batch size and number of iterations to train a neural network?
Some empirical experience: I ran an experiment with batch size 4 and batch size 4096. Over the same number of epochs, batch size 4096 performs 1024x fewer backpropagation updates. So my intuition is that larger batches do fewer and coarser search steps for the optimal solution, and so by construction will be less likely to converge on the optimal solution.
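For concreteness, the 1024x factor is just the ratio of updates per epoch, ceiling(N / batch size); the training-set size below is a hypothetical stand-in.

N <- 1048576                                       # hypothetical training-set size
updates_per_epoch <- function(batch) ceiling(N / batch)
c(batch_4 = updates_per_epoch(4), batch_4096 = updates_per_epoch(4096),
  ratio = updates_per_epoch(4) / updates_per_epoch(4096))   # ratio = 1024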
269
Is normality testing 'essentially useless'?
It's not an argument. It is a (a bit strongly stated) fact that formal normality tests always reject on the huge sample sizes we work with today. It's even easy to prove that when n gets large, even the smallest deviation from perfect normality will lead to a significant result. And as every dataset has some degree of randomness, no single dataset will be a perfectly normally distributed sample. But in applied statistics the question is not whether the data/residuals ... are perfectly normal, but normal enough for the assumptions to hold.

Let me illustrate with the Shapiro-Wilk test. The code below constructs a set of distributions that approach normality but aren't completely normal. Next, we test with shapiro.test whether a sample from these almost-normal distributions deviates from normality. In R:

x <- replicate(100, {   # generates 100 different tests on each distribution
  c(shapiro.test(rnorm(10) + c(1, 0, 2, 0, 1))$p.value,
    shapiro.test(rnorm(100) + c(1, 0, 2, 0, 1))$p.value,
    shapiro.test(rnorm(1000) + c(1, 0, 2, 0, 1))$p.value,
    shapiro.test(rnorm(5000) + c(1, 0, 2, 0, 1))$p.value)
})                      # rnorm gives a random draw from the normal distribution
rownames(x) <- c("n10", "n100", "n1000", "n5000")
rowMeans(x < 0.05)      # the proportion of significant deviations

  n10  n100 n1000 n5000
 0.04  0.04  0.20  0.87

The last line checks which fraction of the simulations for every sample size deviates significantly from normality. So in 87% of the cases, a sample of 5000 observations deviates significantly from normality according to Shapiro-Wilk. Yet, if you look at the QQ-plots, you would never ever decide on a deviation from normality. As an example (the plots themselves are not reproduced here), the QQ-plots for one such set of random samples had p-values

  n10  n100 n1000 n5000
0.760 0.681 0.164 0.007
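Since the QQ-plots themselves are not shown above, here is a short companion sketch (my own addition, in the same spirit as the answer's code) that draws them for one set of samples, so you can judge visually how innocuous the "significant" deviations are.

set.seed(100)
op <- par(mfrow = c(2, 2))
for (n in c(10, 100, 1000, 5000)) {
  s <- rnorm(n) + c(1, 0, 2, 0, 1)       # the same almost-normal construction as above
  qqnorm(s, main = paste0("n = ", n, ",  Shapiro-Wilk p = ",
                          signif(shapiro.test(s)$p.value, 2)))
  qqline(s)
}
par(op)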
270
Is normality testing 'essentially useless'?
When thinking about whether normality testing is 'essentially useless', one first has to think about what it is supposed to be useful for. Many people (well... at least, many scientists) misunderstand the question the normality test answers. The question normality tests answer: Is there convincing evidence of any deviation from the Gaussian ideal? With moderately large real data sets, the answer is almost always yes. The question scientists often expect the normality test to answer: Do the data deviate enough from the Gaussian ideal to "forbid" use of a test that assumes a Gaussian distribution? Scientists often want the normality test to be the referee that decides when to abandon conventional (ANOVA, etc.) tests and instead analyze transformed data or use a rank-based nonparametric test or a resampling or bootstrap approach. For this purpose, normality tests are not very useful.
271
Is normality testing 'essentially useless'?
I think that tests for normality can be useful as companions to graphical examinations. They have to be used in the right way, though. In my opinion, this means that many popular tests, such as the Shapiro-Wilk, Anderson-Darling and Jarque-Bera tests never should be used. Before I explain my standpoint, let me make a few remarks:

- In an interesting recent paper Rochon et al. studied the impact of the Shapiro-Wilk test on the two-sample t-test. The two-step procedure of testing for normality before carrying out for instance a t-test is not without problems. Then again, neither is the two-step procedure of graphically investigating normality before carrying out a t-test. The difference is that the impact of the latter is much more difficult to investigate (as it would require a statistician to graphically investigate normality $100,000$ or so times...).
- It is useful to quantify non-normality, for instance by computing the sample skewness, even if you don't want to perform a formal test.
- Multivariate normality can be difficult to assess graphically and convergence to asymptotic distributions can be slow for multivariate statistics. Tests for normality are therefore more useful in a multivariate setting.
- Tests for normality are perhaps especially useful for practitioners who use statistics as a set of black-box methods. When normality is rejected, the practitioner should be alarmed and, rather than carrying out a standard procedure based on the assumption of normality, consider using a nonparametric procedure, applying a transformation or consulting a more experienced statistician.
- As has been pointed out by others, if $n$ is large enough, the CLT usually saves the day. However, what is "large enough" differs for different classes of distributions.

(In my definition) a test for normality is directed against a class of alternatives if it is sensitive to alternatives from that class, but not sensitive to alternatives from other classes. Typical examples are tests that are directed towards skew or kurtotic alternatives. The simplest examples use the sample skewness and kurtosis as test statistics. Directed tests of normality are arguably often preferable to omnibus tests (such as the Shapiro-Wilk and Jarque-Bera tests) since it is common that only some types of non-normality are of concern for a particular inferential procedure.

Let's consider Student's t-test as an example. Assume that we have an i.i.d. sample from a distribution with skewness $\gamma=\frac{E(X-\mu)^3}{\sigma^3}$ and (excess) kurtosis $\kappa=\frac{E(X-\mu)^4}{\sigma^4}-3.$ If $X$ is symmetric about its mean, $\gamma=0$. Both $\gamma$ and $\kappa$ are 0 for the normal distribution. Under regularity assumptions, we obtain the following asymptotic expansion for the cdf of the test statistic $T_n$: $$P(T_n\leq x)=\Phi(x)+n^{-1/2}\frac{1}{6}\gamma(2x^2+1)\phi(x)-n^{-1}x\Big(\frac{1}{12}\kappa (x^2-3)-\frac{1}{18}\gamma^2(x^4+2x^2-3)-\frac{1}{4}(x^2+3)\Big)\phi(x)+o(n^{-1}),$$ where $\Phi(\cdot)$ is the cdf and $\phi(\cdot)$ is the pdf of the standard normal distribution. $\gamma$ appears for the first time in the $n^{-1/2}$ term, whereas $\kappa$ appears in the $n^{-1}$ term. The asymptotic performance of $T_n$ is much more sensitive to deviations from normality in the form of skewness than in the form of kurtosis. It can be verified using simulations that this is true for small $n$ as well.
Thus Student's t-test is sensitive to skewness but relatively robust against heavy tails, and it is reasonable to use a test for normality that is directed towards skew alternatives before applying the t-test. As a rule of thumb (not a law of nature), inference about means is sensitive to skewness and inference about variances is sensitive to kurtosis. Using a directed test for normality has the benefit of getting higher power against ''dangerous'' alternatives and lower power against alternatives that are less ''dangerous'', meaning that we are less likely to reject normality because of deviations from normality that won't affect the performance of our inferential procedure. The non-normality is quantified in a way that is relevant to the problem at hand. This is not always easy to do graphically. As $n$ gets larger, skewness and kurtosis become less important - and directed tests are likely to detect if these quantities deviate from 0 even by a small amount. In such cases, it seems reasonable to, for instance, test whether $|\gamma|\leq 1$ or (looking at the first term of the expansion above) $$|n^{-1/2}\frac{1}{6}\gamma(2z_{\alpha/2}^2+1)\phi(z_{\alpha/2})|\leq 0.01$$ rather than whether $\gamma=0$. This takes care of some of the problems that we otherwise face as $n$ gets larger.
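A small sketch of the directed check described above (my own R code; the plug-in skewness estimator and the 0.05/0.01 defaults are illustrative, while the first-order term and the 0.01 cutoff follow the expansion quoted in the answer): it estimates the sample skewness and evaluates the leading error term of the expansion at $z_{\alpha/2}$ to judge whether the skewness is small enough to matter for a t-test.

skewness <- function(x) mean((x - mean(x))^3) / sd(x)^3     # simple plug-in estimator

directed_check <- function(x, alpha = 0.05, cutoff = 0.01) {
  n <- length(x)
  g <- skewness(x)
  z <- qnorm(1 - alpha / 2)                                     # z_{alpha/2}
  term <- abs(n^(-1/2) * (1/6) * g * (2 * z^2 + 1) * dnorm(z))  # first-order term of the expansion
  c(skewness = g, first_order_term = term, below_cutoff = term <= cutoff)
}

set.seed(1)
directed_check(rexp(50))      # strongly skewed small sample: the term exceeds the cutoff
directed_check(rnorm(5000))   # large normal sample: the term is negligible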
272
Is normality testing 'essentially useless'?
IMHO normality tests are absolutely useless for the following reasons:

1. On small samples, there's a good chance that the true distribution of the population is substantially non-normal, but the normality test isn't powerful enough to pick it up.

2. On large samples, things like the t-test and ANOVA are pretty robust to non-normality.

3. The whole idea of a normally distributed population is just a convenient mathematical approximation anyhow. None of the quantities typically dealt with statistically could plausibly have distributions with a support of all real numbers. For example, people can't have a negative height. Something can't have negative mass or more mass than there is in the universe. Therefore, it's safe to say that nothing is exactly normally distributed in the real world.
273
Is normality testing 'essentially useless'?
I think that pre-testing for normality (which includes informal assessments using graphics) misses the point.

1. Users of this approach assume that the normality assessment has in effect a power near 1.0.
2. Nonparametric tests such as the Wilcoxon, Spearman, and Kruskal-Wallis have efficiency of 0.95 if normality holds.
3. In view of 2. one can pre-specify the use of a nonparametric test if one even entertains the possibility that the data may not arise from a normal distribution.
4. Ordinal cumulative probability models (the proportional odds model being a member of this class) generalize standard nonparametric tests. Ordinal models are completely transformation-invariant with respect to $Y$, are robust, powerful, and allow estimation of quantiles and mean of $Y$.
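A small simulation sketch of point 2 (my own; the sample size of 30 per group and the shift of 0.7 SD are arbitrary): when normality actually holds, the Wilcoxon rank-sum test gives up very little power relative to the t-test, which is what the roughly 0.95 asymptotic relative efficiency implies.

set.seed(2)
sim <- replicate(5000, {
  x <- rnorm(30)
  y <- rnorm(30, mean = 0.7)                       # normal data with a modest shift
  c(t_test   = t.test(x, y)$p.value < 0.05,
    wilcoxon = wilcox.test(x, y)$p.value < 0.05)
})
rowMeans(sim)    # empirical power; the Wilcoxon trails the t-test only slightly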
274
Is normality testing 'essentially useless'?
Before asking whether a test or any sort of rough check for normality is "useful" you have to answer the question behind the question: "Why are you asking?" For example, if you only want to put a confidence limit around the mean of a set of data, departures from normality may or may not be important, depending on how much data you have and how big the departures are. However, departures from normality are apt to be crucial if you want to predict what the most extreme value will be in future observations or in the population you have sampled from.
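A toy sketch of that distinction (my own example; the lognormal population and the 99.9th percentile are arbitrary choices): a normal approximation fitted to skewed data still gives a roughly sensible interval for the mean, but badly underestimates an extreme quantile.

set.seed(4)
x <- rlnorm(10000)                                   # clearly skewed data

# Inference about the mean: the normal-theory interval is still roughly sensible (CLT)
t.test(x)$conf.int                                   # compare with the true mean exp(0.5) ~ 1.65

# Predicting an extreme value: the normal approximation goes badly wrong
qnorm(0.999, mean = mean(x), sd = sd(x))             # "99.9th percentile" if we pretend normality
quantile(x, 0.999)                                   # what the sample actually says
qlnorm(0.999)                                        # the true 99.9th percentile, about 22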
275
Is normality testing 'essentially useless'?
Let me add one small thing: performing a normality test without taking its alpha error into account increases your overall probability of committing an alpha (Type I) error. You should never forget that each additional test does this as long as you don't control for alpha-error accumulation. Hence, another good reason to dismiss normality testing.
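To put a number on this (simple arithmetic, assuming independent tests each run at alpha = 0.05): the chance of at least one false rejection grows quickly with the number of tests performed.

alpha <- 0.05
k <- 1:5                                    # number of independent tests performed
round(rbind(tests = k, familywise_alpha = 1 - (1 - alpha)^k), 3)
# a normality pre-test plus one t-test already pushes 0.05 to about 0.098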
276
Is normality testing 'essentially useless'?
For what it's worth, I once developed a fast sampler for the truncated normal distribution, and normality testing (KS) was very useful in debugging the function. This sampler passes the test with huge sample sizes but, interestingly, the GSL's ziggurat sampler didn't.
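In the same spirit, here is a hedged sketch (my own code, not the sampler mentioned in the answer) of using a KS test to sanity-check a sampler: a naive rejection sampler for a normal distribution truncated to [a, b], tested against the corresponding theoretical CDF.

rtrunc_naive <- function(n, a = 0, b = 2) {          # naive rejection sampler, fine for testing
  out <- numeric(0)
  while (length(out) < n) {
    z <- rnorm(2 * n)
    out <- c(out, z[z >= a & z <= b])
  }
  out[seq_len(n)]
}
ptrunc <- function(q, a = 0, b = 2)                   # CDF of the normal truncated to [a, b]
  (pnorm(pmin(pmax(q, a), b)) - pnorm(a)) / (pnorm(b) - pnorm(a))

set.seed(3)
ks.test(rtrunc_naive(1e5), ptrunc)

Note that a correct sampler will still "fail" at the 0.05 level about 5% of the time, so for debugging one looks for p-values that are consistently tiny rather than for a single rejection.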
277
Is normality testing 'essentially useless'?
I used to think that tests of normality were completely useless. However, now I do consulting for other researchers. Often, obtaining samples is extremely expensive, and so they will want to do inference with n = 8, say. In such a case, it is very difficult to find statistical significance with non-parametric tests, but t-tests with n = 8 are sensitive to deviations from normality. So what we get is that we can say "well, conditional on the assumption of normality, we find a statistically significant difference" (don't worry, these are usually pilot studies...). Then we need some way of evaluating that assumption. I'm half-way in the camp that looking at plots is a better way to go, but truth be told there can be a lot of disagreement about that, which can be very problematic if one of the people who disagrees with you is the reviewer of your manuscript. In many ways, I still think there are plenty of flaws in tests of normality: for example, we should be thinking about the type II error more than the type I. But there is a need for them.
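A small simulation sketch of the tension described above (my own; n = 8 and a one-standard-deviation shift are arbitrary, and the comparison itself assumes the data really are normal, which is exactly the assumption that then has to be defended somehow): at such a small n the nonparametric alternative gives up some power relative to the t-test, partly because its p-values are discrete.

set.seed(5)
res <- replicate(5000, {
  x <- rnorm(8, mean = 1)                        # true shift of one standard deviation
  c(t_test      = t.test(x)$p.value < 0.05,
    signed_rank = wilcox.test(x)$p.value < 0.05)
})
rowMeans(res)   # empirical power at n = 8; the signed-rank test trails the t-test here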
278
Is normality testing 'essentially useless'?
Answers here have already addressed several important points. To quickly summarize:

1. There is no consistent test that can determine whether a set of data truly follow a distribution or not.
2. Tests are no substitute for visually inspecting the data and models to identify high leverage, high influence observations and commenting on their effects on models.
3. The assumptions for many regression routines are often misquoted as requiring normally distributed "data" [residuals], and this is interpreted by novice statisticians as requiring that the analyst formally evaluate this in some sense before proceeding with analyses.

I am adding an answer firstly to cite one of my, personally, most frequently accessed and read statistical articles: "The Importance of Normality Assumptions in Large Public Health Datasets" by Lumley et al. It is worth reading in its entirety. The summary states:

The t-test and least-squares linear regression do not require any assumption of Normal distribution in sufficiently large samples. Previous simulation studies show that "sufficiently large" is often under 100, and even for our extremely non-Normal medical cost data it is less than 500. This means that in public health research, where samples are often substantially larger than this, the t-test and the linear model are useful default tools for analyzing differences and trends in many types of data, not just those with Normal distributions. Formal statistical tests for Normality are especially undesirable as they will have low power in the small samples where the distribution matters and high power only in large samples where the distribution is unimportant.

While the large-sample properties of linear regression are well understood, there has been little research into the sample sizes needed for the Normality assumption to be unimportant. In particular, it is not clear how the necessary sample size depends on the number of predictors in the model.

The focus on Normal distributions can distract from the real assumptions of these methods. Linear regression does assume that the variance of the outcome variable is approximately constant, but the primary restriction on both methods is that they assume that it is sufficient to examine changes in the mean of the outcome variable. If some other summary of the distribution is of greater interest, then the t-test and linear regression may not be appropriate.

To summarize: normality is generally not worth the discussion or the attention it receives in contrast to the importance of answering a particular scientific question. If the desire is to summarize mean differences in data, then the t-test and ANOVA or linear regression are justified in a much broader sense. Tests based on these models retain the correct alpha level, even when distributional assumptions are not met, although power may be adversely affected.

The reasons why normal distributions may receive the attention they do may be for classical reasons, where exact tests based on F-distributions for ANOVAs and Student's t-distributions for the t-test could be obtained. The truth is, among the many modern advancements of science, we generally deal with larger datasets than were collected previously. If one is in fact dealing with a small dataset, the rationale that those data are normally distributed cannot come from those data themselves: there is simply not enough power.
Remarking on other research, replications, or even the biology or science of the measurement process is, in my opinion, a much more justified approach to discussing a possible probability model underlying the observed data. For this reason, opting for a rank-based test as an alternative misses the point entirely. However, I will agree that using robust variance estimators like the jackknife or bootstrap offer important computational alternatives that permit conducting tests under a variety of more important violations of model specification, such as independence or identical distribution of those errors.
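To make the power pattern quoted above concrete, here is a minimal simulation sketch (assuming numpy and scipy are available; the exponential alternative and the exact rejection rates are purely illustrative): it estimates how often the Shapiro-Wilk test rejects normality at several sample sizes.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def rejection_rate(n, n_sims=2000, alpha=0.05):
    # Proportion of simulated skewed (exponential) samples of size n for which
    # the Shapiro-Wilk test rejects normality at the given alpha level.
    rejections = 0
    for _ in range(n_sims):
        x = rng.exponential(size=n)
        _, p = stats.shapiro(x)
        rejections += (p < alpha)
    return rejections / n_sims

for n in (10, 25, 100, 1000):
    print(n, rejection_rate(n))
# Small samples (where the distribution matters most) give low rejection rates;
# large samples (where the CLT already protects the t-test) give rates near 1.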
279
Is normality testing 'essentially useless'?
I think the first 2 questions have been thoroughly answered, but I don't think question 3 was addressed. Many tests compare the empirical distribution to a known hypothesized distribution. The critical value for the Kolmogorov-Smirnov test is based on the hypothesized distribution F being completely specified. The test can be modified to test against a parametric distribution whose parameters are estimated from the data. So if "fuzzier" means estimating more than two parameters, then the answer to the question is yes. These tests can be applied to families with three or more parameters. Some tests are designed to have better power when testing against a specific family of distributions. For example, when testing normality, the Anderson-Darling and Shapiro-Wilk tests have greater power than the K-S or chi-square tests when the null hypothesized distribution is normal. Lilliefors devised a test that is preferred for exponential distributions.
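As a concrete illustration of the "completely specified F" point, here is a sketch (assuming scipy and statsmodels are installed): plugging parameters estimated from the same sample into a plain Kolmogorov-Smirnov test uses the wrong critical values, whereas the Lilliefors variant corrects for the estimation.

import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import lilliefors  # assumed available via statsmodels

rng = np.random.default_rng(1)
x = rng.normal(loc=5, scale=2, size=50)

# Naive: K-S against a normal whose mean/sd were estimated from the same sample.
# The standard K-S critical values assume F is fully specified in advance,
# so this p-value is biased upward (the test under-rejects).
ks_stat, ks_p = stats.kstest(x, 'norm', args=(x.mean(), x.std(ddof=1)))

# Lilliefors' modification uses critical values derived for the estimated-parameter case.
lf_stat, lf_p = lilliefors(x, dist='norm')

print(ks_p, lf_p)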
280
Is normality testing 'essentially useless'?
The argument you gave is an opinion. I think that the importance of normality testing is to make sure that the data do not depart severely from the normal. I use it sometimes to decide between using a parametric versus a nonparametric test for my inference procedure. I think the test can be useful in moderate and large samples (when the central limit theorem does not come into play). I tend to use the Shapiro-Wilk or Anderson-Darling tests, but running SAS I get them all, and they generally agree pretty well. On a different note, I think that graphical procedures such as Q-Q plots work equally well. The advantage of a formal test is that it is objective. In small samples it is true that these goodness-of-fit tests have practically no power, and that makes intuitive sense, because a small sample from a normal distribution might by chance look rather non-normal, and that is accounted for in the test. Also, the high skewness and kurtosis that distinguish many non-normal distributions from the normal are not easily seen in small samples.
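For what it's worth, here is a minimal sketch of the workflow described above — running Shapiro-Wilk and Anderson-Darling side by side and backing them up with a normal Q-Q plot (assumes numpy, scipy and matplotlib; the lognormal sample is just an arbitrary skewed example):

import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
x = rng.lognormal(size=30)  # an arbitrary small, skewed sample

sw_stat, sw_p = stats.shapiro(x)        # Shapiro-Wilk: returns a p-value
ad = stats.anderson(x, dist='norm')     # Anderson-Darling: statistic plus critical values
print(sw_p, ad.statistic, ad.critical_values, ad.significance_level)

# The graphical counterpart: a normal Q-Q plot of the same sample.
stats.probplot(x, dist='norm', plot=plt)
plt.show()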
281
Is normality testing 'essentially useless'?
I think a maximum entropy approach could be useful here. We can assign a normal distribution because we believe the data are "normally distributed" (whatever that means) or because we only expect to see deviations of about the same magnitude. Also, because the normal distribution has just two sufficient statistics, it is insensitive to changes in the data which do not alter these quantities. So in a sense you can think of a normal distribution as an "average" over all possible distributions with the same first and second moments. This provides one reason why least squares should work as well as it does.
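The standard result behind this remark (not specific to this answer, but worth stating) is that the normal density maximizes differential entropy among all densities with a given mean and variance:
$$h(f) = -\int_{-\infty}^{\infty} f(x)\ln f(x)\,dx \;\le\; \tfrac{1}{2}\ln\!\left(2\pi e \sigma^2\right) = h\!\left(\mathcal{N}(\mu,\sigma^2)\right),$$
for every density $f$ with mean $\mu$ and variance $\sigma^2$, with equality if and only if $f$ is the $\mathcal{N}(\mu,\sigma^2)$ density. In this sense the normal is the least informative choice consistent with the first two moments.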
282
Is normality testing 'essentially useless'?
I wouldn't say it is useless, but it really depends on the application. Note that you never really know the distribution the data are coming from; all you have is a small set of realizations. Your sample mean is always finite, but the population mean could be undefined or infinite for some types of probability distributions. Consider the three kinds of Lévy stable distributions with closed-form densities, i.e. the normal, Lévy, and Cauchy distributions. Most samples do not have many observations in the tails (i.e. far away from the sample mean), so empirically it is very hard to distinguish between the three, and the Cauchy (which has an undefined mean) and the Lévy (which has an infinite mean) could easily masquerade as a normal distribution.
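A small sketch of the "finite sample mean, ill-behaved population mean" point (assuming a recent scipy; scipy.stats provides norm, cauchy and levy): the running sample mean of normal draws settles down, while the Cauchy's keeps jumping and the Lévy's keeps growing.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 100_000

draws = {
    'normal': stats.norm.rvs(size=n, random_state=rng),
    'cauchy': stats.cauchy.rvs(size=n, random_state=rng),  # population mean undefined
    'levy':   stats.levy.rvs(size=n, random_state=rng),    # population mean infinite
}

idx = np.arange(1, n + 1)
for name, x in draws.items():
    running_mean = np.cumsum(x) / idx
    # Inspect the running mean at a few sample sizes: only the normal stabilizes.
    print(name, running_mean[[99, 9_999, n - 1]])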
283
Is normality testing 'essentially useless'?
Tests where "something" important to the analysis is supported by high p-values are, I think, wrongheaded. As others have pointed out, for large data sets a p-value below 0.05 is all but assured. So the test essentially "rewards" small and fuzzy data sets and "rewards" a lack of evidence. Something like a Q-Q plot is much more useful. The desire for a hard yes/no number to decide such things (normal/not normal) misses the fact that modeling is partly an art, and misses how hypotheses are actually supported.
284
Why is Euclidean distance not a good metric in high dimensions?
A great summary of non-intuitive results in higher dimensions comes from "A Few Useful Things to Know about Machine Learning" by Pedro Domingos at the University of Washington: [O]ur intuitions, which come from a three-dimensional world, often do not apply in high-dimensional ones. In high dimensions, most of the mass of a multivariate Gaussian distribution is not near the mean, but in an increasingly distant “shell” around it; and most of the volume of a high-dimensional orange is in the skin, not the pulp. If a constant number of examples is distributed uniformly in a high-dimensional hypercube, beyond some dimensionality most examples are closer to a face of the hypercube than to their nearest neighbor. And if we approximate a hypersphere by inscribing it in a hypercube, in high dimensions almost all the volume of the hypercube is outside the hypersphere. This is bad news for machine learning, where shapes of one type are often approximated by shapes of another. The article is also full of many additional pearls of wisdom for machine learning. Another application, beyond machine learning, is nearest neighbor search: given an observation of interest, find its nearest neighbors (in the sense that these are the points with the smallest distance from the query point). But in high dimensions, a curious phenomenon arises: the ratio between the nearest and farthest points approaches 1, i.e. the points essentially become uniformly distant from each other. This phenomenon can be observed for a wide variety of distance metrics, but it is more pronounced for the Euclidean metric than for, say, the Manhattan distance metric. The premise of nearest neighbor search is that "closer" points are more relevant than "farther" points, but if all points are essentially uniformly distant from each other, the distinction is meaningless. From Charu C. Aggarwal, Alexander Hinneburg, Daniel A. Keim, "On the Surprising Behavior of Distance Metrics in High Dimensional Space": It has been argued in [Kevin Beyer, Jonathan Goldstein, Raghu Ramakrishnan, Uri Shaft, "When Is 'Nearest Neighbor' Meaningful?"] that under certain reasonable assumptions on the data distribution, the ratio of the distances of the nearest and farthest neighbors to a given target in high dimensional space is almost 1 for a wide variety of data distributions and distance functions. In such a case, the nearest neighbor problem becomes ill defined, since the contrast between the distances to different data points does not exist. In such cases, even the concept of proximity may not be meaningful from a qualitative perspective: a problem which is even more fundamental than the performance degradation of high dimensional algorithms. ... Many high-dimensional indexing structures and algorithms use the [E]uclidean distance metric as a natural extension of its traditional use in two- or three-dimensional spatial applications. ... In this paper we provide some surprising theoretical and experimental results in analyzing the dependency of the $L_k$ norm on the value of $k$. More specifically, we show that the relative contrasts of the distances to a query point depend heavily on the $L_k$ metric used. This provides considerable evidence that the meaningfulness of the $L_k$ norm worsens faster with increasing dimensionality for higher values of $k$. Thus, for a given problem with a fixed (high) value for the dimensionality $d$, it may be preferable to use lower values of $k$. 
This means that the $L_1$ distance metric (Manhattan distance metric) is the most preferable for high dimensional applications, followed by the Euclidean metric ($L_2$). ... The authors of the "Surprising Behavior" paper then propose using $L_k$ norms with $k<1$. They produce some results which demonstrate that these "fractional norms" exhibit the property of increasing the contrast between farthest and nearest points. However, later research has concluded against fractional norms. See: "Fractional norms and quasinorms do not help to overcome the curse of dimensionality" by Mirkes, Allohibi, & Gorban (2020). (Thanks to michen00 for the comment and helpful citation.)
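A quick way to see the relative-contrast effect yourself (a sketch, assuming numpy; "contrast" here is the (farthest − nearest)/nearest ratio of distances from a random query point to uniform random data, in the spirit of the Beyer et al. setup):

import numpy as np

rng = np.random.default_rng(4)

def relative_contrast(dim, n_points=500, p=2.0):
    # (farthest - nearest) / nearest distance from a random query to uniform data,
    # under the L_p norm.
    data = rng.random((n_points, dim))
    query = rng.random(dim)
    d = np.sum(np.abs(data - query) ** p, axis=1) ** (1.0 / p)
    return (d.max() - d.min()) / d.min()

for dim in (2, 10, 100, 1000):
    print(dim, relative_contrast(dim, p=1.0), relative_contrast(dim, p=2.0))
# The contrast shrinks as the dimension grows; it is typically smaller for L2 than for L1.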
285
Why is Euclidean distance not a good metric in high dimensions?
The notion of Euclidean distance, which works well in the two-dimensional and three-dimensional worlds studied by Euclid, has some properties in higher dimensions that are contrary to our (maybe just my) geometric intuition, which is also an extrapolation from two and three dimensions. Consider a $4\times 4$ square with vertices at $(\pm 2, \pm 2)$. Draw four unit-radius circles centered at $(\pm 1, \pm 1)$. These "fill" the square, with each circle touching the sides of the square at two points, and each circle touching its two neighbors. For example, the circle centered at $(1,1)$ touches the sides of the square at $(2,1)$ and $(1,2)$, and its neighboring circles at $(1,0)$ and $(0,1)$. Next, draw a small circle centered at the origin that touches all four circles. Since the line segment whose endpoints are the centers of two osculating circles passes through the point of osculation, it is easily verified that the small circle has radius $r_{2} = \sqrt{2}-1$ and that it touches the four larger circles at $(\pm r_2/\sqrt{2}, \pm r_2/\sqrt{2})$. Note that the small circle is "completely surrounded" by the four larger circles and thus is also completely inside the square. Note also that the point $(r_2,0)$ lies on the small circle. Notice also that from the origin, one cannot "see" the point $(2,0)$ on the edge of the square because the line of sight passes through the point of osculation $(1,0)$ of the two circles centered at $(1,1)$ and $(1,-1)$. Ditto for the lines of sight to the other points where the axes pass through the edges of the square. Next, consider a $4\times 4 \times 4$ cube with vertices at $(\pm 2, \pm 2, \pm 2)$. We fill it with $8$ osculating unit-radius spheres centered at $(\pm 1, \pm 1, \pm 1)$, and then put a smaller osculating sphere centered at the origin. Note that the small sphere has radius $r_3 = \sqrt{3}-1 < 1$ and the point $(r_3,0,0)$ lies on the surface of the small sphere. But notice also that in three dimensions, one can "see" the point $(2,0,0)$ from the origin; there are no bigger spheres blocking the view as happens in two dimensions. These clear lines of sight from the origin to the points where the axes pass through the surface of the cube occur in all larger dimensions as well. Generalizing, we can consider an $n$-dimensional hypercube of side $4$ and fill it with $2^n$ osculating unit-radius hyperspheres centered at $(\pm 1, \pm 1, \ldots, \pm 1)$ and then put a "smaller" osculating sphere of radius $$r_n = \sqrt{n}-1\tag{1}$$ at the origin. The point $(r_n,0,0, \ldots, 0)$ lies on this "smaller" sphere. But, notice from $(1)$ that when $n = 4$, $r_n = 1$ and so the "smaller" sphere has unit radius and thus really does not deserve the soubriquet of "smaller" for $n\geq 4$. Indeed, it would be better if we called it the "larger sphere" or just "central sphere". As noted in the last paragraph, there is a clear line of sight from the origin to the points where the axes pass through the surface of the hypercube. Worse yet, when $n > 9$, we have from $(1)$ that $r_n >2$, and thus the point $(r_n, 0, 0, \ldots, 0)$ on the central sphere lies outside the hypercube of side $4$ even though it is "completely surrounded" by the unit-radius hyperspheres that "fill" the hypercube (in the sense of packing it). The central sphere "bulges" outside the hypercube in high-dimensional space. 
I find this very counter-intuitive because my mental translations of the notion of Euclidean distance to higher dimensions, using the geometric intuition that I have developed from the 2-space and 3-space that I am familiar with, do not describe the reality of high-dimensional space. My answer to the OP's question "Besides, what is 'high dimensions'?" is $n \geq 9$.
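As a quick numerical check of equation $(1)$ and the thresholds mentioned above (plain Python, standard library only):

import math

# r_n = sqrt(n) - 1: radius of the central sphere inscribed among the 2^n unit hyperspheres.
for n in (2, 3, 4, 9, 10, 16):
    r = math.sqrt(n) - 1
    print(n, round(r, 3), r >= 1, r > 2)
# n = 4 is where the central sphere reaches unit radius; for n > 9 it pokes outside the hypercube.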
286
Why is Euclidean distance not a good metric in high dimensions?
It is a matter of signal-to-noise. Euclidean distance, due to the squared terms, is particularly sensitive to noise; but even Manhattan distance and "fractional" (non-metric) distances suffer. I found the studies in this article very enlightening: Zimek, A., Schubert, E. and Kriegel, H.-P. (2012), A survey on unsupervised outlier detection in high-dimensional numerical data. Statistical Analysis and Data Mining, 5: 363–387. doi: 10.1002/sam.11161 It revisits the observations made in e.g. On the Surprising Behavior of Distance Metrics in High Dimensional Space by Aggarwal, Hinneburg and Keim mentioned by @Pat. But it also shows how such synthetic experiments can be misleading, and that in fact high-dimensional data can become easier if you have a lot of (redundant) signal and the new dimensions add little noise. The last claim is probably most obvious when considering duplicate dimensions. Mapping your data set $x,y \rightarrow x,y,x,y,x,y,x,y,...,x,y$ increases the representative dimensionality, but does not at all make Euclidean distance fail. (See also: intrinsic dimensionality) So in the end, it still depends on your data. If you have a lot of useless attributes, Euclidean distance will become useless. If you could easily embed your data in a low-dimensional data space, then Euclidean distance should also work in the full-dimensional space. In particular for sparse data, such as TF vectors from text, this does appear to be the case: the data are of much lower intrinsic dimensionality than the vector space model suggests. Some people believe that cosine distance is better than Euclidean on high-dimensional data. I do not think so: cosine distance and Euclidean distance are closely related, so we must expect them to suffer from the same problems. However, textual data where cosine is popular is usually sparse, and cosine is faster on data that is sparse - so for sparse data, there are good reasons to use cosine; and because the data is sparse, the intrinsic dimensionality is much, much less than the vector space dimension. See also this reply I gave to an earlier question: https://stats.stackexchange.com/a/29647/7828
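The duplicate-dimension claim is easy to verify numerically (a sketch, assuming numpy): repeating every coordinate k times multiplies every pairwise Euclidean distance by sqrt(k), so nearest-neighbour rankings are untouched.

import numpy as np

rng = np.random.default_rng(5)
x = rng.random((100, 2))      # 100 points in 2 dimensions
x_dup = np.tile(x, 10)        # the same points with every coordinate repeated 10 times (20 dimensions)

def pairwise_euclidean(a):
    diff = a[:, None, :] - a[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

d = pairwise_euclidean(x)
d_dup = pairwise_euclidean(x_dup)

# All distances are scaled by the same factor sqrt(10); the relative structure is unchanged.
print(np.allclose(d_dup, np.sqrt(10) * d))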
287
Why is Euclidean distance not a good metric in high dimensions?
The best place to start is probably to read On the Surprising Behavior of Distance Metrics in High Dimensional Space by Aggarwal, Hinneburg and Keim. There is a currently working link here (pdf), but it should be very google-able if that breaks. In short, as the number of dimensions grows, the relative Euclidean distance between a point in a set and its closest neighbour, and between that point and its furthest neighbour, changes in some non-obvious ways. Whether or not this will badly affect your results depends a great deal on what you're trying to achieve and what your data are like.
288
Why is Euclidean distance not a good metric in high dimensions?
Euclidean distance is very rarely a good distance to choose in machine learning, and this becomes more obvious in higher dimensions. This is because most of the time in machine learning you are not dealing with a Euclidean metric space but with a probabilistic metric space, and therefore you should be using probabilistic and information-theoretic distance functions, e.g. entropy-based ones. Humans like Euclidean space because it is easy to conceptualize; furthermore, it is mathematically convenient because of linearity properties that let us apply linear algebra. If we define distances in terms of, say, the Kullback-Leibler divergence, then they are harder to visualize and to work with mathematically.
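A toy illustration of why an information-theoretic divergence weights differences differently from the Euclidean metric (a sketch, assuming scipy; scipy.stats.entropy(p, q) returns the Kullback-Leibler divergence KL(p||q), and the example distributions are invented for illustration):

import numpy as np
from scipy.stats import entropy  # entropy(p, q) computes KL(p || q)

def euclid(a, b):
    return np.linalg.norm(np.asarray(a) - np.asarray(b))

# Two perturbations of the same Euclidean size: one hits a rare outcome, one a common outcome.
p, q = [0.98, 0.01, 0.01], [0.96, 0.03, 0.01]
r, s = [0.50, 0.49, 0.01], [0.52, 0.47, 0.01]

print(euclid(p, q), euclid(r, s))    # identical Euclidean distances
print(entropy(p, q), entropy(r, s))  # KL(p||q) is several times larger than KL(r||s)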
289
Why is Euclidean distance not a good metric in high dimensions?
As an analogy, imagine a circle centred at the origin, with points distributed evenly. Suppose a randomly-selected point is at (x1, x2). The Euclidean distance from the origin is ((x1)^2 + (x2)^2)^0.5. Now, imagine points evenly distributed over a sphere. That same point (x1, x2) will now probably be (x1, x2, x3). Since, in an even distribution, only a few points have one of the coordinates equal to zero, we shall assume that x3 != 0 for our randomly-selected, evenly-distributed point. Thus, our random point is most likely (x1, x2, x3) and not (x1, x2, 0). The effect of this is that any random point is now at a distance of ((x1)^2 + (x2)^2 + (x3)^2)^0.5 from the origin of the 3-D sphere. This distance is larger than that for a random point near the origin of a 2-D circle. This problem gets worse in higher dimensions, which is why we choose metrics other than Euclidean distance to work with higher dimensions. EDIT: There's a saying which I recall now: "Most of the mass of a higher-dimensional orange is in the skin, not the pulp", meaning that in higher dimensions evenly distributed points are more "near" (in Euclidean distance) the boundary than the origin. Side note: Euclidean distance is not TOO bad for real-world problems due to the 'blessing of non-uniformity', which basically states that for real data, your data are probably NOT going to be distributed evenly in the higher-dimensional space, but will occupy a small, clustered subset of the space. This makes sense intuitively: if you're measuring 100 quantities about humans, like height, weight, etc., an even distribution over the dimension space just does not make sense, e.g. a person with (height = 65 inches, weight = 150 lbs, avg_calorie_intake = 4000), which is just not plausible in the real world.
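The "skin of the orange" remark can be checked with a short simulation (a sketch, assuming numpy): for points distributed uniformly in the unit d-ball, the fraction lying in the outer shell of thickness 0.1 is 1 − 0.9^d, which rapidly approaches 1.

import numpy as np

rng = np.random.default_rng(6)

def skin_fraction(d, n=20_000):
    # Sample uniformly in the unit d-ball: random direction times U^(1/d) radius.
    g = rng.normal(size=(n, d))
    directions = g / np.linalg.norm(g, axis=1, keepdims=True)
    radii = rng.random(n) ** (1.0 / d)
    points = directions * radii[:, None]
    # Fraction of points farther than 0.9 from the centre ("in the skin").
    return np.mean(np.linalg.norm(points, axis=1) > 0.9)

for d in (2, 3, 10, 50):
    print(d, skin_fraction(d), 1 - 0.9 ** d)  # simulation vs the exact value 1 - 0.9^d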
290
Why is Euclidean distance not a good metric in high dimensions?
Another facet of this question is this: very often high dimensions in (machine-learning/statistical) problems are the result of over-constrained features. That is, the dimensions are NOT independent (they are correlated), whereas the Euclidean metric implicitly assumes (at least) uncorrelated dimensions, and thus it may not produce the best results. So to answer your question, what counts as "high dimensions" is related to how many features are interdependent, redundant, or over-constrained. Additionally: it is a theorem of Csiszar, "Why Least Squares and Maximum Entropy? An Axiomatic Approach to Inference for Linear Inverse Problems", that Euclidean metrics are "natural" candidates for inference when the features are of certain forms: An attempt is made to determine the logically consistent rules for selecting a vector from any feasible set defined by linear constraints, when either all n-vectors or those with positive components or the probability vectors are permissible. Some basic postulates are satisfied if and only if the selection rule is to minimize a certain function which, if a "prior guess" is available, is a measure of distance from the prior guess. Two further natural postulates restrict the permissible distances to the author's f-divergences and Bregman's divergences, respectively. As corollaries, axiomatic characterizations of the methods of least squares and minimum discrimination information are arrived at. Alternatively, the latter are also characterized by a postulate of composition consistency. As a special case, a derivation of the method of maximum entropy from a small set of natural axioms is obtained.
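The answer above does not name a specific remedy, but one standard way to illustrate the correlation point is the Mahalanobis distance, which de-correlates the features before measuring length; a sketch (scipy assumed, with a toy covariance chosen purely for illustration):

import numpy as np
from scipy.spatial.distance import euclidean, mahalanobis

rng = np.random.default_rng(7)

# Two strongly correlated features, e.g. a measurement and a near-duplicate of it.
cov = np.array([[1.0, 0.95],
                [0.95, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], cov, size=1000)
VI = np.linalg.inv(np.cov(X, rowvar=False))  # inverse sample covariance

origin = np.zeros(2)
a = np.array([1.0, 1.0])    # lies along the correlation: an ordinary point
b = np.array([1.0, -1.0])   # fights the correlation: a genuinely unusual point

print(euclidean(a, origin), euclidean(b, origin))              # identical Euclidean distances
print(mahalanobis(a, origin, VI), mahalanobis(b, origin, VI))  # very different once correlation is modelled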
291
Why is Euclidean distance not a good metric in high dimensions?
This paper may help you too: "Improved sqrt-cosine similarity measurement" (https://journalofbigdata.springeropen.com/articles/10.1186/s40537-017-0083-6). It explains why Euclidean distance is not a good metric in high-dimensional data and what the best replacement for it is. Euclidean distance is the $L_2$ norm, and by decreasing the value of $k$ in the $L_k$ norm we can alleviate the distance problem in high-dimensional data. You can find the references in this paper as well.
292
What should I do when my neural network doesn't learn?
1. Verify that your code is bug free There's a saying among writers that "All writing is re-writing" -- that is, the greater part of writing is revising. For programmers (or at least data scientists) the expression could be re-phrased as "All coding is debugging." Any time you're writing code, you need to verify that it works as intended. The best method I've ever found for verifying correctness is to break your code into small segments, and verify that each segment works. This can be done by comparing the segment output to what you know to be the correct answer. This is called unit testing. Writing good unit tests is a key piece of becoming a good statistician/data scientist/machine learning expert/neural network practitioner. There is simply no substitute. You have to check that your code is free of bugs before you can tune network performance! Otherwise, you might as well be re-arranging deck chairs on the RMS Titanic. There are two features of neural networks that make verification even more important than for other types of machine learning or statistical models. Neural networks are not "off-the-shelf" algorithms in the way that random forest or logistic regression are. Even for simple, feed-forward networks, the onus is largely on the user to make numerous decisions about how the network is configured, connected, initialized and optimized. This means writing code, and writing code means debugging. Even when a neural network code executes without raising an exception, the network can still have bugs! These bugs might even be the insidious kind for which the network will train, but get stuck at a sub-optimal solution, or the resulting network does not have the desired architecture. (This is an example of the difference between a syntactic and semantic error.) This Medium post, "How to unit test machine learning code," by Chase Roberts discusses unit-testing for machine learning models in more detail. I borrowed this example of buggy code from the article:

def make_convnet(input_image):
    net = slim.conv2d(input_image, 32, [11, 11], scope="conv1_11x11")
    net = slim.conv2d(input_image, 64, [5, 5], scope="conv2_5x5")
    net = slim.max_pool2d(net, [4, 4], stride=4, scope='pool1')
    net = slim.conv2d(input_image, 64, [5, 5], scope="conv3_5x5")
    net = slim.conv2d(input_image, 128, [3, 3], scope="conv4_3x3")
    net = slim.max_pool2d(net, [2, 2], scope='pool2')
    net = slim.conv2d(input_image, 128, [3, 3], scope="conv5_3x3")
    net = slim.max_pool2d(net, [2, 2], scope='pool3')
    net = slim.conv2d(input_image, 32, [1, 1], scope="conv6_1x1")
    return net

Do you see the error? Many of the different operations are not actually used because previous results are over-written with new variables. Using this block of code in a network will still train and the weights will update and the loss might even decrease -- but the code definitely isn't doing what was intended. (The author is also inconsistent about using single- or double-quotes but that's purely stylistic.) The most common programming errors pertaining to neural networks are: Variables are created but never used (usually because of copy-paste errors); Expressions for gradient updates are incorrect; Weight updates are not applied; Loss functions are not measured on the correct scale (for example, cross-entropy loss can be expressed in terms of probability or logits); The loss is not appropriate for the task (for example, using categorical cross-entropy loss for a regression task). Dropout is used during testing, instead of only being used for training. 
Make sure you're minimizing the loss function $L(x)$, instead of minimizing $-L(x)$. Make sure your loss is computed correctly. Unit testing is not just limited to the neural network itself. You need to test all of the steps that produce or transform data and feed into the network. Some common mistakes here are NA or NaN or Inf values in your data creating NA or NaN or Inf values in the output, and therefore in the loss function. Shuffling the labels independently from the samples (for instance, creating train/test splits for the labels and samples separately); Accidentally assigning the training data as the testing data; When using a train/test split, the model references the original, non-split data instead of the training partition or the testing partition. Forgetting to scale the testing data; Scaling the testing data using the statistics of the test partition instead of the train partition; Forgetting to un-scale the predictions (e.g. pixel values are in [0,1] instead of [0, 255]). Here's an example of a question where the problem appears to be one of model configuration or hyperparameter choice, but actually the problem was a subtle bug in how gradients were computed. Is this drop in training accuracy due to a statistical or programming error? 2. For the love of all that is good, scale your data The scale of the data can make an enormous difference on training. Sometimes, networks simply won't reduce the loss if the data isn't scaled. Other networks will decrease the loss, but only very slowly. Scaling the inputs (and certain times, the targets) can dramatically improve the network's training. Prior to presenting data to a neural network, standardizing the data to have 0 mean and unit variance, or to lie in a small interval like $[-0.5, 0.5]$ can improve training. This amounts to pre-conditioning, and removes the effect that a choice in units has on network weights. For example, length in millimeters and length in kilometers both represent the same concept, but are on different scales. The exact details of how to standardize the data depend on what your data look like. Data normalization and standardization in neural networks Why does $[0,1]$ scaling dramatically increase training time for feed forward ANN (1 hidden layer)? Batch or Layer normalization can improve network training. Both seek to improve the network by keeping a running mean and standard deviation for neurons' activations as the network trains. It is not well-understood why this helps training, and remains an active area of research. "Understanding Batch Normalization" by Johan Bjorck, Carla Gomes, Bart Selman "Towards a Theoretical Understanding of Batch Normalization" by Jonas Kohler, Hadi Daneshmand, Aurelien Lucchi, Ming Zhou, Klaus Neymeyr, Thomas Hofmann "How Does Batch Normalization Help Optimization? (No, It Is Not About Internal Covariate Shift)" by Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, Aleksander Madry 3. Crawl Before You Walk; Walk Before You Run Wide and deep neural networks, and neural networks with exotic wiring, are the Hot Thing right now in machine learning. But these networks didn't spring fully-formed into existence; their designers built up to them from smaller units. First, build a small network with a single hidden layer and verify that it works correctly. Then incrementally add additional model complexity, and verify that each of those works as well. Too few neurons in a layer can restrict the representation that the network learns, causing under-fitting. 
- Too many neurons can cause over-fitting because the network will "memorize" the training data. Even if you can prove that, mathematically, only a small number of neurons is necessary to model a problem, it is often the case that having "a few more" neurons makes it easier for the optimizer to find a "good" configuration. (But I don't think anyone fully understands why this is the case.) I provide an example of this in the context of the XOR problem here: Aren't my iterations needed to train NN for XOR with MSE < 0.001 too high?
- Choosing the number of hidden layers lets the network learn an abstraction from the raw data. Deep learning is all the rage these days, and networks with a large number of layers have shown impressive results. But adding too many hidden layers can risk overfitting or make it very hard to optimize the network.
- Choosing a clever network wiring can do a lot of the work for you. Is your data source amenable to specialized network architectures? Convolutional neural networks can achieve impressive results on "structured" data sources, such as image or audio data. Recurrent neural networks can do well on sequential data types, such as natural language or time series data. Residual connections can improve deep feed-forward networks.

4. Neural Network Training Is Like Lock Picking

To achieve state of the art, or even merely good, results, you have to set up all of the parts so that they work well together. Setting up a neural network configuration that actually learns is a lot like picking a lock: all of the pieces have to be lined up just right. Just as it is not sufficient to have a single tumbler in the right place, neither is it sufficient to have only the architecture, or only the optimizer, set up correctly. Tuning configuration choices is not really as simple as saying that one kind of configuration choice (e.g. learning rate) is more or less important than another (e.g. number of units), since all of these choices interact with all of the other choices, so one choice can do well in combination with another choice made elsewhere.

This is a non-exhaustive list of the configuration options which are not also regularization options or numerical optimization options. All of these topics are active areas of research.

- The network initialization is often overlooked as a source of neural network bugs. Initialization over too-large an interval can set initial weights too large, meaning that single neurons have an outsize influence over the network behavior.
- The key difference between a neural network and a regression model is that a neural network is a composition of many nonlinear functions, called activation functions. (See: What is the essential difference between neural network and linear regression) Classical neural network results focused on sigmoidal activation functions (logistic or $\tanh$ functions). A recent result has found that ReLU (or similar) units tend to work better because they have steeper gradients, so updates can be applied quickly. (See: Why do we use ReLU in neural networks and how do we use it?) One caution about ReLUs is the "dead neuron" phenomenon, which can stymie learning; leaky ReLUs and similar variants avoid this problem. See: Why can't a single ReLU learn a ReLU? and My ReLU network fails to launch. There are a number of other options; see: Comprehensive list of activation functions in neural networks with pros/cons
- Residual connections are a neat development that can make it easier to train neural networks; a rough sketch follows, and the papers cited after it give the full treatment.
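As a purely illustrative sketch (not the architecture from the papers below), a residual connection just adds a block's input back to its output, so the block only has to learn a correction to its input. Assuming Keras, it can look like this; the layer sizes are arbitrary:

    # Illustrative sketch of a residual block (assumes Keras).
    from tensorflow import keras

    def residual_block(x, units):
        h = keras.layers.Dense(units, activation="relu")(x)
        h = keras.layers.Dense(units)(h)
        # The skip connection: add the block's input back to its output.
        return keras.layers.Activation("relu")(keras.layers.Add()([x, h]))

    inputs = keras.Input(shape=(64,))
    x = keras.layers.Dense(64, activation="relu")(inputs)
    for _ in range(4):
        x = residual_block(x, 64)
    model = keras.Model(inputs, keras.layers.Dense(1)(x))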
"Deep Residual Learning for Image Recognition" Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun In: CVPR. (2016). Additionally, changing the order of operations within the residual block can further improve the resulting network. "Identity Mappings in Deep Residual Networks" by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 5. Non-convex optimization is hard The objective function of a neural network is only convex when there are no hidden units, all activations are linear, and the design matrix is full-rank -- because this configuration is identically an ordinary regression problem. In all other cases, the optimization problem is non-convex, and non-convex optimization is hard. The challenges of training neural networks are well-known (see: Why is it hard to train deep neural networks?). Additionally, neural networks have a very large number of parameters, which restricts us to solely first-order methods (see: Why is Newton's method not widely used in machine learning?). This is a very active area of research. Setting the learning rate too large will cause the optimization to diverge, because you will leap from one side of the "canyon" to the other. Setting this too small will prevent you from making any real progress, and possibly allow the noise inherent in SGD to overwhelm your gradient estimates. See: How can change in cost function be positive? Gradient clipping re-scales the norm of the gradient if it's above some threshold. I used to think that this was a set-and-forget parameter, typically at 1.0, but I found that I could make an LSTM language model dramatically better by setting it to 0.25. I don't know why that is. Learning rate scheduling can decrease the learning rate over the course of training. In my experience, trying to use scheduling is a lot like regex: it replaces one problem ("How do I get learning to continue after a certain epoch?") with two problems ("How do I get learning to continue after a certain epoch?" and "How do I choose a good schedule?"). Other people insist that scheduling is essential. I'll let you decide. Choosing a good minibatch size can influence the learning process indirectly, since a larger mini-batch will tend to have a smaller variance (law-of-large-numbers) than a smaller mini-batch. You want the mini-batch to be large enough to be informative about the direction of the gradient, but small enough that SGD can regularize your network. There are a number of variants on stochastic gradient descent which use momentum, adaptive learning rates, Nesterov updates and so on to improve upon vanilla SGD. Designing a better optimizer is very much an active area of research. Some examples: No change in accuracy using Adam Optimizer when SGD works fine How does the Adam method of stochastic gradient descent work? Why does momentum escape from a saddle point in this famous image? When it first came out, the Adam optimizer generated a lot of interest. But some recent research has found that SGD with momentum can out-perform adaptive gradient methods for neural networks. "The Marginal Value of Adaptive Gradient Methods in Machine Learning" by Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, Benjamin Recht But on the other hand, this very recent paper proposes a new adaptive learning-rate optimizer which supposedly closes the gap between adaptive-rate methods and SGD with momentum. 
"Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep Neural Networks" by Jinghui Chen, Quanquan Gu Adaptive gradient methods, which adopt historical gradient information to automatically adjust the learning rate, have been observed to generalize worse than stochastic gradient descent (SGD) with momentum in training deep neural networks. This leaves how to close the generalization gap of adaptive gradient methods an open problem. In this work, we show that adaptive gradient methods such as Adam, Amsgrad, are sometimes "over adapted". We design a new algorithm, called Partially adaptive momentum estimation method (Padam), which unifies the Adam/Amsgrad with SGD to achieve the best from both worlds. Experiments on standard benchmarks show that Padam can maintain fast convergence rate as Adam/Amsgrad while generalizing as well as SGD in training deep neural networks. These results would suggest practitioners pick up adaptive gradient methods once again for faster training of deep neural networks. Specifically for triplet-loss models, there are a number of tricks which can improve training time and generalization. See: In training a triplet network, I first have a solid drop in loss, but eventually the loss slowly but consistently increases. What could cause this? 6. Regularization Choosing and tuning network regularization is a key part of building a model that generalizes well (that is, a model that is not overfit to the training data). However, at the time that your network is struggling to decrease the loss on the training data -- when the network is not learning -- regularization can obscure what the problem is. When my network doesn't learn, I turn off all regularization and verify that the non-regularized network works correctly. Then I add each regularization piece back, and verify that each of those works along the way. This tactic can pinpoint where some regularization might be poorly set. Some examples are $L^2$ regularization (aka weight decay) or $L^1$ regularization is set too large, so the weights can't move. Two parts of regularization are in conflict. For example, it's widely observed that layer normalization and dropout are difficult to use together. Since either on its own is very useful, understanding how to use both is an active area of research. "Understanding the Disharmony between Dropout and Batch Normalization by Variance Shift" by Xiang Li, Shuo Chen, Xiaolin Hu, Jian Yang "Adjusting for Dropout Variance in Batch Normalization and Weight Initialization" by Dan Hendrycks, Kevin Gimpel. "Self-Normalizing Neural Networks" by Günter Klambauer, Thomas Unterthiner, Andreas Mayr and Sepp Hochreiter 7. Keep a Logbook of Experiments When I set up a neural network, I don't hard-code any parameter settings. Instead, I do that in a configuration file (e.g., JSON) that is read and used to populate network configuration details at runtime. I keep all of these configuration files. If I make any parameter modification, I make a new configuration file. Finally, I append as comments all of the per-epoch losses for training and validation. The reason that I'm so obsessive about retaining old results is that this makes it very easy to go back and review previous experiments. It also hedges against mistakenly repeating the same dead-end experiment. Psychologically, it also lets you look back and observe "Well, the project might not be where I want it to be today, but I am making progress compared to where I was $k$ weeks ago." 
As an example, I wanted to learn about LSTM language models, so I decided to make a Twitter bot that writes new tweets in response to other Twitter users. I worked on this in my free time, between grad school and my job. It took about a year, and I iterated over about 150 different models before getting to a model that did what I wanted: generate new English-language text that (sort of) makes sense. (One key sticking point, and part of the reason that it took so many attempts, is that it was not sufficient to simply get a low out-of-sample loss, since early low-loss models had managed to memorize the training data, so they were just reproducing germane blocks of text verbatim in reply to prompts -- it took some tweaking to make the model more spontaneous and still have low loss.)
293
What should I do when my neural network doesn't learn?
The posted answers are great, and I wanted to add a few "Sanity Checks" which have greatly helped me in the past.

1) Train your model on a single data point. If this works, train it on two inputs with different outputs. This verifies a few things. First, it quickly shows you that your model is able to learn by checking if your model can overfit your data. In my case, I constantly make the silly mistake of using Dense(1, activation='softmax') instead of Dense(1, activation='sigmoid') for binary predictions, and the first one gives garbage results. If your model is unable to overfit a few data points, then either it's too small (which is unlikely in today's age), or something is wrong in its structure or the learning algorithm.

2) Pay attention to your initial loss. Continuing the binary example, if your data is 30% 0's and 70% 1's, then your initial expected loss is around $L = -0.3\ln(0.5) - 0.7\ln(0.5) \approx 0.7$. This is because your model should start out close to randomly guessing. A lot of times you'll see an initial loss of something ridiculous, like 6.5. Conceptually this means that your output is heavily saturated, for example toward 0. For example, $-0.3\ln(0.99) - 0.7\ln(0.01) \approx 3.2$, so if you're seeing a loss that's bigger than 1, it's likely your model is very skewed. This usually happens when your neural network weights aren't properly balanced, especially closer to the softmax/sigmoid. So this would tell you if your initialization is bad. You can study this further by making your model predict on a few thousand examples, and then histogramming the outputs. This is especially useful for checking that your data is correctly normalized. As an example, if you expect your output to be heavily skewed toward 0, it might be a good idea to transform your expected outputs (your training data) by taking the square roots of the expected output. This will avoid gradient issues from saturated sigmoids at the output.

3) Generalize your model outputs to debug. As an example, imagine you're using an LSTM to make predictions from time-series data. Maybe in your example, you only care about the latest prediction, so your LSTM outputs a single value and not a sequence. Switch the LSTM to return predictions at each step (in keras, this is return_sequences=True). Then you can take a look at your hidden-state outputs after every step and make sure they are actually different. An application of this is to make sure that when you're masking your sequences (i.e. padding them with data to make them equal length), the LSTM is correctly ignoring your masked data. Without generalizing your model you will never find this issue.

4) Look at individual layers. Tensorboard provides a useful way of visualizing your layer outputs. This can help make sure that inputs/outputs are properly normalized in each layer. It can also catch buggy activations. You can also query layer outputs in keras on a batch of predictions, and then look for layers which have suspiciously skewed activations (either all 0, or all nonzero).

5) Build a simpler model first. You've decided that the best approach to solve your problem is to use a CNN combined with a bounding box detector, that further processes image crops and then uses an LSTM to combine everything. It takes 10 minutes just for your GPU to initialize your model. Instead, make a batch of fake data (same shape), and break your model down into components.
Then make dummy models in place of each component (your "CNN" could just be a single 2x2 20-stride convolution, the LSTM with just 2 hidden units). This will help you make sure that your model structure is correct and that there are no extraneous issues. I struggled for a while with such a model, and when I tried a simpler version, I found out that one of the layers wasn't being masked properly due to a keras bug. You can easily (and quickly) query internal model layers and see if you've set up your graph correctly.

6) Standardize your Preprocessing and Package Versions. Neural networks in particular are extremely sensitive to small changes in your data. As an example, two popular image loading packages are cv2 and PIL. Just by virtue of opening a JPEG, both these packages will produce slightly different images. The differences are usually really small, but you'll occasionally see drops in model performance due to this kind of thing. It also makes debugging a nightmare: you get a validation score during training, and then later on you use a different loader and get a different accuracy on the same darn dataset. So if you're downloading someone's model from github, pay close attention to their preprocessing. What image loaders do they use? What image preprocessing routines do they use? When resizing an image, what interpolation do they use? Do they first resize and then normalize the image? Or the other way around? What's the channel order for RGB images? The safest way of standardizing packages is to use a requirements.txt file that pins all your packages just like on your training system setup, down to the keras==2.1.5 version numbers. In theory then, using Docker along with the same GPU as on your training system should produce the same results.
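As a quick way to check the image-loader concern above, you can compare the two packages directly on one of your own files (a sketch; "photo.jpg" is a placeholder path):

    # Sketch: quantify how differently cv2 and PIL decode the same JPEG.
    import numpy as np
    import cv2
    from PIL import Image

    pil_img = np.array(Image.open("photo.jpg").convert("RGB"))
    cv_img = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)  # cv2 loads BGR by default

    print("max per-pixel difference:", np.abs(pil_img.astype(int) - cv_img.astype(int)).max())
    # Small differences are normal (different JPEG decoders); large ones usually mean a
    # channel-order, resizing, or normalization mismatch somewhere in the pipeline.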
294
What should I do when my neural network doesn't learn?
Do not train a neural network to start with! All the answers are great, but there is one point which ought to be mentioned: is there anything to learn from your data? (This could be considered a kind of sanity test.) If the label you are trying to predict is independent of your features, then it is likely that the training loss will have a hard time decreasing. Instead, start by calibrating a linear regression or a random forest (or any method you like whose number of hyperparameters is low, and whose behavior you can understand). Then, if you achieve a decent performance on these models (better than random guessing), you can start tuning a neural network (and @Sycorax's answer will solve most issues).
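A minimal sketch of that kind of baseline check, assuming scikit-learn (which the original answer does not name) and a classification task; the synthetic data is a stand-in for your own X and y:

    # Sketch: check that a simple model beats random guessing before touching a neural network.
    from sklearn.datasets import make_classification
    from sklearn.dummy import DummyClassifier
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)  # replace with your data

    chance = cross_val_score(DummyClassifier(strategy="most_frequent"), X, y, cv=5).mean()
    forest = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0), X, y, cv=5).mean()
    print(f"majority-class baseline: {chance:.3f}   random forest: {forest:.3f}")
    # If the forest cannot beat the majority-class baseline, the problem is more likely
    # in the data or the labels than in any neural network configuration.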
295
What should I do when my neural network doesn't learn?
At its core, the basic workflow for training a NN/DNN model is more or less always the same:

- define the NN architecture (how many layers, which kind of layers, the connections among layers, the activation functions, etc.);
- read data from some source (the Internet, a database, a set of local files, etc.), have a look at a few samples (to make sure the import has gone well) and perform data cleaning if/when needed. This step is not as trivial as people usually assume it to be. The reason is that for DNNs, we usually deal with gigantic data sets, several orders of magnitude larger than what we're used to when we fit more standard nonlinear parametric statistical models (NNs belong to this family, in theory);
- normalize or standardize the data in some way. Since NNs are nonlinear models, normalizing the data can affect not only the numerical stability, but also the training time, and the NN outputs (a linear function such as normalization doesn't commute with a nonlinear hierarchical function);
- split the data into training/validation/test sets, or into multiple folds if using cross-validation;
- train the neural network, while at the same time controlling the loss on the validation set. Here you can enjoy the soul-wrenching pleasures of non-convex optimization, where you don't know if any solution exists, if multiple solutions exist, which is the best solution in terms of generalization error, and how close you got to it. The comparison between the training loss and validation loss curve guides you, of course, but don't underestimate the die-hard attitude of NNs (and especially DNNs): they often show a (maybe slowly) decreasing training/validation loss even when you have crippling bugs in your code;
- check the accuracy on the test set, and make some diagnostic plots/tables;
- go back to point 1 because the results aren't good. Reiterate ad nauseam.

Of course details will change based on the specific use case, but with this rough canvas in mind, we can think of what is more likely to go wrong.

Basic Architecture checks

This can be a source of issues. Usually I make these preliminary checks:

- look for a simple architecture which works well on your problem (for example, MobileNetV2 in the case of image classification) and apply a suitable initialization (at this level, random will usually do). If this trains correctly on your data, at least you know that there are no glaring issues in the data set.
- if you can't find a simple, tested architecture which works in your case, think of a simple baseline: for example, a Naive Bayes classifier for classification (or even just always classifying the most common class), or an ARIMA model for time series forecasting.
- build unit tests. Neglecting to do this (and the use of the bloody Jupyter Notebook) are usually the root causes of issues in NN code I'm asked to review, especially when the model is supposed to be deployed in production. As the most upvoted answer has already covered unit tests, I'll just add that there exists a library which supports unit test development for NNs (only in Tensorflow, unfortunately).

Training Set

Double check your input data. See if you inverted the training set and test set labels, for example (happened to me once -___-), or if you imported the wrong file. Have a look at a few input samples, and the associated labels, and make sure they make sense. Check that the normalized data are really normalized (have a look at their range).
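A quick sketch of that kind of input check (plain numpy; X_train and y_train are assumed to be your already "normalized" features and your labels, and are not from the original answer):

    # Sketch: basic sanity checks on the training inputs before any training run.
    import numpy as np

    assert np.isfinite(X_train).all(), "NaN or Inf values in the inputs"

    # If the data were really standardized, these should sit near 0 and 1.
    print("feature means in [%.3f, %.3f]" % (X_train.mean(axis=0).min(), X_train.mean(axis=0).max()))
    print("feature stds  in [%.3f, %.3f]" % (X_train.std(axis=0).min(), X_train.std(axis=0).max()))

    # Eyeball the label distribution and a few random (sample, label) pairs.
    print("label counts:", dict(zip(*np.unique(y_train, return_counts=True))))
    for i in np.random.choice(len(X_train), size=3, replace=False):
        print("sample", i, "label", y_train[i])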
Also, real-world datasets are dirty: for classification, there could be a high level of label noise (samples having the wrong class label), or for multivariate time series forecasting, some of the time series components may have a lot of missing data (I've seen numbers as high as 94% for some of the inputs).

The order in which the training set is fed to the net during training may have an effect. Try a random shuffle of the training set (without breaking the association between inputs and outputs) and see if the training loss goes down.

Finally, the best way to check if you have training set issues is to use another training set. If you're doing image classification, instead of the images you collected, use a standard dataset such as CIFAR10 or CIFAR100 (or ImageNet, if you can afford to train on that). These data sets are well-tested: if your training loss goes down here but not on your original data set, you may have issues in the data set.

Do the Golden Tests

There are two tests which I call Golden Tests, which are very useful to find issues in a NN which doesn't train (a small sketch of both appears at the end of this answer):

- reduce the training set to 1 or 2 samples, and train on this. The NN should immediately overfit the training set, reaching an accuracy of 100% on the training set very quickly, while the accuracy on the validation/test set will go to 0%. If this doesn't happen, there's a bug in your code.
- the opposite test: you keep the full training set, but you shuffle the labels. The only way the NN can learn now is by memorising the training set, which means that the training loss will decrease very slowly, while the test loss will increase very quickly. In particular, you should reach the random chance loss on the test set. This means that if you have 1000 classes, you should reach an accuracy of 0.1%. If you don't see any difference between the training loss before and after shuffling labels, this means that your code is buggy (remember that we have already checked the labels of the training set in the step before).

Check that your training metric makes sense

Accuracy (0-1 loss) is a crappy metric if you have strong class imbalance. Try something more meaningful, such as cross-entropy loss: you don't just want to classify correctly, you'd like to classify with high confidence.

Bring out the big guns

If nothing helped, it's now time to start fiddling with hyperparameters. This is easily the worst part of NN training, but these are gigantic, non-identifiable models whose parameters are fit by solving a non-convex optimization, so these iterations often can't be avoided.

- try different optimizers: SGD trains slower, but it leads to a lower generalization error, while Adam trains faster, but the test loss stalls at a higher value
- try decreasing the batch size
- increase the learning rate initially, and then decay it, or use a cyclic learning rate
- add layers
- add hidden units
- remove regularization gradually (maybe switch batch norm for a few layers). The training loss should now decrease, but the test loss may increase.
- visualize the distribution of weights and biases for each layer. I never had to get here, but if you're using BatchNorm, you would expect approximately standard normal distributions. See if the norm of the weights is increasing abnormally with epochs.
- if you're getting some error at training time, google that error. I wasted one morning while trying to fix a perfectly working architecture, only to find out that the version of Keras I had installed had buggy multi-GPU support and I had to update it.
Sometimes I had to do the opposite (downgrade a package version). And if all else fails: update your CV and start looking for a different job :-)
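As promised above, a minimal sketch of the two Golden Tests. It assumes Keras, a hypothetical build_model() that returns a compiled classifier (with accuracy as a metric), and numpy arrays X_train and y_train; none of these names come from the original answer.

    # Sketch of the two "Golden Tests" for a network that won't train.
    import numpy as np

    # Test 1: the network must be able to overfit a couple of samples.
    model = build_model()                      # hypothetical: returns a compiled Keras classifier
    model.fit(X_train[:2], y_train[:2], epochs=500, verbose=0)
    print("accuracy on 2 samples:", model.evaluate(X_train[:2], y_train[:2], verbose=0)[1])
    # Should reach ~1.0 very quickly; if not, suspect a bug rather than the hyperparameters.

    # Test 2: with shuffled labels there is nothing to learn except by memorization, so the
    # training loss should fall far more slowly and test performance should stay at chance.
    model = build_model()
    model.fit(X_train, np.random.permutation(y_train), epochs=20, verbose=0)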
What should I do when my neural network doesn't learn?
At its core, the basic workflow for training a NN/DNN model is more or less always the same: define the NN architecture (how many layers, which kind of layers, the connections among layers, the activ
What should I do when my neural network doesn't learn? At its core, the basic workflow for training a NN/DNN model is more or less always the same: define the NN architecture (how many layers, which kind of layers, the connections among layers, the activation functions, etc.) read data from some source (the Internet, a database, a set of local files, etc.), have a look at a few samples (to make sure the import has gone well) and perform data cleaning if/when needed. This step is not as trivial as people usually assume it to be. The reason is that for DNNs, we usually deal with gigantic data sets, several orders of magnitude larger than what we're used to, when we fit more standard nonlinear parametric statistical models (NNs belong to this family, in theory). normalize or standardize the data in some way. Since NNs are nonlinear models, normalizing the data can affect not only the numerical stability, but also the training time, and the NN outputs (a linear function such as normalization doesn't commute with a nonlinear hierarchical function). split data in training/validation/test set, or in multiple folds if using cross-validation. train the neural network, while at the same time controlling the loss on the validation set. Here you can enjoy the soul-wrenching pleasures of non-convex optimization, where you don't know if any solution exists, if multiple solutions exist, which is the best solution(s) in terms of generalization error and how close you got to it. The comparison between the training loss and validation loss curve guides you, of course, but don't underestimate the die hard attitude of NNs (and especially DNNs): they often show a (maybe slowly) decreasing training/validation loss even when you have crippling bugs in your code. Check the accuracy on the test set, and make some diagnostic plots/tables. Go back to point 1 because the results aren't good. Reiterate ad nauseam. Of course details will change based on the specific use case, but with this rough canvas in mind, we can think of what is more likely to go wrong. Basic Architecture checks This can be a source of issues. Usually I make these preliminary checks: look for a simple architecture which works well on your problem (for example, MobileNetV2 in the case of image classification) and apply a suitable initialization (at this level, random will usually do). If this trains correctly on your data, at least you know that there are no glaring issues in the data set. If you can't find a simple, tested architecture which works in your case, think of a simple baseline. For example a Naive Bayes classifier for classification (or even just classifying always the most common class), or an ARIMA model for time series forecasting Build unit tests. Neglecting to do this (and the use of the bloody Jupyter Notebook) are usually the root causes of issues in NN code I'm asked to review, especially when the model is supposed to be deployed in production. As the most upvoted answer has already covered unit tests, I'll just add that there exists a library which supports unit tests development for NN (only in Tensorflow, unfortunately). Training Set Double check your input data. See if you inverted the training set and test set labels, for example (happened to me once -___-), or if you imported the wrong file. Have a look at a few input samples, and the associated labels, and make sure they make sense. Check that the normalized data are really normalized (have a look at their range). 
Also, real-world datasets are dirty: for classification, there could be a high level of label noise (samples having the wrong class label) or for multivariate time series forecast, some of the time series components may have a lot of missing data (I've seen numbers as high as 94% for some of the inputs). The order in which the training set is fed to the net during training may have an effect. Try a random shuffle of the training set (without breaking the association between inputs and outputs) and see if the training loss goes down. Finally, the best way to check if you have training set issues is to use another training set. If you're doing image classification, instead than the images you collected, use a standard dataset such CIFAR10 or CIFAR100 (or ImageNet, if you can afford to train on that). These data sets are well-tested: if your training loss goes down here but not on your original data set, you may have issues in the data set. Do the Golden Tests There are two tests which I call Golden Tests, which are very useful to find issues in a NN which doesn't train: reduce the training set to 1 or 2 samples, and train on this. The NN should immediately overfit the training set, reaching an accuracy of 100% on the training set very quickly, while the accuracy on the validation/test set will go to 0%. If this doesn't happen, there's a bug in your code. the opposite test: you keep the full training set, but you shuffle the labels. The only way the NN can learn now is by memorising the training set, which means that the training loss will decrease very slowly, while the test loss will increase very quickly. In particular, you should reach the random chance loss on the test set. This means that if you have 1000 classes, you should reach an accuracy of 0.1%. If you don't see any difference between the training loss before and after shuffling labels, this means that your code is buggy (remember that we have already checked the labels of the training set in the step before). Check that your training metric makes sense Accuracy (0-1 loss) is a crappy metric if you have strong class imbalance. Try something more meaningful such as cross-entropy loss: you don't just want to classify correctly, but you'd like to classify with high accuracy. Bring out the big guns If nothing helped, it's now the time to start fiddling with hyperparameters. This is easily the worse part of NN training, but these are gigantic, non-identifiable models whose parameters are fit by solving a non-convex optimization, so these iterations often can't be avoided. try different optimizers: SGD trains slower, but it leads to a lower generalization error, while Adam trains faster, but the test loss stalls to a higher value try decreasing the batch size increase the learning rate initially, and then decay it, or use a cyclic learning rate add layers add hidden units remove regularization gradually (maybe switch batch norm for a few layers). The training loss should now decrease, but the test loss may increase. visualize the distribution of weights and biases for each layer. I never had to get here, but if you're using BatchNorm, you would expect approximately standard normal distributions. See if the norm of the weights is increasing abnormally with epochs. if you're getting some error at training time, google that error. I wasted one morning while trying to fix a perfectly working architecture, only to find out that the version of Keras I had installed had buggy multi-GPU support and I had to update it. 
Sometimes I had to do the opposite (downgrade a package version).
- If all else fails: update your CV and start looking for a different job :-)
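Below is a minimal PyTorch sketch of the two Golden Tests described above; the model, optimizer settings, and the synthetic tensors `X` and `y` are placeholders for your own setup, not part of the original answer.

```python
# Sketch of the two "golden tests": (1) a 2-sample subset should be overfit
# almost instantly, (2) a fully label-shuffled set can only be memorised slowly.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 20)              # stand-in inputs
y = torch.randint(0, 10, (512,))      # stand-in labels, 10 classes

def make_model():
    return nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))

def fit(model, inputs, targets, steps=500, lr=1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        opt.step()
    return loss.item()

# Test 1: train on 2 samples only; the loss should go to ~0 almost immediately.
print("tiny-subset loss:   ", fit(make_model(), X[:2], y[:2]))
# Test 2: full set with shuffled labels; the loss should decrease far more slowly.
print("shuffled-label loss:", fit(make_model(), X, y[torch.randperm(len(y))]))
```

If the first loss does not collapse to near zero, or the second one drops just as fast as on the real labels, something in the code or the data pipeline is likely broken.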
296
What should I do when my neural network doesn't learn?
If the model isn't learning, there is a decent chance that your backpropagation is not working. But so many things can go wrong with a black-box model like a neural network that there is a lot to check. I think Sycorax and Alex both provide very good, comprehensive answers; I just want to add one technique that hasn't been discussed yet. In his Machine Learning course, Andrew Ng suggests running gradient checking in the first few iterations to make sure the backpropagation is doing the right thing. Basically, the idea is to approximate the derivative numerically by evaluating the loss at two points separated by a small interval $\epsilon$. Checking that this numerical derivative approximately matches the gradient from backpropagation should help you locate where the problem is.
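As a rough, framework-agnostic illustration (not taken from the course itself), here is a central-difference gradient check in NumPy; `loss_fn`, `w` and `analytic_grad` stand in for your own loss function, parameter vector and backpropagation output.

```python
# Numerical gradient check via central differences.
import numpy as np

def numerical_grad(loss_fn, w, eps=1e-5):
    grad = np.zeros_like(w)
    for i in range(w.size):
        w_plus, w_minus = w.copy(), w.copy()
        w_plus[i] += eps
        w_minus[i] -= eps
        grad[i] = (loss_fn(w_plus) - loss_fn(w_minus)) / (2 * eps)
    return grad

# Toy example: quadratic loss, whose true gradient is 2 * w.
loss_fn = lambda w: np.sum(w ** 2)
w = np.array([1.0, -2.0, 3.0])
analytic_grad = 2 * w                      # pretend this came from backprop
num_grad = numerical_grad(loss_fn, w)

# The relative error should be tiny (around 1e-8 here); values of ~1e-2 or
# larger usually mean the backpropagation code is wrong.
rel_err = np.linalg.norm(analytic_grad - num_grad) / (
    np.linalg.norm(analytic_grad) + np.linalg.norm(num_grad))
print(rel_err)
```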
297
What should I do when my neural network doesn't learn?
Check the data pre-processing and augmentation. I just learned this lesson recently and I think it is worth sharing. Nowadays, many frameworks have a built-in data pre-processing pipeline and augmentation, and these elements may completely destroy the data. For example, suppose we are building a classifier to distinguish 6 from 9, and we use random rotation augmentation ... A toy example can be found here: Why can't scikit-learn SVM solve two concentric circles? My recent lesson was trying to detect whether an image contains hidden information embedded by steganography tools, and I struggled for a long time because the model would not learn. The reason is that many packages rescale images to a certain size, and this operation completely destroys the hidden information inside.
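To see concretely how a resize can erase hidden information, here is a NumPy-only toy sketch (purely hypothetical data, no real steganography tool involved): one bit is hidden in each pixel's least-significant bit, and a simple 2x2 averaging, which mimics what any interpolation-based resize does, wipes it out.

```python
import numpy as np

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # "clean" image
secret = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)    # hidden bits

stego = (cover & 0xFE) | secret          # embed the secret in the LSB plane
print("recovered before resize:", np.mean((stego & 1) == secret))   # 1.0

# Crude stand-in for a bilinear resize to half resolution: 2x2 block averaging.
resized = stego.astype(np.float64).reshape(32, 2, 32, 2).mean(axis=(1, 3))
resized = resized.round().astype(np.uint8)

# The LSB plane of the resized image agrees with the (downsampled) secret only
# at chance level (~0.5): the hidden information is gone before the model sees it.
print("recovered after resize: ", np.mean((resized & 1) == secret[::2, ::2]))
```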
298
What should I do when my neural network doesn't learn?
In my case, the initial training set was probably too difficult for the network, so it was not making any progress. I prepared an easier set, selecting the cases where the differences between categories looked most obvious to my own eye. The network picked up this simplified case well. After it reached really good results, it was able to progress further by training on the original, more complex data set without blundering around with a training score close to zero. To make sure the existing knowledge is not lost, reduce the learning rate when switching to the harder set.
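A hedged sketch of this two-stage recipe in PyTorch, with synthetic data standing in for the hand-picked "easy" subset and the full set; the architecture, learning rates and epoch counts are illustrative, not the ones used in the answer.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)
# Synthetic stand-ins: "easy" examples lie far from the decision boundary,
# the full set also contains hard, borderline ones.
X_full = torch.randn(2000, 10)
y_full = (X_full.sum(dim=1) > 0).long()
easy_mask = X_full.sum(dim=1).abs() > 2.0          # clearly separable cases
easy_loader = DataLoader(TensorDataset(X_full[easy_mask], y_full[easy_mask]),
                         batch_size=64, shuffle=True)
full_loader = DataLoader(TensorDataset(X_full, y_full),
                         batch_size=64, shuffle=True)

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

def train(model, loader, lr, epochs):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()

train(model, easy_loader, lr=1e-3, epochs=5)   # stage 1: easy subset
train(model, full_loader, lr=1e-4, epochs=5)   # stage 2: full set, lower LR
```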
299
What should I do when my neural network doesn't learn?
I had a model that did not train at all. It just got stuck at the random-chance level, with no loss improvement during training: the loss was constant at 4.000 and the accuracy at 0.142 on a dataset with 7 target values. It turned out that I was doing regression with a ReLU as the last activation layer, which is obviously wrong. Before I knew this was wrong, I added a Batch Normalisation layer after every learnable layer, and that helped. However, training became somewhat erratic, so the accuracy during training could easily drop from 40% down to 9% on the validation set, while the accuracy on the training set was always fine. Then I realized that it was enough to put Batch Normalisation before that last ReLU activation layer only to keep the loss/accuracy improving during training; that probably partially compensated for the wrong activation. However, when I replaced the ReLU with a linear activation (the right choice for regression), no Batch Normalisation was needed any more and the model started to train significantly better.
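For illustration, here is a small PyTorch sketch of the point about output activations; the layer sizes are arbitrary and PyTorch is used only as an example framework, not as the setup from the answer.

```python
import torch.nn as nn

# Wrong for regression: the final ReLU makes it impossible to predict values
# below zero and zeroes the gradient whenever the pre-activation is negative,
# which can stall training entirely.
bad_regressor = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.ReLU(),     # problematic output activation
)

# Better: leave the last layer linear (no activation) and let the loss
# (e.g. nn.MSELoss) act directly on the raw output.
good_regressor = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 1),                # linear output head
)
```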
300
What should I do when my neural network doesn't learn?
Curriculum Learning

Curriculum learning is a formalization of @h22's answer. The essential idea of curriculum learning is best described in the abstract of the previously linked paper by Bengio et al.:

"Humans and animals learn much better when the examples are not randomly presented but organized in a meaningful order which illustrates gradually more concepts, and gradually more complex ones. Here, we formalize such training strategies in the context of machine learning, and call them “curriculum learning”. In the context of recent research studying the difficulty of training in the presence of non-convex training criteria (for deep deterministic and stochastic neural networks), we explore curriculum learning in various set-ups. The experiments show that significant improvements in generalization can be achieved. We hypothesize that curriculum learning has both an effect on the speed of convergence of the training process to a minimum and, in the case of non-convex criteria, on the quality of the local minima obtained: curriculum learning can be seen as a particular form of continuation method (a general strategy for global optimization of non-convex functions)."

One way of implementing curriculum learning is to rank the training examples by difficulty. Of course, this can be cumbersome. Instead, several authors have proposed easier methods, such as Curriculum by Smoothing, where the output of each convolutional layer in a convolutional neural network (CNN) is smoothed using a Gaussian kernel.

Make sure that each part of the net can be trained

A standard neural network is composed of layers. Before checking that the entire neural network can overfit on a training example, as the other answers suggest, it would be a good idea to first check that each layer, or group of layers, can overfit on specific targets.

For example, let $\alpha(\cdot)$ represent an arbitrary activation function, such that $f(\mathbf x) = \alpha(\mathbf W \mathbf x + \mathbf b)$ represents a classic fully-connected layer, where $\mathbf x \in \mathbb R^d$ and $\mathbf W \in \mathbb R^{k \times d}$. Before combining $f(\mathbf x)$ with several other layers, generate a random target vector $\mathbf y \in \mathbb R^k$. Then, let $\ell (\mathbf x,\mathbf y) = (f(\mathbf x) - \mathbf y)^2$ be a loss function. Try to adjust the parameters $\mathbf W$ and $\mathbf b$ to minimize this loss function. If the loss decreases consistently, then this check has passed (a short sketch of this check appears after this answer).

Alternatively, rather than generating a random target as we did above with $\mathbf y$, we could work backwards from the actual loss function to be used in training the entire neural network to determine a more realistic target. As a simple example, suppose that we are classifying images, and that we expect the output to be the $k$-dimensional vector $\mathbf y = \begin{bmatrix}1 & 0 & 0 & \cdots & 0\end{bmatrix}$. Suppose that the softmax operation was not applied to obtain $\mathbf y$ (as is normally done), and suppose instead that some other operation, called $\delta(\cdot)$, that is also monotonically increasing in the inputs, was applied instead. If we do not trust that $\delta(\cdot)$ is working as expected, then since we know that it is monotonically increasing in the inputs, we can work backwards and deduce that the input must have been a $k$-dimensional vector where the maximum element occurs at the first element. We can then generate a similar target to aim for, rather than a random one.
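Here is a minimal PyTorch sketch of the per-layer check just described, with tanh chosen arbitrarily as the activation $\alpha(\cdot)$ and the random target drawn inside its range so that it is actually attainable; none of these particular choices come from the answer itself.

```python
# Check that a single layer f(x) = alpha(Wx + b) can drive (f(x) - y)^2 down
# for a fixed input x and a random (attainable) target y.
import torch
import torch.nn as nn

torch.manual_seed(0)
d, k = 32, 8
x = torch.randn(1, d)                    # a single fixed input
y = torch.tanh(torch.randn(1, k))        # random target inside tanh's range

layer = nn.Sequential(nn.Linear(d, k), nn.Tanh())   # alpha = tanh here
opt = torch.optim.SGD(layer.parameters(), lr=0.1)

for _ in range(2000):
    opt.zero_grad()
    loss = ((layer(x) - y) ** 2).mean()
    loss.backward()
    opt.step()

# If this loss does not decrease steadily towards (near) zero, the layer, or
# the way it is wired into the optimizer, deserves a closer look before you
# build anything bigger on top of it.
print(loss.item())
```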