[Dataset schema: idx (int64, 1 to 56k); question (string, 15-155 chars); answer (string, 2-29.2k chars); question_cut (string, 15-100 chars); answer_cut (string, 2-200 chars); conversation (string, 47-29.3k chars); conversation_cut (string, 47-301 chars)]
101
Why square the difference instead of taking the absolute value in standard deviation?
This is an old thread, but most answers focus on analytical simplicity, which IMO is a weak argument in the age of computers (although numerical stability can be an issue when using absolute values in optimization routines). Here are some more fundamental arguments in favor of the variance. The sample mean minimizes the sum of squared errors (MSE), i.e. $$\overline{x}=\arg\min_m\left\{\sum_{i=1}^n(x_i-m)^2\right\}$$ whereas the mean absolute error (MAE) is minimized by the median: $$x_{med}=\arg\min_m\left\{\sum_{i=1}^n|x_i-m|\right\}$$ A dispersion measure based on absolute distances should therefore be built around the median, not the mean. Variances are additive for independent variables, i.e. $Var(X+Y)=Var(X)+Var(Y)$; the same does not hold for the mean absolute deviation (as @eric-l-michelsen already mentioned). For mean absolute deviations there is no approximation rule for error propagation (propagation of uncertainty), whereas for the variance there is Gauss' law $$Var(f(X_1,X_2,\ldots,X_k))\approx \sum_{i=1}^k \left(\left.\frac{\partial f}{\partial X_i}\right|_{E(X)}\right)^2 Var(X_i)$$ The variance also generalizes nicely to asymmetric distributions, because it is the second central moment. Central moments are shape descriptors: with an increasing number of moments you can describe a distribution with increasing accuracy, and stopping this expansion after the second term leaves you with the mean and the variance as shape descriptors.
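A quick numerical check of the first argument, added here as a sketch (assuming NumPy is available): scan a grid of candidate centres m and confirm that the sum of squared deviations is minimised near the sample mean while the sum of absolute deviations is minimised near the sample median.

import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=1001)        # a skewed sample, so mean != median

grid = np.linspace(x.min(), x.max(), 2001)       # candidate centres m
sse = ((x[:, None] - grid[None, :]) ** 2).sum(axis=0)   # sum of squared deviations for each m
sad = np.abs(x[:, None] - grid[None, :]).sum(axis=0)    # sum of absolute deviations for each m

print("argmin of squared loss :", grid[sse.argmin()], " sample mean  :", x.mean())
print("argmin of absolute loss:", grid[sad.argmin()], " sample median:", np.median(x))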
102
Why square the difference instead of taking the absolute value in standard deviation?
Squaring amplifies larger deviations. If your sample has values that are spread all over the chart, then to bring roughly 68.2% of them within the first standard deviation, your standard deviation needs to be a little wider; if your data tend to cluster around the mean, then σ can be tighter. Some say the squaring is there to simplify calculations. But using the positive square root of the square would have achieved that, so the argument doesn't hold up: $|x| = \sqrt{x^{2}}$. If algebraic simplicity were the goal, it would have looked like $\sigma = \text{E}\left[\sqrt{(x-\mu)^{2}}\right]$, which yields the same result as $\text{E}\left[|x-\mu|\right]$. Obviously, squaring also has the effect of amplifying outlying errors (doh!).
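To make the "same result" claim concrete, here is a small added sketch (assuming NumPy is available): $\text{E}\left[\sqrt{(x-\mu)^{2}}\right]$ and $\text{E}\left[|x-\mu|\right]$ agree exactly, whereas the usual $\sigma = \sqrt{\text{E}\left[(x-\mu)^{2}\right]}$ averages the squares before taking the root and therefore weights large deviations more heavily.

import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_t(df=3, size=100_000)      # heavy-tailed sample, to make the gap visible
mu = x.mean()

mad   = np.mean(np.abs(x - mu))             # E[|x - mu|]
same  = np.mean(np.sqrt((x - mu) ** 2))     # E[sqrt((x - mu)^2)], identical to mad
sigma = np.sqrt(np.mean((x - mu) ** 2))     # sqrt(E[(x - mu)^2]), the usual sigma

print(mad, same)    # these two agree exactly
print(sigma)        # larger: squaring before averaging up-weights the outliers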
103
Why square the difference instead of taking the absolute value in standard deviation?
My guess is this: most populations (distributions) tend to congregate around the mean. The farther a value is from the mean, the rarer it is. To adequately express how "out of line" a value is, it is necessary to take into account both its distance from the mean and its (normally speaking) rareness of occurrence. Squaring the difference from the mean does this, weighting such values far more heavily than values with smaller deviations. Once all the squared deviations are averaged, it is fine to take the square root, which returns the units to their original dimensions.
104
The Two Cultures: statistics vs. machine learning?
I think the answer to your first question is simply in the affirmative. Take any issue of Statistical Science, JASA, or the Annals of Statistics from the past 10 years and you'll find papers on boosting, SVMs, and neural networks, although this area is less active now. Statisticians have appropriated the work of Valiant and Vapnik, but on the other side, computer scientists have absorbed the work of Donoho and Talagrand. I don't think there is much difference in scope and methods any more. I have never bought Breiman's argument that CS people were only interested in minimizing loss using whatever works. That view was heavily influenced by his participation in Neural Networks conferences and his consulting work; but PAC, SVMs, and boosting all have solid foundations. And today, unlike 2001, Statistics is more concerned with finite-sample properties, algorithms, and massive datasets. But I think there are still three important differences that are not going away soon: (1) methodological Statistics papers are still overwhelmingly formal and deductive, whereas machine learning researchers are more tolerant of new approaches even if they don't come with a proof attached; (2) the ML community primarily shares new results and publications in conferences and related proceedings, whereas statisticians use journal papers, which slows down progress in Statistics and the identification of star researchers (John Langford has a nice post on the subject from a while back); (3) Statistics still covers areas that are (for now) of little concern to ML, such as survey design, sampling, industrial statistics, etc.
105
The Two Cultures: statistics vs. machine learning?
The biggest difference I see between the communities is that statistics emphasizes inference, whereas machine learning emphasizes prediction. When you do statistics, you want to infer the process by which the data you have were generated. When you do machine learning, you want to know how you can predict what future data will look like w.r.t. some variable. Of course the two overlap. Knowing how the data were generated will give you some hints about what a good predictor would be, for example. However, one example of the difference is that machine learning has dealt with the p >> n problem (more features/variables than training samples) since its infancy, whereas statistics is just starting to get serious about this problem. Why? Because you can still make good predictions when p >> n, but you can't make very good inferences about which variables are actually important and why.
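As an added illustration of the p >> n point (a sketch, assuming NumPy and scikit-learn are available): with many more features than samples, a lasso fit can predict held-out data reasonably well even though the set of variables it selects need not match the truly relevant ones.

import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n, p, k = 80, 1000, 5                        # far more features than samples
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = 2.0                               # only the first k features matter
y = X @ beta + rng.standard_normal(n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LassoCV(cv=5).fit(X_tr, y_tr)

selected = np.flatnonzero(model.coef_)
print("test R^2:", model.score(X_te, y_te))                      # prediction can be quite good
print("features selected:", selected.size,
      "truly relevant among them:", int(np.sum(selected < k)))   # selection is less reliable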
106
The Two Cultures: statistics vs. machine learning?
Bayesian: "Hello, Machine Learner!" Frequentist: "Hello, Machine Learner!" Machine Learning: "I hear you guys are good at stuff. Here's some data." F: "Yes, let's write down a model and then calculate the MLE." B: "Hey, F, that's not what you told me yesterday! I had some univariate data and I wanted to estimate the variance, and I calculated the MLE. Then you pounced on me and told me to divide by $n-1$ instead of by $n$." F: "Ah yes, thanks for reminding me. I often think that I'm supposed to use the MLE for everything, but I'm interested in unbiased estimators and so on." ML: "Eh, what's this philosophizing about? Will it help me?" F: " OK, an estimator is a black box, you put data in and it gives you some numbers out. We frequentists don't care about how the box was constructed, about what principles were used to design it. For example, I don't know how to derive the $\div(n-1)$ rule." ML: " So, what do you care about?" F: "Evaluation." ML: "I like the sound of that." F: "A black box is a black box. If somebody claims a particular estimator is an unbiased estimator for $\theta$, then we try many values of $\theta$ in turn, generate many samples from each based on some assumed model, push them through the estimator, and find the average estimated $\theta$. If we can prove that the expected estimate equals the true value, for all values, then we say it's unbiased." ML: "Sounds great! It sounds like frequentists are pragmatic people. You judge each black box by its results. Evaluation is key." F: "Indeed! I understand you guys take a similar approach. Cross-validation, or something? But that sounds messy to me." ML: "Messy?" F: "The idea of testing your estimator on real data seems dangerous to me. The empirical data you use might have all sorts of problems with it, and might not behave according the model we agreed upon for evaluation." ML: "What? I thought you said you'd proved some results? That your estimator would always be unbiased, for all $\theta$." F: "Yes. While your method might have worked on one dataset (the dataset with train and test data) that you used in your evaluation, I can prove that mine will always work." ML: "For all datasets?" F: "No." ML: "So my method has been cross-validated on one dataset. You haven't test yours on any real dataset?" F: "That's right." ML: "That puts me in the lead then! My method is better than yours. It predicts cancer 90% of the time. Your 'proof' is only valid if the entire dataset behaves according to the model you assumed." F: "Emm, yeah, I suppose." ML: "And that interval has 95% coverage. But I shouldn't be surprised if it only contains the correct value of $\theta$ 20% of the time?" F: "That's right. Unless the data is truly i.i.d Normal (or whatever), my proof is useless." ML: "So my evaluation is more trustworthy and comprehensive? It only works on the datasets I've tried so far, but at least they're real datasets, warts and all. There you were, trying to claim you were more 'conservative' and 'thorough' and that you were interested in model-checking and stuff." B: (interjects) "Hey guys, Sorry to interrupt. I'd love to step in and balance things up, perhaps demonstrating some other issues, but I really love watching my frequentist colleague squirm." F: "Woah!" ML: "OK, children. It was all about evaluation. An estimator is a black box. Data goes in, data comes out. We approve, or disapprove, of an estimator based on how it performs under evaluation. We don't care about the 'recipe' or 'design principles' that are used." F: "Yes. 
But we have very different ideas about which evaluations are important. ML will do train-and-test on real data. Whereas I will do an evaluation that is more general (because it involves a broadly-applicable proof) and also more limited (because I don't know if your dataset is actually drawn from the modelling assumptions I use while designing my evaluation.)" ML: "What evaluation do you use, B?" F: (interjects) "Hey. Don't make me laugh. He doesn't evaluate anything. He just uses his subjective beliefs and runs with it. Or something." B: "That's the common interpretation. But it's also possible to define Bayesianism by the evaluations preferred. Then we can use the idea that none of us care what's in the black box, we care only about different ways to evaluate." B continues: "Classic example: Medical test. The result of the blood test is either Positive or Negative. A frequentist will be interested in, of the Healthy people, what proportion get a Negative result. And similarly, what proportion of Sick people will get a Positive. The frequentist will calculate these for each blood testing method that's under consideration and then recommend that we use the test that got the best pair of scores." F: "Exactly. What more could you want?" B: "What about those individuals that got a Positive test result? They will want to know 'of those that get a Positive result, how many will get Sick?' and 'of those that get a Negative result, how many are Healthy?' " ML: "Ah yes, that seems like a better pair of questions to ask." F: "HERESY!" B: "Here we go again. He doesn't like where this is going." ML: "This is about 'priors', isn't it?" F: "EVIL". B: "Anyway, yes, you're right ML. In order to calculate the proportion of Positive-result people that are Sick you must do one of two things. One option is to run the tests on lots of people and just observe the relevant proportions. How many of those people go on to die of the disease, for example." ML: "That sounds like what I do. Use train-and-test." B: "But you can calculate these numbers in advance, if you are willing to make an assumption about the rate of Sickness in the population. The frequentist also makes his calcuations in advance, but without using this population-level Sickness rate." F: "MORE UNFOUNDED ASSUMPTIONS." B: "Oh shut up. Earlier, you were found out. ML discovered that you are just as fond of unfounded assumptions as anyone. Your 'proven' coverage probabilities won't stack up in the real world unless all your assumptions stand up. Why is my prior assumption so diffent? You call me crazy, yet you pretend your assumptions are the work of a conservative, solid, assumption-free analysis." B (continues): "Anyway, ML, as I was saying. Bayesians like a different kind of evaluation. We are more interested in conditioning on the observed data, and calculating the accuracy of our estimator accordingly. We cannot perform this evaluation without using a prior. But the interesting thing is that, once we decide on this form of evaluation, and once we choose our prior, we have an automatic 'recipe' to create an appropriate estimator. The frequentist has no such recipe. If he wants an unbiased estimator for a complex model, he doesn't have any automated way to build a suitable estimator." ML: "And you do? You can automatically build an estimator?" B: "Yes. I don't have an automatic way to create an unbiased estimator, because I think bias is a bad way to evaluate an estimator. 
But given the conditional-on-data estimation that I like, and the prior, I can connect the prior and the likelihood to give me the estimator." ML: "So anyway, let's recap. We all have different ways to evaluate our methods, and we'll probably never agree on which methods are best." B: "Well, that's not fair. We could mix and match them. If any of us have good labelled training data, we should probably test against it. And generally we all should test as many assumptions as we can. And some 'frequentist' proofs might be fun too, predicting the performance under some presumed model of data generation." F: "Yeah guys. Let's be pragmatic about evaluation. And actually, I'll stop obsessing over infinite-sample properties. I've been asking the scientists to give me an infinite sample, but they still haven't done so. It's time for me to focus again on finite samples." ML: "So, we just have one last question. We've argued a lot about how to evaluate our methods, but how do we create our methods." B: "Ah. As I was getting at earlier, we Bayesians have the more powerful general method. It might be complicated, but we can always write some sort of algorithm (maybe a naive form of MCMC) that will sample from our posterior." F(interjects): "But it might have bias." B: "So might your methods. Need I remind you that the MLE is often biased? Sometimes, you have great difficulty finding unbiased estimators, and even when you do you have a stupid estimator (for some really complex model) that will say the variance is negative. And you call that unbiased. Unbiased, yes. But useful, no!" ML: "OK guys. You're ranting again. Let me ask you a question, F. Have you ever compared the bias of your method with the bias of B's method, when you've both worked on the same problem?" F: "Yes. In fact, I hate to admit it, but B's approach sometimes has lower bias and MSE than my estimator!" ML: "The lesson here is that, while we disagree a little on evaluation, none of us has a monopoly on how to create estimator that have properties we want." B: "Yes, we should read each other's work a bit more. We can give each other inspiration for estimators. We might find that other's estimators work great, out-of-the-box, on our own problems." F: "And I should stop obsessing about bias. An unbiased estimator might have ridiculous variance. I suppose all of us have to 'take responsibility' for the choices we make in how we evaluate and the properties we wish to see in our estimators. We can't hind behind a philosophy. Try all the evaluations you can. And I will keep sneaking a look at the Bayesian literature to get new ideas for estimators!" B:"In fact, a lot of people don't really know what their own philosophy is. I'm not even sure myself. If I use a Bayesian recipe, and then proof some nice theoretical result, doesn't that mean I'm a frequentist? A frequentist cares about above proofs about performance, he doesn't care about recipes. And if I do some train-and-test instead (or as well), does that mean I'm a machine-learner?" ML: "It seems we're all pretty similar then."
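The blood-test exchange above is easy to make concrete. The sketch below (added for illustration, plain Python, with made-up sensitivity and specificity numbers) computes P(Sick | Positive) via Bayes' rule and shows why sensitivity and specificity alone cannot answer the Bayesian's question: the answer changes with the assumed prevalence.

# Sensitivity and specificity are the frequentist's two scores (illustrative values);
# the prevalence is the extra population-level assumption the Bayesian brings in.
def positive_predictive_value(sensitivity, specificity, prevalence):
    p_pos_given_sick = sensitivity
    p_pos_given_healthy = 1.0 - specificity
    p_pos = p_pos_given_sick * prevalence + p_pos_given_healthy * (1.0 - prevalence)
    return p_pos_given_sick * prevalence / p_pos   # Bayes' rule: P(Sick | Positive)

# The same test looks very different at different prevalences:
for prev in (0.001, 0.01, 0.1):
    print(prev, positive_predictive_value(0.95, 0.95, prev))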
107
The Two Cultures: statistics vs. machine learning?
In such a discussion, I always recall the famous Ken Thompson quote: "When in doubt, use brute force." In this case, machine learning is a salvation when the assumptions are hard to pin down; or at least it is much better than guessing them wrong.
108
The Two Cultures: statistics vs. machine learning?
What enforces more separation than there should be is each discipline's lexicon. There are many instances where ML uses one term and Statistics uses a different term, but both refer to the same thing; fine, you would expect that, and it doesn't cause any permanent confusion (e.g., features/attributes versus explanatory variables, or neural network/MLP versus projection pursuit). What's much more troublesome is that both disciplines use the same term to refer to completely different concepts. A few examples:

Kernel function. In ML, kernel functions are used in classifiers (e.g., SVM) and of course in kernel machines. The term refers to a simple function (cosine, sigmoid, RBF, polynomial) that maps non-linearly separable data into a new input space, so that the data become linearly separable in that new space (versus using a non-linear model to begin with). In Statistics, a kernel function is a weighting function used in density estimation to smooth the density curve.

Regression. In ML, predictive algorithms (or implementations of those algorithms) that return class labels, i.e. "classifiers", are sometimes referred to as machines, e.g., support vector machine, kernel machine. The counterpart to a machine is a regressor, which returns a score (a continuous variable), e.g., support vector regression. Rarely do the algorithms have different names based on mode; e.g., "MLP" is the term used whether it returns a class label or a continuous variable. In Statistics, if you are attempting to build a model from empirical data to predict some response variable based on one or more explanatory variables, then you are doing regression analysis, regardless of whether the output is a continuous variable or a class label (e.g., logistic regression). So, for instance, least-squares regression refers to a model that returns a continuous value, while logistic regression returns a probability estimate that is then discretized to a class label.

Bias. In ML, the bias term in an algorithm is conceptually identical to the intercept term used by statisticians in regression modeling. In Statistics, bias is non-random error, i.e., some phenomenon influenced the entire data set in the same direction, which in turn means that this kind of error cannot be removed by resampling or by increasing the sample size.
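To see the "kernel" clash side by side, here is an added sketch (assuming scikit-learn and SciPy are available): the same word names the RBF function inside an SVM and the weighting function inside a kernel density estimate.

import numpy as np
from sklearn.svm import SVC               # "kernel" in the ML sense
from scipy.stats import gaussian_kde      # "kernel" in the statistical sense

rng = np.random.default_rng(3)

# ML usage: an RBF kernel lets the SVM separate a ring-shaped class boundary.
X = rng.standard_normal((200, 2))
y = (np.linalg.norm(X, axis=1) > 1.0).astype(int)
clf = SVC(kernel="rbf").fit(X, y)
print("SVM training accuracy with RBF kernel:", clf.score(X, y))

# Statistical usage: the kernel is the weighting function that smooths a density estimate.
sample = rng.standard_normal(500)
kde = gaussian_kde(sample)                # Gaussian kernel density estimate
print("estimated density at 0:", kde(0.0)[0])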
109
The Two Cultures: statistics vs. machine learning?
The largest differences I've been noticing in the past year are these. Machine learning experts do not spend enough time on fundamentals: many of them do not understand optimal decision making and proper accuracy scoring rules, and they do not understand that predictive methods that make no assumptions require larger sample sizes than those that do. We statisticians spend too little time learning good programming practice and new computational languages, and we are too slow to change when it comes to computing and adopting new methods from the statistical literature.
110
The Two Cultures: statistics vs. machine learning?
Machine learning seems to have its basis in the pragmatic: practical observation or simulation of reality. Even within statistics, mindless "checking of models and assumptions" can lead to discarding methods that are useful. For example, years ago, the very first commercially available (and working) bankruptcy model implemented by the credit bureaus was built with a plain old linear regression model targeting a 0-1 outcome. Technically, that's a bad approach; practically, it worked.
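For concreteness, the sketch below (added here, assuming NumPy and scikit-learn; simulated data, not the credit-bureau model) fits a plain linear regression to a 0-1 outcome next to a logistic regression: the linear fit can return values outside [0, 1], yet in a simple monotone setting like this it still ranks cases sensibly.

import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(4)
x = rng.standard_normal((2000, 1))
p_true = 1.0 / (1.0 + np.exp(-(0.5 + 2.0 * x[:, 0])))   # true logistic probabilities
y = rng.binomial(1, p_true)

ols = LinearRegression().fit(x, y)        # "plain old" linear regression on the 0/1 outcome
logit = LogisticRegression().fit(x, y)

x_new = np.array([[-3.0], [0.0], [3.0]])
print("linear-model scores   :", ols.predict(x_new))             # can fall outside [0, 1]
print("logistic probabilities:", logit.predict_proba(x_new)[:, 1])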
111
The Two Cultures: statistics vs. machine learning?
I disagree with the premise of this question, as it suggests that machine learning and statistics are different or conflicting sciences, when the opposite is true! Machine learning makes extensive use of statistics: a quick survey of any machine learning or data mining software package will reveal clustering techniques such as k-means, also found in statistics; dimension reduction techniques such as principal components analysis, also a statistical technique; and even logistic regression, yet another. In my view the main difference is that traditionally statistics was used to prove a preconceived theory, and the analysis was usually designed around that principal theory. With data mining or machine learning, the opposite approach is usually the norm: we have the outcome and we just want to find a way to predict it, rather than starting from a theory and asking whether this is the outcome.
112
The Two Cultures: statistics vs. machine learning?
The real problem is that this question is misguided: it is not machine learning vs. statistics, it is machine learning vs. real scientific advance. If a machine learning device gives the right predictions 90% of the time but I cannot understand "why", what is the contribution of machine learning to science at large? Imagine if machine learning techniques were used to predict the positions of planets: there would be a lot of smug people thinking that they can accurately predict a number of things with their SVMs, but what would they really know about the problem they have on their hands? Obviously, science does not really advance by numerical predictions; it advances by means of models (mental, mathematical) that let us see far beyond the numbers.
113
The Two Cultures: statistics vs. machine learning?
I have spoken on this in a different forum, the ASA Statistical Consulting eGroup. My response there was more specifically about data mining, but the two go hand in hand. We statisticians have turned up our noses at data miners, computer scientists, and engineers. That is wrong. I think part of the reason it happens is that we see some people in those fields ignoring the stochastic nature of their problem; some statisticians call data mining "data snooping" or "data fishing". Some people do abuse and misuse the methods, but statisticians have fallen behind in data mining and machine learning because we paint them all with a broad brush. Some of the big statistical results have come from outside the field of statistics; boosting is one important example. But statisticians like Breiman, Friedman, Hastie, Tibshirani, Efron, Gelman, and others got it, and their leadership has brought statisticians into the analysis of microarrays and other large-scale inference problems. So while the cultures may never mesh, there is now more cooperation and collaboration among computer scientists, engineers, and statisticians.
114
The Two Cultures: statistics vs. machine learning?
Statistical learning (AKA machine learning) has its origins in the quest to create software by "learning from examples". There are many tasks that we would like computers to do (e.g., computer vision, speech recognition, robot control) that are difficult to program but for which it is easy to provide training examples. The machine learning/statistical learning research community developed algorithms to learn functions from these examples. The loss function was typically related to the performance task (vision, speech recognition). And of course we had no reason to believe there was any simple "model" underlying these tasks (because otherwise we would have coded up that simple program ourselves). Hence, the whole idea of doing statistical inference didn't make any sense. The goal was predictive accuracy and nothing else. Over time, various forces started driving machine learning people to learn more about statistics. One was the need to incorporate background knowledge and other constraints on the learning process. This led people to consider generative probabilistic models, because these make it easy to incorporate prior knowledge through the structure of the model and priors on model parameters and structure. This led the field to discover the rich statistical literature in this area. Another force was the discovery of the phenomenon of overfitting. This led the ML community to learn about cross-validation and regularization, and again we discovered the rich statistical literature on the subject. Nonetheless, the focus of most machine learning work is to create a system that exhibits a certain level of performance, rather than to make inferences about an unknown process. This is the fundamental difference between ML and statistics.
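As an added sketch of the overfitting and regularization point (assuming NumPy and scikit-learn): a high-degree polynomial fit with no penalty typically fits the training set almost perfectly but generalizes worse than a ridge-penalized fit of the same complexity.

import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
x = rng.uniform(-1.0, 1.0, size=(60, 1))
y = np.sin(3.0 * x[:, 0]) + 0.3 * rng.standard_normal(60)     # noisy nonlinear target
x_tr, x_te, y_tr, y_te = train_test_split(x, y, test_size=0.5, random_state=0)

for name, reg in [("unregularized", LinearRegression()), ("ridge", Ridge(alpha=1.0))]:
    model = make_pipeline(PolynomialFeatures(degree=15), reg).fit(x_tr, y_tr)
    print(name,
          "train R^2:", round(model.score(x_tr, y_tr), 3),
          "test R^2:", round(model.score(x_te, y_te), 3))      # the penalized fit usually wins on test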
115
The Two Cultures: statistics vs. machine learning?
Ideally one should have a thorough knowledge of both statistics and machine learning before attempting to answer this question. I am very much a neophyte to ML, so forgive me if what I say is naive; I have limited experience with SVMs and regression trees. What strikes me as lacking in ML from a stats point of view is a well-developed concept of inference. Inference in ML seems to boil down almost exclusively to predictive accuracy, as measured by (for example) mean classification error (MCE), balanced error rate (BER), or similar. ML is in the very good habit of dividing data randomly (usually 2:1) into a training set and a test set. Models are fit using the training set and performance (MCE, BER, etc.) is assessed using the test set. This is an excellent practice and is only slowly making its way into mainstream statistics. ML also makes heavy use of resampling methods (especially cross-validation), whose origins appear to be in statistics. However, ML seems to lack a fully developed concept of inference beyond predictive accuracy. This has two results. 1) There does not seem to be an appreciation that any prediction (parameter estimation, etc.) is subject to random error and perhaps systematic error (bias). Statisticians accept that this is an inevitable part of prediction and will try to estimate the error; statistical techniques try to find an estimate that has minimum bias and random error. These techniques are usually driven by a model of the data-generating process, but not always (e.g., the bootstrap). 2) There does not seem to be a deep understanding in ML of the limits of applying a model to new data, i.e., to a new sample from the same population (in spite of what I said earlier about the training/test-set approach). Various statistical techniques, among them cross-validation and penalty terms applied to likelihood-based methods, guide statisticians in the trade-off between parsimony and model complexity. Such guidelines in ML seem much more ad hoc. I've seen several papers in ML where cross-validation is used to optimise the fitting of many models on a training dataset, producing better and better fit as the model complexity increases. There appears to be little appreciation that the tiny gains in accuracy are not worth the extra complexity, and this naturally leads to over-fitting. Then all these optimised models are applied to the test set as a check on predictive performance and to prevent overfitting. Two things have been forgotten here: first, the predictive performance itself has a stochastic component; secondly, multiple tests against a test set will again result in over-fitting. The "best" model will be chosen by the ML practitioner without a full appreciation that he/she has cherry-picked from one realisation of the many possible outcomes of this experiment. The best of several tested models will almost certainly not reflect its true performance on new data. Anyway, my 2 cents' worth: we have much to learn from each other.
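The "cherry-picking from one realisation" point can be simulated directly. In this added sketch (plain NumPy), fifty candidate models all have the same true accuracy, yet the one that happens to score best on a fixed test set looks better than it really is, and the optimism disappears on fresh data.

import numpy as np

rng = np.random.default_rng(5)
n_models, n_test, n_fresh = 50, 200, 200
true_acc = 0.70                                            # every candidate has the same true accuracy

test_scores = rng.binomial(n_test, true_acc, size=n_models) / n_test
best = test_scores.argmax()                                # pick the winner on the shared test set
fresh_score = rng.binomial(n_fresh, true_acc) / n_fresh    # the same "model" on genuinely new data

print("best model's test-set accuracy:", test_scores[best])   # biased upward (winner's curse)
print("same model on fresh data      :", fresh_score)         # back near the true 0.70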
116
The Two Cultures: statistics vs. machine learning?
This question can also be extended to the so-called super-culture of data science, discussed in David Donoho's 2015 paper 50 Years of Data Science, where he confronts different points of view from statistics and computer science (including machine learning), for instance direct standpoints (from different people) such as:
Why Do We Need Data Science When We've Had Statistics for Centuries?
Data Science is statistics.
Data Science without statistics is possible, even desirable.
Statistics is the least important part of data science.
along with historical and philosophical considerations, for instance:
It is striking how, when I review a presentation on today's data science, in which statistics is superficially given pretty short shrift, I can't avoid noticing that the underlying tools, examples, and ideas which are being taught as data science were all literally invented by someone trained in Ph.D. statistics, and in many cases the actual software being used was developed by someone with an MA or Ph.D. in statistics. The accumulated efforts of statisticians over centuries are just too overwhelming to be papered over completely, and can't be hidden in the teaching, research, and exercise of Data Science.
This essay has generated many responses and contributions to the debate.
117
The Two Cultures: statistics vs. machine learning?
I don't really know what the conceptual/historical difference between machine learning and statistics is, but I am sure it is not that obvious... and I am not really interested in knowing whether I am a machine learner or a statistician; I think 10 years after Breiman's paper, lots of people are both...
Anyway, I found the question about the predictive accuracy of models interesting. We have to remember that it is not always possible to measure the accuracy of a model, and more precisely we are most often implicitly doing some modeling when measuring errors. For example, the mean absolute error in time series forecasting is a mean over time, and it measures the performance of a procedure to forecast the median, under the assumption that performance is, in some sense, stationary and shows some ergodic property. If (for some reason) you need to forecast the mean temperature on earth for the next 50 years, and if your modeling performs well for the last 50 years... it does not mean that... More generally (if I remember, it is called no free lunch), you can't do anything without modeling...
In addition, I think statistics is trying to find an answer to the question "is something significant or not?"; this is a very important question in science and can't be answered through a learning process. To quote John Tukey (was he a statistician?):
The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data
Hope this helps!
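A small R sketch (not from the original answer; the data are invented) illustrates the claim above that scoring forecasts by mean absolute error implicitly targets the median, while mean squared error targets the mean; for a skewed series the two differ noticeably.

    set.seed(1)
    x <- rexp(1000)   # a skewed sample standing in for observed values

    mae <- function(m) mean(abs(x - m))   # mean absolute error of a constant forecast m
    mse <- function(m) mean((x - m)^2)    # mean squared error of a constant forecast m

    optimize(mae, interval = range(x))$minimum  # close to median(x), about 0.7 here
    optimize(mse, interval = range(x))$minimum  # close to mean(x), about 1.0 here

Choosing one error measure over the other is therefore already a modeling decision about what quantity you are forecasting.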
118
The Two Cultures: statistics vs. machine learning?
Clearly, the two fields face similar but different problems, in similar but not identical ways, with analogous but not identical concepts, and work in different departments, journals and conferences. When I read Cressie and Read's Power Divergence Statistic it all snapped into place for me. Their formula generalizes commonly used test statistics into one that varies by a single exponent, lambda. There are two special cases, lambda=0 and lambda=1. Computer Science and Statistics fit along a continuum (that presumably could include other points). At one value of lambda, you get statistics commonly cited in Statistics circles, and at the other you get statistics commonly cited in Comp Sci circles.
Statistics (lambda = 1):
Sums of squares appear a lot
Variance as a measure of variability
Covariance as a measure of association
Chi-squared statistic as a measure of model fit
Computer science (lambda = 0):
Sums of logs appear a lot
Entropy as a measure of variability
Mutual information as a measure of association
G-squared statistic as a measure of model fit
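For reference, the power divergence family can be written (reconstructed from memory rather than quoted from Cressie and Read, so treat the exact normalization as an assumption) for observed counts $O_i$ and expected counts $E_i$ as
$$\text{PD}(\lambda) \;=\; \frac{2}{\lambda(\lambda+1)}\sum_i O_i\left[\left(\frac{O_i}{E_i}\right)^{\lambda}-1\right].$$
With matching totals $\sum_i O_i=\sum_i E_i$, setting $\lambda=1$ recovers Pearson's chi-squared statistic $\sum_i (O_i-E_i)^2/E_i$, and the limit $\lambda\to 0$ gives the likelihood-ratio statistic $G^2=2\sum_i O_i\log(O_i/E_i)$, which matches the two columns above.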
119
The Two Cultures: statistics vs. machine learning?
You run a fancy computer algorithm once -- and you get a CS conference presentation/statistics paper (wow, what a fast convergence!). You commercialize it and run it 1 million times -- and you go broke (ouch, why am I getting useless and irreproducible results all the time???) unless you know how to employ probability and statistics to generalize the properties of the algorithm.
120
The Two Cultures: statistics vs. machine learning?
There is an area of application of statistics where a focus on the data generating model makes a lot of sense. In designed experiments, e.g., animal studies, clinical trials, industrial DOEs, statisticians can have a hand in what the data generating model is. ML tends not to spend much time on this very important problem, as ML usually focuses on another very important problem: prediction based on "large" observational data. That is not to say that ML can't be applied to "large" designed experiments, but it is important to acknowledge that statistics has particular expertise on "small" data problems arising from resource-constrained experiments. At the end of the day I think we can all agree to use what works best to solve the problem at hand. E.g., we may have a designed experiment that produces very wide data with the goal of prediction. Statistical design principles are very useful here, and ML methods could be useful to build the predictor.
121
The Two Cultures: statistics vs. machine learning?
I think machine learning needs to be a sub-branch under statistics, just as, in my view, chemistry needs to be a sub-branch under physics. The physics-inspired view into chemistry seems pretty solid (I guess). I don't think there is any chemical reaction whose equivalent is not known in physical terms. I think physics has done an amazing job of explaining everything we can see at the chemistry level. Now the physicists' challenge seems to be explaining the tiny mysteries at the quantum level, under extreme conditions that are not observable.
Now back to machine learning. I think it too should be a sub-branch under statistics (just as chemistry is a sub-branch of physics). But it seems to me that, somehow, either the current state of machine learning, or of statistics, is not mature enough to perfectly realize this. But in the long run, I think one must become a sub-branch of the other, and I think it is ML that will end up under statistics. I personally think that "learning" and "analyzing samples" to estimate/infer functions or predictions are all essentially questions of statistics.
122
The Two Cultures: statistics vs. machine learning?
From the Coursera course "Data Science in real life" by Brian Caffo:
Machine learning
Emphasizes predictions
Evaluates results via prediction performance
Concern for overfitting but not model complexity per se
Emphasis on performance
Generalizability is obtained through performance on novel datasets
Usually, no superpopulation model specified
Concern over performance and robustness
Traditional statistical analysis
Emphasizes superpopulation inference
Focuses on a-priori hypotheses
Simpler models preferred over complex ones (parsimony), even if the more complex models perform slightly better
Emphasis on parameter interpretability
Statistical modeling or sampling assumptions connect data to a population of interest
Concern over assumptions and robustness
123
The Two Cultures: statistics vs. machine learning?
As a computer scientist, I am always intrigued when looking at statistical approaches. To me it often looks like the statistical models used in statistical analysis are way too complex for the data in many situations!
For example, there is a strong link between data compression and statistics. Basically, one needs a good statistical model which is able to predict the data well, and this yields very good compression of the data. In computer science, when compressing data, both the complexity of the statistical model and the accuracy of the prediction are very important. Nobody ever wants a data file (containing sound, image or video data) to become bigger after compression!
I find that there are more dynamic developments in computer science regarding statistics, like for example Minimum Description Length and Normalized Maximum Likelihood.
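To make the compression link concrete, here is a minimal R sketch (not from the original answer; the data are made up): under an ideal entropy coder, a model that assigns probability $p$ to the data needs about $-\log_2 p$ bits, so a model that predicts the data better also compresses it better.

    set.seed(7)
    # A biased binary source: roughly 90% ones, 10% zeros
    x <- rbinom(10000, size = 1, prob = 0.9)

    # Ideal code length (in bits) of the whole sequence under a Bernoulli(p) model
    code_length <- function(x, p) -sum(x * log2(p) + (1 - x) * log2(1 - p))

    code_length(x, 0.5)      # "no model": exactly 1 bit per symbol, 10000 bits
    code_length(x, mean(x))  # fitted model: roughly 4700 bits

Minimum Description Length then adds the cost of describing the model itself, which is what penalizes needlessly complex models.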
124
How to understand the drawbacks of K-means
While I like David Robinson's answer here a lot, here's some additional critique of k-means.
Clustering non-clustered data
Run k-means on uniform data, and you will still get clusters! It doesn't tell you when the data just does not cluster, and can take your research into a dead end this way.
Sensitive to scale
Rescaling your datasets will completely change results. While this itself is not bad, not realizing that you have to spend extra attention on scaling your data is bad. Scaling factors are $d$ extra hidden parameters in k-means that "default" to 1 and thus are easily overlooked, yet have a major impact (but of course this applies to many other algorithms, too). This is probably what you referred to as "all variables have the same variance". Except that ideally, you would also consider non-linear scaling when appropriate. Also be aware that it is only a heuristic to scale every axis to have unit variance. This doesn't ensure that k-means works. Scaling depends on the meaning of your data set. And if you have more than one cluster, you would want every cluster (independently) to have the same variance in every variable, too.
Here is a classic counterexample of data sets that k-means cannot cluster. Both axes are i.i.d. in each cluster, so it would be sufficient to do this in 1 dimension. But the clusters have varying variances, and k-means thus splits them incorrectly. I don't think this counterexample for k-means is covered by your points: All clusters are spherical (i.i.d. Gaussian). All axes have the same distribution and thus variance. Both clusters have 500 elements each. Yet, k-means still fails badly (and it gets worse if I increase the variance beyond 0.5 for the larger cluster). But: it is not the algorithm that failed. It's the assumptions, which don't hold. K-means is working perfectly, it's just optimizing the wrong criterion.
Even on perfect data sets, it can get stuck in a local minimum
Below is the best of 10 runs of k-means on the classic A3 data set. This is a synthetic data set, designed for k-means: 50 clusters, each of Gaussian shape, reasonably well separated. Yet, it was only with k-means++ and 100 iterations that I got the expected result... (below is 10 iterations of regular k-means, for illustration). You'll quickly find many clusters in this data set where k-means failed to find the correct structure. For example, in the bottom right, a cluster was broken into three parts. But there is no way k-means is going to move one of these centroids to an entirely different place of the data set - it's trapped in a local minimum (and this already was the best of 10 runs!). And there are many such local minima in this data set. Very often, when you get two samples from the same cluster, it will get stuck in a minimum where this cluster remains split and two other clusters are merged instead. Not always, but very often. So you need a lot of iterations to have a lucky pick. With 100 iterations of k-means, I still counted 6 errors, and with 1000 iterations I got this down to 4 errors. K-means++, by the way it weights the random samples, works much better on this data set.
Means are continuous
While you can run k-means on binary data (or one-hot encoded categorical data), the results will not be binary anymore. So you do get a result out, but you may be unable to interpret it in the end, because it has a different data type than your original data.
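A minimal R sketch of the first point (not from the original answer; the data are uniform by construction): k-means happily reports clusters on data that has no cluster structure at all, with nothing in the output to warn you.

    set.seed(123)
    # 1000 points drawn uniformly on the unit square: no cluster structure whatsoever
    x <- cbind(runif(1000), runif(1000))

    fit <- kmeans(x, centers = 3, nstart = 25)
    table(fit$cluster)         # three "clusters" of similar size are reported anyway
    fit$betweenss / fit$totss  # and a seemingly respectable explained-variance ratio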
Hidden assumption: SSE is worth minimizing
This is essentially already present in the above answer, nicely demonstrated with linear regression. There are some use cases where k-means makes perfect sense. When Lloyd had to decode PCM signals, he did know the number of different tones, and least squared error minimizes the chance of decoding errors. And in color quantization of images, you do minimize color error when reducing the palette, too. But on your data, is the sum of squared deviations a meaningful criterion to minimize? In the above counterexample, the variance is not worth minimizing, because it depends on the cluster. Instead, a Gaussian mixture model should be fit to the data, as in the figure below. (But this is not the ultimate method either. It's just as easy to construct data that does not satisfy the "mixture of k Gaussian distributions" assumption, e.g., by adding a lot of background noise.)
Too easy to use badly
All in all, it's too easy to throw k-means at your data and nevertheless get a result out (that is pretty much random, but you won't notice). I think it would be better to have a method which can fail if you haven't understood your data...
K-means as quantization
If you want a theoretical model of what k-means does, consider it a quantization approach, not a clustering algorithm. The objective of k-means - minimizing the squared error - is a reasonable choice if you replace every object by its nearest centroid. (It makes a lot less sense if you inspect the groups' original data, IMHO.) There are very good use cases for this. The original PCM use case of Lloyd comes to mind, or e.g. color quantization (Wikipedia). If you want to reduce an image to k colors, you do want to replace every pixel with the nearest centroid. Minimizing the squared color deviation then does measure L2 optimality in image approximation using $k$ colors only. This quantization is probably quite similar to the linear regression example: linear regression finds the best linear model, and k-means finds (sometimes) the best reduction of a multidimensional data set to k values, where "best" is the least squared error.
IMHO, k-means is a good quantization algorithm (see the first image in this post - if you want to approximate the data set with two points, this is a reasonable choice!). If you want to do cluster analysis, as in discovering structure, then k-means is IMHO not the best choice. It tends to cluster when there are no clusters, and it cannot recognize various structures you do see a lot in data.
Fine print: all images were generated with ELKI. Data were generated using the .xml data generation format, but they are so basic it is not worth sharing them.
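As a toy illustration of the quantization view (a sketch with invented "pixels"; real color quantization would of course run on actual image data):

    set.seed(99)
    # Fake "image": 5000 pixels with RGB values in [0, 1]
    pixels <- matrix(runif(5000 * 3), ncol = 3,
                     dimnames = list(NULL, c("R", "G", "B")))

    k   <- 16                                  # reduce to a 16-color palette
    fit <- kmeans(pixels, centers = k, nstart = 10)

    # Replace every pixel by its nearest palette entry (its centroid)
    quantized <- fit$centers[fit$cluster, ]
    mean((pixels - quantized)^2)               # the L2 approximation error k-means minimizes

Here the goal really is approximation with $k$ representative values, not the discovery of "natural groups", which is exactly the use case the quantization view describes.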
125
How to understand the drawbacks of K-means
What a great question- it's a chance to show how one would inspect the drawbacks and assumptions of any statistical method. Namely: make up some data and try the algorithm on it! We'll consider two of your assumptions, and we'll see what happens to the k-means algorithm when those assumptions are broken. We'll stick to 2-dimensional data since it's easy to visualize. (Thanks to the curse of dimensionality, adding additional dimensions is likely to make these problems more severe, not less.) We'll work with the statistical programming language R: you can find the full code here (and the post in blog form here).
Diversion: Anscombe's Quartet
First, an analogy. Imagine someone argued the following:
I read some material about the drawbacks of linear regression- that it expects a linear trend, that the residuals are normally distributed, and that there are no outliers. But all linear regression is doing is minimizing the sum of squared errors (SSE) from the predicted line. That's an optimization problem that can be solved no matter what the shape of the curve or the distribution of the residuals is. Thus, linear regression requires no assumptions to work.
Well, yes, linear regression works by minimizing the sum of squared residuals. But that by itself is not the goal of a regression: what we're trying to do is draw a line that serves as a reliable, unbiased predictor of y based on x. The Gauss-Markov theorem tells us that minimizing the SSE accomplishes that goal- but that theorem rests on some very specific assumptions. If those assumptions are broken, you can still minimize the SSE, but it might not do anything. Imagine saying "You drive a car by pushing the pedal: driving is essentially a 'pedal-pushing process.' The pedal can be pushed no matter how much gas is in the tank. Therefore, even if the tank is empty, you can still push the pedal and drive the car."
But talk is cheap. Let's look at the cold, hard data. Or actually, made-up data. This is in fact my favorite made-up data: Anscombe's Quartet. Created in 1973 by statistician Francis Anscombe, this delightful concoction illustrates the folly of trusting statistical methods blindly. Each of the datasets has the same linear regression slope, intercept, p-value and $R^2$- and yet at a glance we can see that only one of them, I, is appropriate for linear regression. In II it suggests the wrong shape, in III it is skewed by a single outlier- and in IV there is clearly no trend at all! One could say "Linear regression is still working in those cases, because it's minimizing the sum of squares of the residuals." But what a Pyrrhic victory! Linear regression will always draw a line, but if it's a meaningless line, who cares?
So now we see that just because an optimization can be performed doesn't mean we're accomplishing our goal. And we see that making up data, and visualizing it, is a good way to inspect the assumptions of a model. Hang on to that intuition, we're going to need it in a minute.
Broken Assumption: Non-Spherical Data
You argue that the k-means algorithm will work fine on non-spherical clusters. Non-spherical clusters like... these? Maybe this isn't what you were expecting- but it's a perfectly reasonable way to construct clusters. Looking at this image, we humans immediately recognize two natural groups of points- there's no mistaking them. So let's see how k-means does: assignments are shown in color, imputed centers are shown as X's. Well, that's not right.
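Here is a minimal R sketch in the spirit of that experiment (the radii, noise level and cluster sizes are invented, so the details will differ from the original figures):

    set.seed(2014)
    n <- 250
    # A tight blob in the middle and a ring around it: two obvious groups
    inner <- cbind(rnorm(n, sd = 0.3), rnorm(n, sd = 0.3))
    theta <- runif(n, 0, 2 * pi)
    outer <- cbind(3 * cos(theta), 3 * sin(theta)) + matrix(rnorm(2 * n, sd = 0.1), ncol = 2)
    x     <- rbind(inner, outer)
    truth <- rep(1:2, each = n)

    # k-means cuts the picture in half instead of separating blob from ring
    table(truth, kmeans(x, centers = 2, nstart = 25)$cluster)

    # single-linkage hierarchical clustering recovers the two groups
    hc <- hclust(dist(x), method = "single")
    table(truth, cutree(hc, k = 2))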
K-means was trying to fit a square peg in a round hole- trying to find nice centers with neat spheres around them- and it failed. Yes, it's still minimizing the within-cluster sum of squares- but just like in Anscombe's Quartet above, it's a Pyrrhic victory! You might say "That's not a fair example... no clustering method could correctly find clusters that are that weird." Not true! Try single-linkage hierarchical clustering: Nailed it! This is because single-linkage hierarchical clustering makes the right assumptions for this dataset. (There's a whole other class of situations where it fails.)
You might say "That's a single, extreme, pathological case." But it's not! For instance, you can make the outer group a semi-circle instead of a circle, and you'll see k-means still does terribly (and hierarchical clustering still does well). I could come up with other problematic situations easily, and that's just in two dimensions. When you're clustering 16-dimensional data, there are all kinds of pathologies that could arise. Lastly, I should note that k-means is still salvageable! If you start by transforming your data into polar coordinates, the clustering now works: That's why understanding the assumptions underlying a method is essential: it doesn't just tell you when a method has drawbacks, it tells you how to fix them.
Broken Assumption: Unevenly Sized Clusters
What if the clusters have an uneven number of points- does that also break k-means clustering? Well, consider this set of clusters, of sizes 20, 100, 500. I've generated each from a multivariate Gaussian: This looks like k-means could probably find those clusters, right? Everything seems to be generated into neat and tidy groups. So let's try k-means: Ouch. What happened here is a bit subtler. In its quest to minimize the within-cluster sum of squares, the k-means algorithm gives more "weight" to larger clusters. In practice, that means it's happy to let that small cluster end up far away from any center, while it uses those centers to "split up" a much larger cluster. If you play with these examples a little (R code here!), you'll see that you can construct far more scenarios where k-means gets it embarrassingly wrong.
Conclusion: No Free Lunch
There's a charming construction in mathematical folklore, formalized by Wolpert and Macready, called the "No Free Lunch Theorem." It's probably my favorite theorem in machine learning philosophy, and I relish any chance to bring it up (did I mention I love this question?). The basic idea is stated (non-rigorously) as this: "When averaged across all possible situations, every algorithm performs equally well." Sound counterintuitive? Consider that for every case where an algorithm works, I could construct a situation where it fails terribly. Linear regression assumes your data falls along a line- but what if it follows a sinusoidal wave? A t-test assumes each sample comes from a normal distribution: what if you throw in an outlier? Any gradient ascent algorithm can get trapped in local maxima, and any supervised classification can be tricked into overfitting.
What does this mean? It means that assumptions are where your power comes from! When Netflix recommends movies to you, it's assuming that if you like one movie, you'll like similar ones (and vice versa). Imagine a world where that wasn't true, and your tastes were perfectly random- scattered haphazardly across genres, actors and directors. Their recommendation algorithm would fail terribly.
Would it make sense to say "Well, it's still minimizing some expected squared error, so the algorithm is still working"? You can't make a recommendation algorithm without making some assumptions about users' tastes- just like you can't make a clustering algorithm without making some assumptions about the nature of those clusters. So don't just accept these drawbacks. Know them, so they can inform your choice of algorithms. Understand them, so you can tweak your algorithm and transform your data to solve them. And love them, because if your model could never be wrong, that means it will never be right.
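Returning to the unevenly sized clusters above, here is a minimal R sketch of that failure mode (the sizes, means and spreads are invented, not the ones behind the original figures):

    set.seed(5)
    sizes <- c(20, 100, 500)
    mu    <- rbind(c(0, 0), c(3.5, 0), c(10, 10))  # small and medium groups fairly close together

    x <- do.call(rbind, lapply(1:3, function(i)
      cbind(rnorm(sizes[i], mu[i, 1]), rnorm(sizes[i], mu[i, 2]))))
    truth <- rep(1:3, times = sizes)

    fit <- kmeans(x, centers = 3, nstart = 50)
    table(truth, fit$cluster)
    # With these made-up sizes, the lowest-SSE solution tends to spend two centers
    # on the 500-point group and park one center between the two smaller groups:
    # shaving variance off many points beats honoring the 20-point cluster.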
126
How to understand the drawbacks of K-means
Logically speaking, the drawbacks of K-means are:
it needs linear separability of the clusters;
you need to specify the number of clusters;
algorithmics: Lloyd's procedure does not converge to the true global optimum even with a good initialization when there are many points or dimensions.
But K-means is better than we usually think. I've become quite enthusiastic about it after testing it against other clustering methods (spectral, density...) and LDA in real-life text classification of one million texts: K-means had far better accuracy than LDA, for example (88% vs 59%). Some other clustering methods were good, but K-means was close to the top... and more affordable in terms of complexity. I've never read about a clustering method that is universally better on a wide range of problems. Not saying K-means is universally better either, just that there is no universal clustering superhero as far as I know. Many articles, many methods, not a true revolution (in my personal limited experience of testing some of them).
The main reason why the logical drawbacks of K-means are often only apparent is that clustering points in a 2D plane is something you rarely do in machine learning. Many things from geometric intuition that are true in 2D, 3D... are irrelevant in rather high-dimensional or abstract vector spaces (like bags of words, vectors of variables...).
Linear separability: you rarely have to deal with circular clusters in real-life data. It's even better to assume they do not exist in these cases. Allowing your algorithm to search for them would allow it to find odd circular clusters in the noise. The linear assumption in K-means often makes it more robust.
Number of clusters: there is often no true ideal number of clusters that you wish to see. For text classification, for example, there may be 100 categories, 105, 110... it's all rather subjective. Specifying the number of clusters becomes equivalent to specifying a global granularity. All clustering methods need a granularity specification anyway.
Global optimum: I think it's a true issue. The true abstract K-means, which would consist in finding the global minimum for the S.O.D., is fundamentally NP-hard. Only Lloyd's algorithm is affordable and it is... very imperfect. We have really seen that being close to the real minimum (thanks to replications) clearly improved the quality of the results. Replication of K-means is an improvement but not a perfect solution. For a big dataset you would need $10^{\text{a lot}}$ replications to have a small chance of finding the true minimum. Other methods like "finish it with greedy search" (proposed in Matlab) are astronomically costly on big datasets.
But all clustering algorithms have such limitations. For example, in spectral clustering you can't find the true eigenvectors, only approximations. For the same computation time, a quite optimized LDA library did worse than our home-made (not perfectly optimized) K-means. Since then, I think a bit differently.
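The replication point is easy to see with the nstart argument of R's kmeans function (a small sketch on synthetic data; the grid layout and cluster sizes are invented):

    set.seed(10)
    # 25 well-separated Gaussian clusters on a 5 x 5 grid, 40 points each
    centers_true <- as.matrix(expand.grid(seq(0, 40, by = 10), seq(0, 40, by = 10)))
    x <- do.call(rbind, lapply(1:nrow(centers_true), function(i)
      cbind(rnorm(40, centers_true[i, 1]), rnorm(40, centers_true[i, 2]))))

    # A single run versus the best of 100 random restarts
    kmeans(x, centers = 25, nstart = 1,   iter.max = 100)$tot.withinss
    kmeans(x, centers = 25, nstart = 100, iter.max = 100)$tot.withinss

The single run usually lands in a visibly worse local optimum; replications help a lot, but they still come with no guarantee of reaching the global optimum.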
127
How to understand the drawbacks of K-means
I would just like to add to @DavidRobinson's answer that clustering to minimal total cluster variance is actually a combinatorial optimization problem, of which k-Means is just one technique - and given the latter's "one shot", local "steepest descent" nature, a pretty bad one too. Also, trying to substantially improve the "bare bones" k-Means by somehow (but quickly!) figuring out where the cluster seeds should be is doomed from the outset: since the seeds impact (drastically!) the final clusters, it amounts to "knowing" what the optimum is... before actually computing it. However, as with most optimization problems, it may nevertheless be amenable to a serious optimization technique. One of them very closely fits the structure of the problem (as the NFL requires!), and it certainly shows in its outcomes. I don't want to make any ads here (it would be - and rightly so - against etiquette), so if you're interested, just read it here and make your own judgement. That being said, I agree with @ttnphns that k-Means certainly does not identify a Gaussian Mixture - the cost functions of the two problems are completely different. It turns out that finding the best-fitting (in terms of probability of the model given the data) Gaussian Mixture is also a combinatorial optimization problem - and one for which a serious optimization technique exists as well. Once again, no ads: you can reach your own conclusion here - I will just say that the algorithm discussed there can, indeed, correctly identify clusters like the last image in @DavidRobinson's post. It even correctly (i.e., in a mathematically well-defined way) solves the perennial problem of outliers, i.e., data points that do not belong to any of the clusters because they're just completely random (notoriously, they completely derail k-Means, for instance). This is done by having one additional, uniform distribution compete with the Gaussians... and the splendid result is that on uniformly distributed data, it indeed reports there's nothing in there (I've never seen that anywhere else). Now obviously, according to the NFL, and as you rightly pointed out, even globally optimal Gaussian Mixtures with outlier identification do rely on a prior assumption - namely that the data are, indeed, distributed normally. Fortunately though, thanks to the Central Limit Theorem, numerous natural phenomena do comply with that assumption. DISCLAIMER: with my deepest apologies, I wrote both of the papers above, and the algorithms they discuss. P.S. I met Macready at a conference once - an extremely bright and nice guy!
128
How to understand the drawbacks of K-means
To understand the drawbacks of K-means, I like to think of what the model behind it is. K-means is a special case of Gaussian Mixture Models (GMM). GMM assumes that the data come from a mixture of $K$ Gaussian distributions. In other words, there is a certain probability that the data come from one of the $K$ Gaussian distributions. If we make the probability of being in each of the $K$ Gaussians equal, make the covariance matrices $\sigma^2 \mathbf{I}$, where $\sigma^2$ is the same fixed constant for each of the $K$ Gaussians, and take the limit as $\sigma^2 \rightarrow 0$, then we get K-means (a small numerical sketch of this limit follows below). So, what does this tell us about the drawbacks of K-means? K-means leads to clusters that look multivariate Gaussian. Since the variance across the variables is the same, K-means leads to clusters that look spherical. Not only do clusters look spherical, but since the covariance matrix is the same across the $K$ groups, K-means leads to clusters that look like the same sphere. K-means also tends towards equal-sized groups. K-means is actually quite a restrictive algorithm. The advantage is that, with the assumptions above, you can perform the algorithm quite quickly. But if clustering performance is your top concern, K-means is usually way too restrictive in real situations.
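As a small illustration of the $\sigma^2 \rightarrow 0$ limit described above, the following sketch (invented centroids and points, plain NumPy) computes the GMM responsibilities under a shared spherical covariance and shows them hardening into K-means' nearest-centroid assignment as $\sigma^2$ shrinks.

```python
# Numerical sketch of the sigma^2 -> 0 limit: with equal weights and a shared
# spherical covariance sigma^2 * I, GMM responsibilities harden into the
# K-means nearest-centroid assignment.
import numpy as np

rng = np.random.default_rng(0)
centroids = np.array([[0.0, 0.0], [3.0, 1.0], [1.0, 4.0]])  # K = 3 (made up)
X = rng.normal(size=(5, 2)) * 2.0                            # a few test points

def responsibilities(X, centroids, sigma2):
    # log isotropic-Gaussian densities; shared constants cancel in the softmax
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    logp = -d2 / (2.0 * sigma2)
    logp -= logp.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logp)
    return p / p.sum(axis=1, keepdims=True)

for sigma2 in (5.0, 0.5, 0.01):
    print(f"sigma^2 = {sigma2}:")
    print(responsibilities(X, centroids, sigma2).round(3))

# K-means' hard assignment is the limit of the soft assignment above.
print("nearest centroid:",
      ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1).argmin(axis=1))
```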
129
Bayesian and frequentist reasoning in plain English
Here is how I would explain the basic difference to my grandma: I have misplaced my phone somewhere in the home. I can use the phone locator on the base of the instrument to locate the phone, and when I press the phone locator the phone starts beeping. Problem: Which area of my home should I search? Frequentist Reasoning: I can hear the phone beeping. I also have a mental model which helps me identify the area from which the sound is coming. Therefore, upon hearing the beep, I infer the area of my home I must search to locate the phone. Bayesian Reasoning: I can hear the phone beeping. Now, apart from a mental model which helps me identify the area from which the sound is coming, I also know the locations where I have misplaced the phone in the past. So, I combine my inferences using the beeps and my prior information about the locations where I have misplaced the phone in the past to identify an area I must search to locate the phone.
130
Bayesian and frequentist reasoning in plain English
Tongue firmly in cheek: A Bayesian defines a "probability" in exactly the same way that most non-statisticians do - namely an indication of the plausibility of a proposition or a situation. If you ask them a question about a particular proposition or situation, they will give you a direct answer assigning probabilities describing the plausibilities of the possible outcomes for the particular situation (and state their prior assumptions). A Frequentist is someone that believes probabilities represent long run frequencies with which events occur; if needs be, they will invent a fictitious population from which your particular situation could be considered a random sample so that they can meaningfully talk about long run frequencies. If you ask them a question about a particular situation, they will not give a direct answer, but instead make a statement about this (possibly imaginary) population. Many non-frequentist statisticians will be easily confused by the answer and interpret it as Bayesian probability about the particular situation. However, it is important to note that most Frequentist methods have a Bayesian equivalent that in most circumstances will give essentially the same result, the difference is largely a matter of philosophy, and in practice it is a matter of "horses for courses". As you may have guessed, I am a Bayesian and an engineer. ;o)
131
Bayesian and frequentist reasoning in plain English
Very crudely I would say that: Frequentist: Sampling is infinite and decision rules can be sharp. Data are a repeatable random sample - there is a frequency. Underlying parameters are fixed, i.e., they remain constant during this repeatable sampling process. Bayesian: Unknown quantities are treated probabilistically and the state of the world can always be updated. Data are observed from the realised sample. Parameters are unknown and described probabilistically. It is the data which are fixed. There is a brilliant blog post which gives an in-depth example of how a Bayesian and a Frequentist would tackle the same problem. Why not answer the problem for yourself and then check (a quick calculation for both camps is sketched below)? The problem (taken from Panos Ipeirotis' blog): You have a coin that when flipped ends up heads with probability $p$ and ends up tails with probability $1-p$. (The value of $p$ is unknown.) Trying to estimate $p$, you flip the coin 100 times. It ends up heads 71 times. Then you have to decide on the following event: "In the next two tosses we will get two heads in a row." Would you bet that the event will happen or that it will not happen?
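For the curious, here is one quick way each camp might work the bet out numerically. This is my own sketch, not the blog's solution, and the uniform Beta(1, 1) prior on the Bayesian side is an assumption made purely for illustration.

```python
# Frequentist plug-in vs. Bayesian posterior predictive for "two heads in a
# row", assuming a uniform Beta(1, 1) prior on p for the Bayesian side.
from scipy import stats

heads, flips = 71, 100

# Frequentist: estimate p by the observed proportion, then P(HH) = p_hat^2.
p_hat = heads / flips
print("frequentist plug-in      :", p_hat ** 2)          # 0.5041

# Bayesian: posterior is Beta(1 + 71, 1 + 29); P(HH | data) = E[p^2]
# under the posterior, i.e. posterior mean squared plus posterior variance.
post = stats.beta(1 + heads, 1 + flips - heads)
print("Bayesian post. predictive:", post.mean() ** 2 + post.var())  # ~0.5003

# Both exceed 1/2, so at even odds both would (narrowly) bet on the event.
```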
132
Bayesian and frequentist reasoning in plain English
Let us say a man rolls a six-sided die and it has outcomes 1, 2, 3, 4, 5, or 6. Furthermore, he says that if it lands on a 3, he'll give you a free textbook. Then informally: The Frequentist would say that each outcome has an equal 1 in 6 chance of occurring. She views probability as being derived from long-run frequency distributions. The Bayesian, however, would say: hang on a second, I know that man, he's David Blaine, a famous trickster! I have a feeling he's up to something. I'm going to say that there's only a 1% chance of it landing on a 3, BUT I'll re-evaluate that belief and change it the more times he rolls the die. If I see the other numbers come up equally often, then I'll iteratively increase the chance from 1% to something slightly higher; otherwise I'll reduce it even further (a toy version of this updating is sketched below). She views probability as degrees of belief in a proposition.
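A toy sketch of that iterative belief update (numbers invented; a Beta(1, 99) prior is used here just to encode the initial 1% belief):

```python
# Toy belief update: prior Beta(1, 99) puts P(lands on 3) near 1%; each roll
# nudges the belief toward the observed frequency of threes.
import numpy as np

rng = np.random.default_rng(42)
a, b = 1.0, 99.0                       # pseudo-counts for (three, not-three)

for n, roll in enumerate(rng.integers(1, 7, size=120), start=1):
    a += (roll == 3)                   # conjugate Beta-Bernoulli update
    b += (roll != 3)
    if n in (1, 10, 30, 120):
        print(f"after {n:3d} rolls: P(lands on 3) ~ {a / (a + b):.3f}")
# The belief moves from 0.01 toward the empirical rate (about 1/6 for a
# fair die), slowly at first because the prior is quite strong.
```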
133
Bayesian and frequentist reasoning in plain English
Just a little bit of fun... A Bayesian is one who, vaguely expecting a horse, and catching a glimpse of a donkey, strongly believes he has seen a mule. From this site: http://www2.isye.gatech.edu/~brani/isyebayes/jokes.html and from the same site, a nice essay... "An Intuitive Explanation of Bayes' Theorem" http://yudkowsky.net/rational/bayes
134
Bayesian and frequentist reasoning in plain English
The Bayesian is asked to make bets, which may include anything from which fly will crawl up a wall faster to which medicine will save most lives, or which prisoners should go to jail. He has a big box with a handle. He knows that if he puts absolutely everything he knows into the box, including his personal opinion, and turns the handle, it will make the best possible decision for him. The frequentist is asked to write reports. He has a big black book of rules. If the situation he is asked to make a report on is covered by his rulebook, he can follow the rules and write a report so carefully worded that it is wrong, at worst, one time in 100 (or one time in 20, or one time in whatever the specification for his report says). The frequentist knows (because he has written reports on it) that the Bayesian sometimes makes bets that, in the worst case, when his personal opinion is wrong, could turn out badly. The frequentist also knows (for the same reason) that if he bets against the Bayesian every time he differs from him, then, over the long run, he will lose.
135
Bayesian and frequentist reasoning in plain English
In plain English, I would say that Bayesian and Frequentist reasoning are distinguished by two different ways of answering the question: What is probability? Most differences will essentially boil down to how each answers this question, for it basically defines the domain of valid applications of the theory. Now you can't really give either answer in terms of "plain English" without further generating more questions. For me the answer is (as you could probably guess): probability is logic. My "non-plain English" reason for this is that the calculus of propositions is a special case of the calculus of probabilities, if we represent truth by $1$ and falsehood by $0$. Additionally, the calculus of probabilities can be derived from the calculus of propositions. This conforms with "Bayesian" reasoning most closely - although it also extends Bayesian reasoning in applications by providing principles to assign probabilities, in addition to principles to manipulate them. Of course, this leads to the follow-up question "what is logic?" For me, the closest thing I could give as an answer to this question is "logic is the common sense judgements of a rational person, with a given set of assumptions" (what is a rational person? etc. etc.). Logic has all the same features that Bayesian reasoning has. For example, logic does not tell you what to assume or what is "absolutely true". It only tells you how the truth of one proposition is related to the truth of another one. You always have to supply a logical system with "axioms" for it to get started on the conclusions. It also has the same limitations in that you can get arbitrary results from contradictory axioms. But "axioms" are nothing but prior probabilities which have been set to $1$. For me, to reject Bayesian reasoning is to reject logic. For if you accept logic, then because Bayesian reasoning "logically flows from logic" (how's that for plain English :P ), you must also accept Bayesian reasoning. For the frequentist reasoning, we have the answer: probability is frequency, although I'm not sure "frequency" is a plain English term in the way it is used here - perhaps "proportion" is a better word. I wanted to add into the frequentist answer that the probability of an event is thought to be a real, measurable (observable?) quantity, which exists independently of the person/object who is calculating it. But I couldn't do this in a "plain English" way. So perhaps a "plain English" version of one of the differences could be that frequentist reasoning is an attempt at reasoning from "absolute" probabilities, whereas Bayesian reasoning is an attempt at reasoning from "relative" probabilities. Another difference is that frequentist foundations are more vague in how you translate the real-world problem into the abstract mathematics of the theory. A good example is the use of "random variables" in the theory - they have a precise definition in the abstract world of mathematics, but there is no unambiguous procedure one can use to decide if some observed quantity is or isn't a "random variable". In the Bayesian way of reasoning, the notion of a "random variable" is not necessary. A probability distribution is assigned to a quantity because it is unknown - which means that it cannot be deduced logically from the information we have. This provides at once a simple connection between the observable quantity and the theory - as "being unknown" is unambiguous.
You can also see in the above example a further difference in these two ways of thinking - "random" vs "unknown". "Randomness" is phrased in such a way that it seems to be a property of the actual quantity. Conversely, "being unknown" depends on which person you are asking about that quantity - hence it is a property of the statistician doing the analysis. This gives rise to the "objective" versus "subjective" adjectives often attached to each theory. It is easy to show that "randomness" cannot be a property of some standard examples, by simply asking two frequentists who are given different information about the same quantity to decide if it's "random". One is the usual Bernoulli urn: frequentist 1 is blindfolded while drawing, whereas frequentist 2 is standing over the urn, watching frequentist 1 draw the balls from the urn. If the declaration of "randomness" is a property of the balls in the urn, then it cannot depend on the different knowledge of frequentists 1 and 2 - and hence the two frequentists should give the same declaration of "random" or "not random".
136
Bayesian and frequentist reasoning in plain English
In reality, I think much of the philosophy surrounding the issue is just grandstanding. That's not to dismiss the debate, but it is a word of caution. Sometimes, practical matters take priority - I'll give an example below. Also, you could just as easily argue that there are more than two approaches: Neyman-Pearson ('frequentist'), likelihood-based approaches, and fully Bayesian. A senior colleague recently reminded me that "many people in common language talk about frequentist and Bayesian. I think a more valid distinction is likelihood-based and frequentist. Both maximum likelihood and Bayesian methods adhere to the likelihood principle whereas frequentist methods don't." I'll start off with a very simple practical example: We have a patient. The patient is either healthy (H) or sick (S). We will perform a test on the patient, and the result will either be Positive (+) or Negative (-). If the patient is sick, they will always get a Positive result. We'll call this the correct (C) result and say that $$ P(+ | S ) = 1 $$ or $$ P(Correct | S) = 1 $$ If the patient is healthy, the test will be negative 95% of the time, but there will be some false positives. $$ P(- | H) = 0.95 $$ $$ P(+ | H) = 0.05 $$ In other words, the probability of the test being Correct, for Healthy people, is 95%. So, the test is either 100% accurate or 95% accurate, depending on whether the patient is healthy or sick. Taken together, this means the test is at least 95% accurate. So far so good. Those are the statements that would be made by a frequentist. Those statements are quite simple to understand and are true. There's no need to waffle about a 'frequentist interpretation'. But things get interesting when you try to turn things around. Given the test result, what can you learn about the health of the patient? Given a negative test result, the patient is obviously healthy, as there are no false negatives. But we must also consider the case where the test is positive. Was the test positive because the patient was actually sick, or was it a false positive? This is where the frequentist and the Bayesian diverge. Everybody will agree that this cannot be answered at the moment. The frequentist will refuse to answer. The Bayesian will be prepared to give you an answer, but you'll have to give the Bayesian a prior first - i.e. tell them what proportion of the patients are sick. To recap, the following statements are true: For healthy patients, the test is very accurate. For sick patients, the test is very accurate. If you are satisfied with statements such as that, then you are using frequentist interpretations. This might change from project to project, depending on what sort of problems you're looking at. But you might want to make different statements and answer the following question: For those patients that got a positive test result, how accurate is the test? This requires a prior and a Bayesian approach (a small numerical sketch with an assumed prior follows this answer). Note also that this is the only question of interest to the doctor. The doctor will say "I know that the patients will either get a positive result or a negative result. I also know that a negative result means the patient is healthy and can be sent home. The only patients that interest me now are those that got a positive result -- are they sick?" To summarize: In examples such as this, the Bayesian will agree with everything said by the frequentist. But the Bayesian will argue that the frequentist's statements, while true, are not very useful; and will argue that the useful questions can only be answered with a prior.
A frequentist will consider each possible value of the parameter (H or S) in turn and ask "if the parameter is equal to this value, what is the probability of my test being correct?" A Bayesian will instead consider each possible observed value (+ or -) in turn and ask "If I imagine I have just observed that value, what does that tell me about the conditional probability of H-versus-S?"
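For completeness, here is the Bayesian computation the answer says requires a prior. The prevalence is not given in the answer, so the 1% figure below is purely an assumed value for illustration.

```python
# Bayes' theorem with the numbers stated in the answer and an ASSUMED
# prevalence of 1% (the answer deliberately does not supply this prior).
p_pos_given_sick = 1.00      # no false negatives, as stated
p_pos_given_healthy = 0.05   # 5% false positives, as stated
p_sick = 0.01                # assumed prior P(S), for illustration only

p_pos = p_pos_given_sick * p_sick + p_pos_given_healthy * (1 - p_sick)
p_sick_given_pos = p_pos_given_sick * p_sick / p_pos
print(f"P(sick | positive) = {p_sick_given_pos:.3f}")   # ~0.168
```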
137
Bayesian and frequentist reasoning in plain English
Bayesian and frequentist statistics are compatible in that they can be understood as two limiting cases of assessing the probability of future events based on past events and an assumed model, if one admits that in the limit of a very large number of observations, no uncertainty about the system remains, and that in this sense a very large number of observations is equal to knowing the parameters of the model. Assume we have made some observations, e.g., the outcome of 10 coin flips. In Bayesian statistics, you start from what you have observed and then assess the probability of future observations or model parameters. In frequentist statistics, you start from an idea (hypothesis) of what is true by assuming scenarios of a large number of observations that have been made, e.g., the coin is unbiased and gives heads 50% of the time if you throw it many, many times. Based on these scenarios of a large number of observations (= hypothesis), you assess the frequency of making observations like the one you did, i.e., the frequency of the different outcomes of 10 coin flips. It is only then that you take your actual outcome, compare it to the frequency of possible outcomes, and decide whether the outcome belongs to those that are expected to occur with high frequency. If this is the case, you conclude that the observation made does not contradict your scenarios (= hypothesis). Otherwise, you conclude that the observation made is incompatible with your scenarios, and you reject the hypothesis (a small sketch of this recipe follows below). Thus Bayesian statistics starts from what has been observed and assesses possible future outcomes. Frequentist statistics starts with an abstract experiment of what would be observed if one assumes something, and only then compares the outcomes of the abstract experiment with what was actually observed. Otherwise the two approaches are compatible. They both assess the probability of future observations based on some observations made or hypothesized. I started to write this up in a more formal way: Positioning Bayesian inference as a particular application of frequentist inference and vice versa. figshare. http://dx.doi.org/10.6084/m9.figshare.867707 The manuscript is new. If you happen to read it and have comments, please let me know.
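A small sketch of the frequentist recipe described above, using an invented outcome of 8 heads in 10 flips and the fair-coin hypothesis (scipy assumed):

```python
# Frequentist recipe for the coin example: fix the fair-coin hypothesis,
# then ask how often outcomes like the observed one occur under it.
from scipy import stats

heads, flips = 8, 10                       # invented outcome

sampling_dist = stats.binom(flips, 0.5)    # outcomes expected under the hypothesis
print("P(exactly 8 heads | fair):", sampling_dist.pmf(heads))   # ~0.044

# Two-sided p-value: frequency of outcomes at least this extreme under H0.
print("p-value:", stats.binomtest(heads, flips, p=0.5).pvalue)  # ~0.109
```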
138
Bayesian and frequentist reasoning in plain English
I would say that they look at probability in different ways. The Bayesian is subjective and uses a priori beliefs to define a prior probability distribution on the possible values of the unknown parameters. So he relies on a theory of probability like de Finetti's. The frequentist sees probability as something that has to do with a limiting frequency based on an observed proportion. This is in line with the theory of probability as developed by Kolmogorov and von Mises. A frequentist does parametric inference using just the likelihood function. A Bayesian takes that, multiplies it by a prior, and normalizes it to get the posterior distribution that he uses for inference (a tiny grid-based sketch of this recipe follows below).
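A literal, grid-based rendering of that last sentence; the data (7 heads in 10 Bernoulli trials) and the flat prior are invented for illustration:

```python
# Grid sketch: posterior is likelihood times prior, renormalized over a grid.
import numpy as np

theta = np.linspace(0.001, 0.999, 999)      # candidate parameter values
likelihood = theta ** 7 * (1 - theta) ** 3  # binomial likelihood, constants dropped
prior = np.ones_like(theta)                 # flat prior

unnormalized = likelihood * prior
posterior = unnormalized / unnormalized.sum()   # normalize over the grid

# The frequentist stops at the likelihood (e.g. reports its maximizer);
# the Bayesian reports the whole normalized posterior.
print("MLE            :", theta[np.argmax(likelihood)])   # ~0.70
print("posterior mean :", (theta * posterior).sum())      # ~0.667 (= 8/12)
```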
139
Bayesian and frequentist reasoning in plain English
The simplest and clearest explanation I've seen, from Larry Wasserman's notes on Statistical Machine Learning (with the disclaimer: "at the risk of oversimplifying"): Frequentist versus Bayesian Methods. In frequentist inference, probabilities are interpreted as long run frequencies. The goal is to create procedures with long run frequency guarantees. In Bayesian inference, probabilities are interpreted as subjective degrees of belief. The goal is to state and analyze your beliefs. What's tricky is that we work with two different interpretations of probability, which can get philosophical. For example, if I say "this coin has a 1/2 probability of landing heads", what does that mean? The frequentist viewpoint is that if we performed many coin flips, then the counts ("frequencies") of heads divided by the total number of flips should more or less get closer and closer to 1/2. There is nothing subjective about this, which can be viewed as a good thing; however, we can't really perform infinite flips, and in some cases we can't repeat the experiment at all, so an argument about limits or long-run frequencies might be in some ways unsatisfactory. On the other hand, the Bayesian viewpoint is subjective, in that we view probability as some kind of "degree of belief", or "gambling odds" if we specifically use de Finetti's interpretation. For example, two people may come into the coin-flipping experiment with different prior beliefs about the coin (the prior probability). After the experiment has collected data/evidence and the people have updated their beliefs in accordance with Bayes' theorem, they leave with different ideas of what the posterior probability for the coin is, and both people can justify their beliefs as "logical"/"rational"/"coherent" (depending on the exact flavor of Bayesian interpretation); a small sketch of this updating is given below. In practice, statisticians can use either kind of method as long as they are careful with their assumptions and conclusions. Nowadays Bayesian methods are becoming increasingly popular with better computers and algorithms like MCMC. Also, in finite-dimensional models, Bayesian inference may have the same guarantees of consistency and rate of convergence as frequentist models. I don't think there is any way around really understanding Bayesian and frequentist reasoning without confronting (or at least acknowledging) the interpretations of probability.
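Here is a small sketch of the "two people, different priors" point: both start from invented Beta priors, see the same (made-up) coin flips, and apply Bayes' theorem via the conjugate update.

```python
# Two different Beta priors, the same (made-up) evidence, conjugate updates.
from scipy import stats

flips, heads = 100, 62                  # shared data

priors = {"optimist": (8, 2),           # initially believes heads is likely
          "skeptic":  (2, 8)}           # initially believes the opposite

for name, (a, b) in priors.items():
    post = stats.beta(a + heads, b + flips - heads)
    print(f"{name:8s}: prior mean {a / (a + b):.2f} "
          f"-> posterior mean {post.mean():.3f}")
# The posteriors still differ (the priors were subjective), but both have
# been pulled toward the data; with more flips they agree ever more closely.
```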
140
Bayesian and frequentist reasoning in plain English
I've attempted a side-by-side comparison of the two schools of thought here and have more background information here.
141
Bayesian and frequentist reasoning in plain English
The way I answer this question is that frequentists compare the data they see to what they expected. That is, they have a mental model of how frequently something should happen, and then see data and how often it did happen; i.e., how likely are the data they have seen given the model they chose. Bayesian people, on the other hand, combine their mental models. That is, they have a model based on their previous experiences that tells them what they think the data should look like, and then they combine this with the data they observe to settle upon some "posterior" belief; i.e., they find the probability that the model they chose is valid given the data they have observed.
142
Bayesian and frequentist reasoning in plain English
In short, in plain English: In Bayesian inference, parameters vary and the data are fixed. One works with the posterior $P(\theta|X)=\frac{P(X|\theta)P(\theta)}{P(X)}$, where $\theta$ is treated as a random variable and the observed data $X$ are conditioned on. In frequentist inference, parameters are fixed and the data vary. One works with the likelihood $P(X|\theta)$, where $\theta$ is an unknown but fixed constant and probability statements concern the data $X$. References: https://stats.stackexchange.com/a/513020/103153 https://math.stackexchange.com/a/2126820/351322
What is the difference between fixed effect, random effect and mixed effect models?
Statistician Andrew Gelman says that the terms 'fixed effect' and 'random effect' have variable meanings depending on who uses them. Perhaps you can pick out which one of the 5 definitions applies to your case. In general it may be better to either look for equations which describe the probability model the authors are using (when reading) or write out the full probability model you want to use (when writing). Here we outline five definitions that we have seen:
(1) Fixed effects are constant across individuals, and random effects vary. For example, in a growth study, a model with random intercepts $a_i$ and fixed slope $b$ corresponds to parallel lines for different individuals $i$, or the model $y_{it} = a_i + b t$. Kreft and De Leeuw (1998) thus distinguish between fixed and random coefficients.
(2) Effects are fixed if they are interesting in themselves or random if there is interest in the underlying population. Searle, Casella, and McCulloch (1992, Section 1.4) explore this distinction in depth.
(3) “When a sample exhausts the population, the corresponding variable is fixed; when the sample is a small (i.e., negligible) part of the population the corresponding variable is random.” (Green and Tukey, 1960)
(4) “If an effect is assumed to be a realized value of a random variable, it is called a random effect.” (LaMotte, 1983)
(5) Fixed effects are estimated using least squares (or, more generally, maximum likelihood) and random effects are estimated with shrinkage (“linear unbiased prediction” in the terminology of Robinson, 1991). This definition is standard in the multilevel modeling literature (see, for example, Snijders and Bosker, 1999, Section 4.2) and in econometrics.
[Gelman, 2004, Analysis of variance—why it is more important than ever. The Annals of Statistics.]
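To make definition (1) concrete, here is a minimal sketch in R (simulated growth data, my own illustration, not Gelman's) of the model $y_{it} = a_i + b t$ written as an lme4 formula: a common fixed slope for time and a random intercept per individual.

library(lme4)
set.seed(1)
d <- expand.grid(id = factor(1:20), t = 1:5)
d$y <- rnorm(20, sd = 2)[d$id] + 0.7 * d$t + rnorm(nrow(d), sd = 0.5)
fit <- lmer(y ~ t + (1 | id), data = d)   # fixed slope b for t, random intercept a_i per individual
fixef(fit)                                # the common intercept and slope b
head(ranef(fit)$id)                       # estimated individual intercept deviations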
What is the difference between fixed effect, random effect and mixed effect models?
There are good books on this such as Gelman and Hill. What follows is essentially a summary of their perspective. First of all, you should not get too caught up in the terminology. In statistics, jargon should never be used as a substitute for a mathematical understanding of the models themselves. That is especially true for random and mixed effects models. "Mixed" just means the model has both fixed and random effects, so let's focus on the difference between fixed and random. Random versus Fixed Effects Let's say you have a model with a categorical predictor, which divides your observations into groups according to the category values.* The model coefficients, or "effects", associated to that predictor can be either fixed or random. The most important practical difference between the two is this: Random effects are estimated with partial pooling, while fixed effects are not. Partial pooling means that, if you have few data points in a group, the group's effect estimate will be based partially on the more abundant data from other groups. This can be a nice compromise between estimating an effect by completely pooling all groups, which masks group-level variation, and estimating an effect for all groups completely separately, which could give poor estimates for low-sample groups. Random effects are simply the extension of the partial pooling technique as a general-purpose statistical model. This enables principled application of the idea to a wide variety of situations, including multiple predictors, mixed continuous and categorical variables, and complex correlation structures. (But with great power comes great responsibility: the complexity of modeling and inference is substantially increased, and can give rise to subtle biases that require considerable sophistication to avoid.) To motivate the random effects model, ask yourself: why would you partial pool? Probably because you think the little subgroups are part of some bigger group with a common mean effect. The subgroup means can deviate a bit from the big group mean, but not by an arbitrary amount. To formalize that idea, we posit that the deviations follow a distribution, typically Gaussian. That's where the "random" in random effects comes in: we're assuming the deviations of subgroups from a parent follow the distribution of a random variable. Once you have this idea in mind, the mixed-effects model equations follow naturally. Unfortunately, users of mixed effect models often have false preconceptions about what random effects are and how they differ from fixed effects. People hear "random" and think it means something very special about the system being modeled, like fixed effects have to be used when something is "fixed" while random effects have to be used when something is "randomly sampled". But there's nothing particularly random about assuming that model coefficients come from a distribution; it's just a soft constraint, similar to the $\ell_2$ penalty applied to model coefficients in ridge regression. There are many situations when you might or might not want to use random effects, and they don't necessarily have much to do with the distinction between "fixed" and "random" quantities. Unfortunately, the concept confusion caused by these terms has led to a profusion of conflicting definitions. Of the five definitions at this link, only #4 is completely correct in the general case, but it's also completely uninformative. 
You have to read entire papers and books (or failing that, this post) to understand what that definition implies in practical work. Example Let's look at a case where random effects modeling might be useful. Suppose you want to estimate average US household income by ZIP code. You have a large dataset containing observations of households' incomes and ZIP codes. Some ZIP codes are well represented in the dataset, but others have only a couple households. For your initial model you would most likely take the mean income in each ZIP. This will work well when you have lots of data for a ZIP, but the estimates for your poorly sampled ZIPs will suffer from high variance. You can mitigate this by using a shrinkage estimator (aka partial pooling), which will push extreme values towards the mean income across all ZIP codes. But how much shrinkage/pooling should you do for a particular ZIP? Intuitively, it should depend on the following: How many observations you have in that ZIP How many observations you have overall The individual-level mean and variance of household income across all ZIP codes The group-level variance in mean household income across all ZIP codes If you model ZIP code as a random effect, the mean income estimate in all ZIP codes will be subjected to a statistically well-founded shrinkage, taking into account all the factors above. The best part is that random and mixed effects models automatically handle (4), the variability estimation, for all random effects in the model. This is harder than it seems at first glance: you could try the variance of the sample mean for each ZIP, but this will be biased high, because some of the variance between estimates for different ZIPs is just sampling variance. In a random effects model, the inference process accounts for sampling variance and shrinks the variance estimate accordingly. Having accounted for (1)-(4), a random/mixed effects model is able to determine the appropriate shrinkage for low-sample groups. It can also handle much more complicated models with many different predictors. Relationship to Hierarchical Bayesian Modeling If this sounds like hierarchical Bayesian modeling to you, you're right - it is a close relative but not identical. Mixed effects models are hierarchical in that they posit distributions for latent, unobserved parameters, but they are typically not fully Bayesian because the top-level hyperparameters will not be given proper priors. For example, in the above example we would most likely treat the mean income in a given ZIP as a sample from a normal distribution, with unknown mean and sigma to be estimated by the mixed-effects fitting process. However, a (non-Bayesian) mixed effects model will typically not have a prior on the unknown mean and sigma, so it's not fully Bayesian. That said, with a decent-sized data set, the standard mixed effects model and the fully Bayesian variant will often give very similar results. *While many treatments of this topic focus on a narrow definition of "group", the concept is in fact very flexible: it is just a set of observations that share a common property. A group could be composed of multiple observations of a single person, or multiple people in a school, or multiple schools in a district, or multiple varieties of a single kind of fruit, or multiple kinds of vegetable from the same harvest, or multiple harvests of the same kind of vegetable, etc. Any categorical variable can be used as a grouping variable.
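To see the partial pooling numerically, here is a minimal sketch in R with simulated data standing in for the ZIP-code example (my own illustration; the group sizes, means, and variances are made up). It compares the raw per-group means with the shrunken estimates from a random-intercept model.

library(lme4)
set.seed(1)
sizes <- c(2, 3, 5, 8, 15, 25, 40, 60, 80, 120)          # some "ZIPs" have very few households
zip   <- factor(rep(seq_along(sizes), sizes))
true_mean <- rnorm(length(sizes), mean = 60, sd = 10)     # true group-level mean incomes
d <- data.frame(zip = zip,
                income = true_mean[zip] + rnorm(length(zip), sd = 20))
raw <- tapply(d$income, d$zip, mean)                      # no pooling: noisy for small groups
fit <- lmer(income ~ 1 + (1 | zip), data = d)
shrunk <- fixef(fit)[1] + ranef(fit)$zip[, 1]             # partially pooled group means
round(cbind(n = sizes, raw = raw, shrunk = shrunk), 1)

In a typical run the estimates for the smallest groups are pulled furthest toward the overall mean, which is points (1)-(4) above in action.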
What is the difference between fixed effect, random effect and mixed effect models?
I have written about this in a book chapter on mixed models (chapter 13 in Fox, Negrete-Yankelevich, and Sosa 2014); the relevant pages (pp. 311-315) are available on Google Books. I think the question reduces to "what are the definitions of fixed and random effects?" (a "mixed model" is just a model that contains both). My discussion says a bit less about their formal definition (for which I would defer to the Gelman paper linked by @JohnSalvatier's answer above) and more about their practical properties and utility. Here are some excerpts: The traditional view of random effects is as a way to do correct statistical tests when some observations are correlated. We can also think of random effects as a way to combine information from different levels within a grouping variable. Random effects are especially useful when we have (1) lots of levels (e.g., many species or blocks), (2) relatively little data on each level (although we need multiple samples from most of the levels), and (3) uneven sampling across levels (box 13.1). Frequentists and Bayesians define random effects somewhat differently, which affects the way they use them. Frequentists define random effects as categorical variables whose levels are chosen at random from a larger population, e.g., species chosen at random from a list of endemic species. Bayesians define random effects as sets of variables whose parameters are [all] drawn from [the same] distribution. The frequentist definition is philosophically coherent, and you will encounter researchers (including reviewers and supervisors) who insist on it, but it can be practically problematic. For example, it implies that you can’t use species as random effect when you have observed all of the species at your field site—since the list of species is not a sample from a larger population—or use year as a random effect, since researchers rarely run an experiment in randomly sampled years—they usually use either a series of consecutive years, or the haphazard set of years when they could get into the field. Random effects can also be described as predictor variables where you are interested in making inferences about the distribution of values (i.e., the variance among the values of the response at different levels) rather than in testing the differences of values between particular levels. People sometimes say that random effects are “factors that you aren’t interested in.” This is not always true. While it is often the case in ecological experiments (where variation among sites is usually just a nuisance), it is sometimes of great interest, for example in evolutionary studies where the variation among genotypes is the raw material for natural selection, or in demographic studies where among-year variation lowers long-term growth rates. In some cases fixed effects are also used to control for uninteresting variation, e.g., using mass as a covariate to control for effects of body size. You will also hear that “you can’t say anything about the (predicted) value of a conditional mode.” This is not true either—you can’t formally test a null hypothesis that the value is equal to zero, or that the values of two different levels are equal, but it is still perfectly sensible to look at the predicted value, and even to compute a standard error of the predicted value (e.g., see the error bars around the conditional modes in figure 13.1). The Bayesian framework has a simpler definition of random effects. 
Under a Bayesian approach, a fixed effect is one where we estimate each parameter (e.g., the mean for each species within a genus) independently (with independently specified priors), while for a random effect the parameters for each level are modeled as being drawn from a distribution (usually Normal); in standard statistical notation, $\textrm{species\_mean} \sim \mathcal{N}(\textrm{genus\_mean}, \sigma^2_{\textrm{species}})$. I said above that random effects are most useful when the grouping variable has many measured levels. Conversely, random effects are generally ineffective when the grouping variable has too few levels. You usually can’t use random effects when the grouping variable has fewer than five levels, and random effects variance estimates are unstable with fewer than eight levels, because you are trying to estimate a variance from a very small sample.
What is the difference between fixed effect, random effect and mixed effect models?
Fixed effect: something the experimenter directly manipulates and that is often repeatable, e.g., drug administration - one group gets the drug, one group gets a placebo.
Random effect: a source of random variation across experimental units, e.g., individuals drawn (at random) from a population for a clinical trial; the random-effects term estimates this variability.
Mixed effect: includes both. The fixed effects estimate the population-level coefficients, while the random effects account for individual differences in response, e.g., if each person receives both the drug and the placebo on different occasions, the fixed effect estimates the effect of the drug, and the random-effects terms allow each person to respond to the drug differently. General categories of mixed-effects designs: repeated measures, longitudinal, hierarchical, split-plot.
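A minimal sketch of the "each person responds differently" idea, using lme4's built-in sleepstudy data in place of the drug/placebo trial (my own illustration, not the answerer's): a fixed, population-level slope for Days plus a per-subject random intercept and slope.

library(lme4)
fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
fixef(fit)                 # population-average intercept and effect of Days
head(ranef(fit)$Subject)   # how each subject's intercept and slope deviate from the average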
What is the difference between fixed effect, random effect and mixed effect models?
Econometric perspective I came to this question from here, a possible duplicate. There are several excellent answers already, but as stated in the accepted answer, there are many different (but related) uses of the term, so it might be valuable to give the perspective as employed in econometrics, which does not yet seem fully addressed here.
Consider a linear panel data model: $$ y_{it}=X_{it}\delta+\alpha_i+\eta_{it}, $$ the so-called error component model. Here, $\alpha_i$ is what is sometimes called individual-specific heterogeneity, the error component that is constant over time. The other error component $\eta_{it}$ is "idiosyncratic", varying both over units and over time. A reason to use a random effects approach is that the presence of $\alpha_i$ will lead to an error covariance matrix that is not "spherical" (so not a multiple of the identity matrix), so that a GLS-type approach like random effects will be more efficient than OLS. If, however, the $\alpha_i$ correlate with the regressors $X_{it}$ - as will be the case in many typical applications - one of the underlying assumptions for consistency of the standard textbook (at least what is standard in econometric textbooks) random effects estimator, viz. $Cov(\alpha_i,X_{it})=0$, is violated. Then, a fixed effects approach, which effectively fits such intercepts, will be more convincing.
The following figure aims to illustrate this point. The raw correlation between $y$ and $X$ is positive. But the observations belonging to one unit (color) exhibit a negative relationship - this is what we would like to identify, because this is the reaction of $y_{it}$ to a change in $X_{it}$. Also, there is correlation between the $\alpha_i$ and $X_{it}$: if the former are individual-specific intercepts (i.e., expected values for unit $i$ when $X_{it}=0$), we see that the intercept for, e.g., the lightblue panel unit is much smaller than that for the brown unit. At the same time, the lightblue panel unit has much smaller regressor values $X_{it}$. So, pooled OLS would be the wrong strategy here, because it would result in a positive estimate of $\delta$, as this estimator basically ignores the colors. RE would also be biased, being a weighted version of FE and the between estimator, which regresses the "time"-averages over $t$ onto each other. The latter however also requires lack of correlation of $\alpha_i$ and $X_{it}$. This bias however vanishes as $T$, the number of time periods per unit (m in the code below), increases, as the weight on FE then tends to one (see e.g. Hsiao, Analysis of Panel Data, Sec. 3.3.2).
Here is the code that generates the data and which produces a positive RE estimate and a "correct", negative FE estimate. (That said, the RE estimates will also often be negative for other seeds, see above.)
library(Jmisc)        # loaded in the original; not needed for the code shown below
library(plm)
library(RColorBrewer)

# FE illustration
set.seed(324)
m = 8                 # "time" periods per unit
n = 12                # panel units
step = 5
# unit-specific intercepts alpha_i, increasing in i (and hence correlated with X below)
alpha = runif(n, seq(0, step*n, by = step), seq(step, step*n + step, by = step))
beta = -1             # the true within-unit slope we want to recover
y = X = matrix(NA, nrow = m, ncol = n)
for (i in 1:n) {
  X[,i] = runif(m, i, i+1)   # immediately overwritten below; kept so the seeded results are unchanged
  X[,i] = rnorm(m, i)        # regressor values centered at i, so higher-alpha units have higher X
  y[,i] = alpha[i] + X[,i]*beta + rnorm(m, sd = .75)
}
stackX = as.vector(X)
stackY = as.vector(y)
darkcols <- brewer.pal(12, "Paired")
plot(stackX, stackY, col = rep(darkcols, each = m), pch = 19)
unit = rep(1:n, each = m)
# first two columns are for plm to understand the panel structure
paneldata = data.frame(unit, rep(1:m, n), stackY, stackX)
fe <- plm(stackY ~ stackX, data = paneldata, model = "within")
re <- plm(stackY ~ stackX, data = paneldata, model = "random")

The output:

> fe
Model Formula: stackY ~ stackX
Coefficients:
 stackX
-1.0451

> re
Model Formula: stackY ~ stackX
Coefficients:
(Intercept)      stackX
   18.34586     0.77031
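A natural follow-up (my addition, not part of the original answer): the divergence between the FE and RE slope estimates seen here is exactly what a Hausman test formalises, and with the objects above it can be run directly as

phtest(fe, re)   # from the same plm package; a small p-value favours the FE specification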
What is the difference between fixed effect, random effect and mixed effect models?
The distinction is only meaningful in the context of non-Bayesian statistics. In Bayesian statistics, all model parameters are "random".
What is the difference between fixed effect, random effect and mixed effect models?
The distinction is only meaningful in the context of non-Bayesian statistics. In Bayesian statistics, all model parameters are "random".
What is the difference between fixed effect, random effect and mixed effect models? The distinction is only meaningful in the context of non-Bayesian statistics. In Bayesian statistics, all model parameters are "random".
What is the difference between fixed effect, random effect and mixed effect models? The distinction is only meaningful in the context of non-Bayesian statistics. In Bayesian statistics, all model parameters are "random".
149
What is the difference between fixed effect, random effect and mixed effect models?
In econometrics, the terms are typically applied in generalized linear models, where the model is of the form
$$y_{it} = g(x_{it} \beta + \alpha_i + u_{it}).$$
Random effects: when $\alpha_i \perp x_{it}$ (the individual effect is independent of the regressors).
Fixed effects: when $\alpha_i \not\perp x_{it}$.
In linear models, the presence of a random effect does not result in inconsistency of the OLS estimator. However, using a random effects estimator (like feasible generalized least squares) will result in a more efficient estimator. In non-linear models, such as probit, tobit, ..., the presence of a random effect will, in general, result in an inconsistent estimator; using a random effects estimator will then restore consistency. For both linear and non-linear models, fixed effects results in a bias. However, in linear models there are transformations that can be used (such as first differences or demeaning), where OLS on the transformed data will result in consistent estimates. For non-linear models, there are a few exceptions where transformations exist, fixed effects logit being one example.
Example: random effects probit. Suppose
$$ y^*_{it} = x_{it} \beta + \alpha_i + u_{it}, \quad \alpha_i \sim \mathcal{N}(0,\sigma_\alpha^2), \quad u_{it} \sim \mathcal{N}(0,1), $$
and the observed outcome is
$$ y_{it} = \mathbb{1}(y^*_{it} > 0). $$
The pooled maximum likelihood estimator maximizes the sample average of the log-likelihood:
$$ \hat{\beta} = \arg \max_\beta N^{-1} \sum_{i=1}^N \log \prod_{t=1}^T [G(x_{it}\beta)]^{y_{it}} [1 - G(x_{it}\beta)]^{1-y_{it}}. $$
Of course, here the log and the product simplify, but for pedagogical reasons this makes the equation more comparable to the random effects estimator, which has the form
$$ \hat{\beta} = \arg \max_\beta N^{-1} \sum_{i=1}^N \log \int \prod_{t=1}^T [G(x_{it}\beta + \sigma_\alpha a)]^{y_{it}} [1 - G(x_{it}\beta + \sigma_\alpha a)]^{1-y_{it}} \phi(a) \, \mathrm{d}a. $$
We can, for example, approximate the integral by simulation, taking $R$ draws of standard normals and evaluating the likelihood for each:
$$ \hat{\beta} = \arg \max_\beta N^{-1} \sum_{i=1}^N \log R^{-1} \sum_{r=1}^R \prod_{t=1}^T [G(x_{it}\beta + \sigma_\alpha a_r)]^{y_{it}} [1 - G(x_{it}\beta + \sigma_\alpha a_r)]^{1-y_{it}}, \quad a_r \sim \mathcal{N}(0,1). $$
The intuition is the following: we don't know what type, $\alpha_i$, each observation is. Instead, we evaluate the product of likelihoods over time for a sequence of draws. The most likely type for observation $i$ will have the highest likelihood in all periods and will therefore dominate the likelihood contribution for that $T$-sequence of observations.
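Here is a minimal, self-contained R sketch (my own illustration, with made-up parameter values, not the answerer's code) of the simulated-likelihood idea in the last display: approximate the integral over $\alpha_i$ with R draws of standard normals and maximize the resulting average log-likelihood with optim.

set.seed(1)
N <- 200; Tt <- 5; R <- 100                 # units, time periods, simulation draws
beta <- 1; sigma_a <- 0.8                   # true parameter values (made up)
x     <- matrix(rnorm(N * Tt), N, Tt)
alpha <- rnorm(N, 0, sigma_a)
ystar <- beta * x + alpha + matrix(rnorm(N * Tt), N, Tt)   # alpha recycles down rows: one value per unit
y     <- 1 * (ystar > 0)

a <- matrix(rnorm(N * R), N, R)             # the a_r draws, held fixed across likelihood evaluations

negloglik <- function(par) {
  b <- par[1]; s <- exp(par[2])             # sigma_alpha > 0 via log-parameterization
  ll <- numeric(N)
  for (i in 1:N) {
    p <- pnorm(outer(b * x[i, ], s * a[i, ], "+"))            # Tt x R matrix of G(x_it b + sigma_a a_r)
    lik_r <- apply(p^y[i, ] * (1 - p)^(1 - y[i, ]), 2, prod)  # per-draw likelihood of unit i's sequence
    ll[i] <- log(mean(lik_r))               # average over the R draws, then take logs
  }
  -mean(ll)
}

fit <- optim(c(0, log(0.5)), negloglik, method = "BFGS")
c(beta_hat = fit$par[1], sigma_alpha_hat = exp(fit$par[2]))   # should land near 1 and 0.8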
What is the difference between fixed effect, random effect and mixed effect models?
Not really a formal definition, but I like the following slides: Mixed models and why sociolinguists should use them, from Daniel Ezra Johnson. A brief recap is offered on slide 4. Although the deck mostly focuses on psycholinguistic studies, it is very useful as a first step.
What is the difference between fixed effect, random effect and mixed effect models?
Another very practical perspective on random and fixed effects models comes from econometrics, where they are used for linear regressions on panel data. If you're estimating the association between an explanatory variable and an outcome variable in a dataset with multiple samples per individual/group, this is the framework you want to use. A good example of panel data is yearly measurements from a set of individuals of:
$gender_i$ (gender of the $i$th person)
${\Delta}weight_{it}$ (weight change during year $t$ for person $i$)
$exercise_{it}$ (average daily exercise during year $t$ for person $i$)
If we're trying to understand the relationship between exercise and weight change, we'll set up the following regression:
${\Delta}weight_{it} = \beta_0 \, exercise_{it} + \beta_1 \, gender_i + \alpha_i + \epsilon_{it}$
where $\beta_0$ is the quantity of interest, $\beta_1$ is not interesting (we're just controlling for gender with it), $\alpha_i$ is the per-individual intercept, and $\epsilon_{it}$ is the error term.
In a setup like this there is the risk of endogeneity. This can happen when unmeasured variables (such as marital status) are associated with both exercise and weight change. As explained on p. 16 of this Princeton lecture, a random effects (AKA mixed effects) model is more efficient than a fixed effects model. However, when such endogeneity is present it will incorrectly attribute some of the effect of the unmeasured variable on weight change to exercise, producing an incorrect $\beta_0$ and potentially a higher statistical significance than is valid. In this case the random effects model is not a consistent estimator of $\beta_0$.
A fixed effects model (in its most basic form) controls for any unmeasured variables that are constant over time but vary between individuals by explicitly including a separate intercept term for each individual ($\alpha_i$) in the regression equation. In our example, it will automatically control for confounding effects from gender, as well as any unmeasured confounders (marital status, socioeconomic status, educational attainment, etc.). In fact, gender cannot be included in the regression and $\beta_1$ cannot be estimated by a fixed effects model, since $gender_i$ is collinear with the $\alpha_i$'s.
So, the key question is to determine which model is appropriate. The answer is the Hausman test. To use it we perform both the fixed and random effects regressions, and then apply the Hausman test to see whether their coefficient estimates diverge significantly. If they diverge, endogeneity is at play and a fixed effects model is the best choice. Otherwise, we'll go with random effects.
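A minimal, self-contained sketch in R of that workflow (simulated data with made-up effect sizes, not the answerer's example): fit the within (fixed effects) and random effects models with plm, then compare them with the Hausman test.

library(plm)
set.seed(1)
n <- 100; tt <- 4
id   <- rep(1:n, each = tt)
year <- rep(1:tt, times = n)
a <- rnorm(n)                                    # unobserved, time-constant individual effect
exercise <- 0.5 * a[id] + rnorm(n * tt)          # correlated with the individual effect -> endogeneity
dweight  <- -1 * exercise + a[id] + rnorm(n * tt)
panel <- pdata.frame(data.frame(id, year, exercise, dweight), index = c("id", "year"))
fe <- plm(dweight ~ exercise, data = panel, model = "within")
re <- plm(dweight ~ exercise, data = panel, model = "random")
coef(fe); coef(re)   # the within estimate should sit near -1; the RE estimate is pulled toward zero
phtest(fe, re)       # Hausman test: a small p-value says the estimates diverge, favouring fixed effects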
Explaining to laypeople why bootstrapping works
fwiw the medium length version I usually give goes like this: You want to ask a question of a population but you can't. So you take a sample and ask the question of it instead. Now, how confident you should be that the sample answer is close to the population answer obviously depends on the structure of the population. One way you might learn about this is to take samples from the population again and again, ask them the question, and see how variable the sample answers tended to be. Since this isn't possible, you can either make some assumptions about the shape of the population, or you can use the information in the sample you actually have to learn about it. Imagine you decide to make assumptions, e.g. that it is Normal, or Bernoulli, or some other convenient fiction. Following the previous strategy, you could again learn about how much the answer to your question, when asked of a sample, might vary depending on which particular sample you happened to get, by repeatedly generating samples of the same size as the one you have and asking them the same question. That would be straightforward to the extent that you chose computationally convenient assumptions. (Indeed particularly convenient assumptions plus non-trivial math may allow you to bypass the sampling part altogether, but we will deliberately ignore that here.) This seems like a good idea provided you are happy to make the assumptions. Imagine you are not. An alternative is to take the sample you have and sample from it instead. You can do this because the sample you have is also a population, just a very small discrete one; it looks like the histogram of your data. Sampling 'with replacement' is just a convenient way to treat the sample like it's a population and to sample from it in a way that reflects its shape. This is a reasonable thing to do because not only is the sample you have the best, indeed the only, information you have about what the population actually looks like, but also because most samples will, if they're randomly chosen, look quite like the population they came from. Consequently it is likely that yours does too. For intuition it is important to think about how you could learn about variability by aggregating sampled information that is generated in various ways and on various assumptions. Completely ignoring the possibility of closed form mathematical solutions is important to get clear about this.
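A minimal sketch in R of the "sample from the sample" idea above (my own illustration): treat the one sample you have as the population, redraw from it with replacement many times, and look at how the statistic of interest varies across the redraws.

set.seed(1)
x <- rexp(50)                                    # the one sample we actually have
boot_means <- replicate(5000, mean(sample(x, replace = TRUE)))
sd(boot_means)                                   # bootstrap estimate of the standard error of the mean
quantile(boot_means, c(0.025, 0.975))            # a simple percentile interval for the population mean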
153
Explaining to laypeople why bootstrapping works
+1 to @ConjugatePrior, I just want to bring out one point which is implicit in his answer. The question asks, "if we are resampling from our sample, how is it that we are learning something about the population rather than only about the sample?" Resampling is not done to provide an estimate of the population distribution--we take our sample itself as a model of the population. Rather, resampling is done to provide an estimate of the sampling distribution of the sample statistic in question.
154
Explaining to laypeople why bootstrapping works
This is probably a more technical explanation aimed at people who understand some statistics and mathematics (calculus, at least). Here's a slide from a course on survey bootstraps that I taught some while ago: Some explanations are needed, of course. $T$ is the procedure to obtain the statistic from the existing data (or, to be technically precise, a functional from the distribution function to real numbers; e.g., the mean is $E[X]=\int x {\rm d}F$, where for the sample distribution function $F_n()$, the ${\rm d}F$ is understood as a point mass at a sample point). In the population, denoted by $F()$, application of $T$ gives the parameter of interest $\theta$. Now, we've taken a sample (the first arrow on the top), and have the empirical distribution function $F_n()$ -- we apply $T$ to it to obtain the estimate $\hat\theta_n$. How far is it from $\theta$, we wonder? What is the distribution that the random quantity $\hat\theta_n$ may have around $\theta$? This is the question mark in the lower left of the diagram, and this is the question the bootstrap tries to answer. To restate gung's point, this is not a question about the population, but a question about a particular statistic and its distribution. If we could repeat our sampling procedure, we could get that distribution and learn more. Well, that usually is beyond our capabilities. However, if $F_n$ is close enough to $F$, in a suitable sense, and the mapping $T$ is smooth enough, i.e., small deviations from $F()$ are mapped to numbers close to $\theta$, then we can hope that the bootstrap procedure will work. Namely, we pretend that our distribution is $F_n()$ rather than $F()$, and with that we can entertain all possible samples -- and there will be $n^n$ such samples, which is only practical for $n\le 5$. Let me repeat again: the bootstrap works to create the sampling distribution of $\hat\theta_n^*$ around the "true" parameter $\hat\theta_n$, and we hope that with the two above conditions, this sampling distribution is informative about the sampling distribution of $\hat\theta_n$ around $\theta$: $$ \hat\theta_n^* \mbox{ to } \hat\theta_n \mbox{ is like } \hat\theta_n \mbox{ to } \theta $$ Now, instead of just going one way along the arrows, and losing some information/accuracy along these arrows, we can go back and say something about the variability of $\hat\theta_n^*$ around $\hat\theta_n$. The above conditions are spelled out in utmost technicality in Hall's The Bootstrap and Edgeworth Expansion (1992) book. The understanding of calculus that I said may be required as a prerequisite to staring at this slide is the second assumption concerning smoothness: in more formal language, the functional $T$ must possess a weak derivative. The first condition is, of course, an asymptotic statement: the larger your sample, the closer $F_n$ should become to $F$; and the distances from $\hat\theta_n^*$ to $\hat \theta_n$ should be of the same order of magnitude as those from $\hat\theta_n$ to $\theta$. These conditions may break, and they do break (Canty et al 2006 CJS) in a number of practical situations with weird enough statistics and/or sampling schemes that do not produce empirical distributions that are close enough to $F$. Now, where do those 1000 samples, or whatever the magic number might be, come from? They come from our inability to draw all $n^n$ samples, so we just take a random subset of these.
The rightmost "simulate" arrow states another approximation that we are making on our way to get the distribution of $\hat\theta_n$ around $\theta$, and that is to say that our Monte Carlo simulated distribution of $\hat\theta_n^{(*r)}$ is a good enough approximation of the complete bootstrap distribution of $\hat\theta_n^*$ around $\hat\theta_n$.
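As a small illustration (not from the original answer) of the two approximations described here, this R sketch computes the complete bootstrap distribution of the mean for a tiny hypothetical sample, all $n^n$ resamples, and compares it with the usual Monte Carlo shortcut:

# complete enumeration of all n^n bootstrap resamples for a tiny hypothetical sample (n = 5)
x <- c(2.1, 3.7, 4.0, 5.2, 8.9)
n <- length(x)
idx <- as.matrix(expand.grid(rep(list(1:n), n)))          # all 5^5 = 3125 index vectors
complete_boot <- apply(idx, 1, function(i) mean(x[i]))    # exact bootstrap distribution of the mean

# Monte Carlo approximation with 1000 resamples (the rightmost "simulate" arrow)
mc_boot <- replicate(1000, mean(sample(x, n, replace = TRUE)))

sd(complete_boot)   # exact bootstrap standard error
sd(mc_boot)         # its Monte Carlo approximation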
155
Explaining to laypeople why bootstrapping works
I am answering this question because I agree that this is a difficult thing to do and there are many misconceptions. Efron and Diaconis attempted to do that in their 1983 Scientific American article and in my view they failed. There are several books out now devoted to the bootstrap that do a good job. Efron and Tibshirani do a great job in their article in Statistical Science in 1986. I tried especially hard to make the bootstrap accessible to practitioners in my bootstrap methods book and my introduction to bootstrap with applications to R. Hall's book is great but very advanced and theoretical. Tim Hesterberg has written a great supplemental chapter to one of David Moore's introductory statistics books. The late Clifford Lunneborg had a nice book. Chihara and Hesterberg recently came out with an intermediate level mathematical statistics book that covers the bootstrap and other resampling methods. Even advanced books like Lahiri's or Shao and Tu's give good conceptual explanations. Manly does well with his book that covers permutations and the bootstrap. There is no reason to be puzzled about the bootstrap anymore. It is important to keep in mind that the bootstrap depends on the bootstrap principle: "Sampling with replacement behaves on the original sample the way the original sample behaves on a population." There are examples where this principle fails. It is important to know that the bootstrap is not the answer to every statistical problem. Here are Amazon links to all the books I mentioned and more:
Mathematical Statistics with Resampling and R
Bootstrap Methods and their Application
Bootstrap Methods: A Guide for Practitioners and Researchers
An Introduction to Bootstrap Methods with Applications to R
Resampling Methods for Dependent Data
Randomization, Bootstrap and Monte Carlo Methods in Biology
An Introduction to the Bootstrap
The Practice of Business Statistics Companion Chapter 18: Bootstrap Methods and Permutation Tests
Data Analysis by Resampling: Concepts and Applications
The Jackknife, the Bootstrap, and Other Resampling Plans
The Jackknife and Bootstrap
Permutation, Parametric, and Bootstrap Tests of Hypotheses
The Bootstrap and Edgeworth Expansion
156
Explaining to laypeople why bootstrapping works
Through bootstrapping you are simply taking samples over and over again from the same group of data (your sample data) to estimate how accurate your estimates about the entire population (what really is out there in the real world) are. If you were to take one sample and make estimates on the real population, you might not be able to estimate how accurate your estimates are - we only have one estimate and have not identified how this estimate varies with different samples that we might have encountered. With bootstrapping, we use this main sample to generate multiple samples. For example, if we measured the profit every day over 1000 days we might take random samples from this set. We might take the profit from one random day, record it, take the profit from another random day (which might happen to be the same day as before - sampling with replacement), record it, and so forth, until we get a "new" sample of 1000 days (from the original sample). This "new" sample is not identical to the original sample - indeed we might generate several "new" samples as above. When we look at the variation in the means and other estimates across these samples, we are able to get a reading on how accurate the original estimates were. Edit - in response to comment The "newer" samples are not identical to the first one and the new estimates based on these will vary. This simulates repeated samples of the population. The variations in the estimates of the "newer" samples generated by the bootstrap will shed light on how the sample estimates would vary given different samples from the population. This is in fact how we can try to measure the accuracy of the original estimates. Of course, instead of bootstrapping you might instead take several new samples from the population but this might be infeasible.
157
Explaining to laypeople why bootstrapping works
I realize this is an old question with an accepted answer, but I'd like to provide my view of the bootstrap method. I'm in no way an expert (more of a statistics user, as the OP) and welcome any corrections or comments. I like to view bootstrap as a generalization of the jackknife method. So, let's say you have a sample S of size 100 and estimate some parameter by using a statistic T(S). Now, you would like to know a confidence interval for this point estimate. In case you don't have a model and analytical expression for the standard error, you may go ahead and delete one element from the sample, creating a subsample $S_i$ with element i deleted. Now you can compute $T(S_i)$ and get 100 new estimates of the parameter from which you can compute e.g. standard error and create a confidence interval. This is the jackknife method JK-1. You may consider all subsets of size 98 instead and get JK-2 (2 elements deleted) or JK-3 etc. Now, bootstrap is just a randomized version of this. By doing resampling via selection with replacement you would "delete" a random number of elements (possibly none) and "replace" them by one (or more) replicates. By replacing with replicates the resampled dataset always has the same size. For the jackknife you may ask what the effect of jackknifing on samples of size 99 instead of 100 is, but if the sample size is "sufficiently large" this is likely a non-issue. In the jackknife you never mix delete-1 and delete-2 etc, to make sure the jackknifed estimates are from samples of the same size. You may also consider splitting the sample of size 100 into e.g. 10 samples of size 10. This would in some theoretical aspects be cleaner (independent subsets) but reduces the sample size (from 100 to 10) so much as to be impractical (in most cases). You could also consider partially overlapping subsets of a certain size. All this is handled in an automatic and uniform and random way by the bootstrap method. Further, the bootstrap method gives you an estimate of the sampling distribution of your statistic from the empirical distribution of the original sample, so you can analyze further properties of the statistic besides standard error.
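A small R sketch (hypothetical data, not from the original answer) of the two procedures side by side, estimating the standard error of the mean:

set.seed(7)
x <- rexp(100)     # hypothetical sample of size 100
n <- length(x)

# jackknife (delete-1): n leave-one-out estimates
jack    <- sapply(1:n, function(i) mean(x[-i]))
se_jack <- sqrt((n - 1) / n * sum((jack - mean(jack))^2))

# bootstrap: random resamples with replacement
boot    <- replicate(2000, mean(sample(x, n, replace = TRUE)))
se_boot <- sd(boot)

c(jackknife = se_jack, bootstrap = se_boot, formula = sd(x) / sqrt(n))   # all three agree closely for the mean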
158
Explaining to laypeople why bootstrapping works
A finite sampling of the population approximates the distribution the same way a histogram approximates it. By re-sampling, each bin count is changed and you get a new approximation. Large count values fluctuate less than small count values, both in the original population and in the sampled set. Since you are explaining this to a layperson, you can argue that for large bin counts this is roughly the square root of the bin count in both cases. If I find $20$ redheads and $80$ others out of a sample of $100$, re-sampling would estimate the fluctuation of redheads as $\sqrt{(0.2 \times 0.8) \times 100} = 4$, which is just like assuming that the original population was truly distributed $1:4$. So if we approximate the true probability as the sampled one, we can get an estimate of the sampling error "around" this value. I think it is important to stress that the bootstrap does not uncover "new" data; it is just a convenient, non-parametric way to approximately determine the sample-to-sample fluctuations if the true probability is given by the sampled one.
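To check that number (a sketch, not part of the original answer), the bootstrap fluctuation of the redhead count does come out close to the $\sqrt{(0.2 \times 0.8) \times 100} = 4$ formula:

x <- c(rep(1, 20), rep(0, 80))     # 1 = redhead, in a sample of 100
sqrt(0.2 * 0.8 * 100)              # 4: the formula from the text
boot_counts <- replicate(5000, sum(sample(x, 100, replace = TRUE)))
sd(boot_counts)                    # bootstrap estimate, close to 4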
159
Explaining to laypeople why bootstrapping works
Paraphrasing Fox, I would start by saying that the process of repeatedly resampling from your observed sample has been shown to mimic the process of the original sampling from the whole population.
160
Explaining to laypeople why bootstrapping works
Note that in classic inferential statistics the theoretical entity that connects a sample to the population as a good estimator of the population is the sampling distribution (all the possible samples that could be drawn from the population). The bootstrap method is creating a kind of sampling distribution (a distribution based on multiple samples). Sure, it is a maximum likelihood method, but the basic logic is not that different from that of the traditional probability theory behind classic normal distribution-based statistics.
161
Explaining to laypeople why bootstrapping works
When explaining to beginners I think it helps to take a specific example... Imagine you've got a random sample of 9 measurements from some population. The mean of the sample is 60. Can we be sure that the average of the whole population is also 60? Obviously not because small samples will vary, so the estimate of 60 is likely to be inaccurate. To find out how much samples like this will vary, we can run some experiments - using a method called bootstrapping. The first number in the sample is 74 and the second is 65, so let's imagine a big "pretend" population comprising one ninth 74's, one ninth 65's, and so on. The easiest way to take a random sample from this population is to take a number at random from the sample of nine, then replace it so you have the original sample of nine again and choose another one at random, and so on until you have a "resample" of 9. When I did this, 74 did not appear at all but some of the other numbers appeared twice, and the mean was 54.4. (This is set up on the spreadsheet at http://woodm.myweb.port.ac.uk/SL/resample.xlsx - click on the bootstrap tab at the bottom of the screen.) When I took 1000 resamples in this way their means varied from 44 to 80, with 95% between 48 and 72. Which suggests that there is an error of up to 16-20 units (44 is 16 below the pretend population mean of 60, 80 is 20 units above) in using samples of size 9 to estimate the population mean. and that we can be 95% confident that the error will be 12 or less. So we can be 95% confident that the population mean will be somewhere between 48 and 72. There are a number of assumptions glossed over here, the obvious one being the assumption that the sample gives a useful picture of the population - experience shows this generally works well provided the sample is reasonably large (9 is a bit small but makes it easier to see what's going on). The spreadsheet at http://woodm.myweb.port.ac.uk/SL/resample.xlsx enables you to see individual resamples, plot histograms of 1000 resamples, experiment with larger samples, etc. There's a more detailed explanation in the article at https://arxiv.org/abs/1803.06214.
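The same experiment in a few lines of R (a sketch: the first two values, 74 and 65, come from the example above; the other seven are made up so that the mean is 60):

x <- c(74, 65, 45, 52, 58, 60, 62, 66, 58)
mean(x)                                               # 60
boot_means <- replicate(1000, mean(sample(x, 9, replace = TRUE)))
range(boot_means)                                     # roughly how far off a sample of 9 can be
quantile(boot_means, c(.025, .975))                   # rough 95% interval for the population mean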
162
Explaining to laypeople why bootstrapping works
My point is a very tiny one. The bootstrap works because it exploits, in a computationally intensive way, the main premise of our research agenda. To be more specific, in statistics or biology, or most non-theoretical sciences, we study individuals, thus collecting samples. Yet, from such samples, we want to make inferences about other individuals, who will present themselves to us in the future or in different samples. With the bootstrap, by explicitly founding our modeling on the individual components of our sample, we may better (with fewer assumptions, usually) infer and predict for other individuals.
163
When conducting multiple regression, when should you center your predictor variables & when should you standardize them?
In regression, it is often recommended to center the variables so that the predictors have mean $0$. This makes it easier to interpret the intercept term as the expected value of $Y_i$ when the predictor values are set to their means. Otherwise, the intercept is interpreted as the expected value of $Y_i$ when the predictors are set to 0, which may not be a realistic or interpretable situation (e.g. what if the predictors were height and weight?). Another practical reason for scaling in regression is when one variable has a very large scale, e.g. if you were using population size of a country as a predictor. In that case, the regression coefficients may be on a very small order of magnitude (e.g. $10^{-6}$) which can be a little annoying when you're reading computer output, so you may convert the variable to, for example, population size in millions. The convention that you standardize predictors primarily exists so that the units of the regression coefficients are the same. As @gung alludes to and @MånsT shows explicitly (+1 to both, btw), centering/scaling does not affect your statistical inference in regression models - the estimates are adjusted appropriately and the $p$-values will be the same. Other situations where centering and/or scaling may be useful:
- when you're trying to sum or average variables that are on different scales, perhaps to create a composite score of some kind. Without scaling, it may be the case that one variable has a larger impact on the sum due purely to its scale, which may be undesirable.
- to simplify calculations and notation. For example, the sample covariance matrix of a matrix of values centered by their sample means is simply $X'X$ (up to the factor $1/(n-1)$). Similarly, if a univariate random variable $X$ has been mean centered, then ${\rm var}(X) = E(X^2)$ and the variance can be estimated from a sample by looking at the sample mean of the squares of the observed values.
- related to the aforementioned, PCA can only be interpreted as the singular value decomposition of a data matrix when the columns have first been centered by their means.
Note that scaling is not necessary in the last two bullet points I mentioned and centering may not be necessary in the first bullet I mentioned, so the two do not need to go hand in hand at all times.
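A quick R check of the "inference is unaffected" point (a sketch with simulated data, not from the original answer): centering the predictors changes only the intercept and its meaning, not the slopes or p-values.

set.seed(1)
ht <- rnorm(100, 170, 10)                  # hypothetical height
wt <- rnorm(100, 70, 12)                   # hypothetical weight
y  <- 2 + 0.3 * ht + 0.5 * wt + rnorm(100)

raw      <- lm(y ~ ht + wt)
centered <- lm(y ~ I(ht - mean(ht)) + I(wt - mean(wt)))

coef(raw)                                        # intercept refers to ht = 0, wt = 0
coef(centered)                                   # same slopes; intercept now refers to average ht and wt
all.equal(unname(coef(centered)[1]), mean(y))    # the centered intercept is the fitted value at the means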
164
When conducting multiple regression, when should you center your predictor variables & when should you standardize them?
You have come across a common belief. However, in general, you do not need to center or standardize your data for multiple regression. Different explanatory variables are almost always on different scales (i.e., measured in different units). This is not a problem; the betas are estimated such that they convert the units of each explanatory variable into the units of the response variable appropriately. One thing that people sometimes say is that if you have standardized your variables first, you can then interpret the betas as measures of importance. For instance, if $\beta_1=.6$, and $\beta_2=.3$, then the first explanatory variable is twice as important as the second. While this idea is appealing, unfortunately, it is not valid. There are several issues, but perhaps the easiest to follow is that you have no way to control for possible range restrictions in the variables. Inferring the 'importance' of different explanatory variables relative to each other is a very tricky philosophical issue. None of that is to suggest that standardizing is bad or wrong, just that it typically isn't necessary. The only case I can think of off the top of my head where centering is helpful is before creating power terms. Let's say you have a variable, $X$, that ranges from 1 to 2, but you suspect a curvilinear relationship with the response variable, and so you want to create an $X^2$ term. If you don't center $X$ first, your squared term will be highly correlated with $X$, which could muddy the estimation of the beta. Centering first addresses this issue. (Update added much later:) An analogous case that I forgot to mention is creating interaction terms. If an interaction / product term is created from two variables that are not centered on 0, some amount of collinearity will be induced (with the exact amount depending on various factors). Centering first addresses this potential problem. For a fuller explanation, see this excellent answer from @Affine: Collinearity diagnostics problematic only when the interaction term is included.
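A short illustration in R (a sketch, not part of the original answer) of how much centering helps with power and product terms:

set.seed(3)
x  <- seq(1, 2, length.out = 100)    # the 1-to-2 range mentioned above
xc <- x - mean(x)
cor(x, x^2)                          # ~0.999: x and its square are nearly collinear
cor(xc, xc^2)                        # ~0: after centering, the squared term is no longer collinear with x

z  <- runif(100, 1, 2)
zc <- z - mean(z)
cor(x * z, x)                        # the raw product term is strongly correlated with x
cor(xc * zc, xc)                     # the centered product term is much less so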
165
When conducting multiple regression, when should you center your predictor variables & when should you standardize them?
In addition to the remarks in the other answers, I'd like to point out that the scale and location of the explanatory variables does not affect the validity of the regression model in any way. Consider the model $y=\beta_0+\beta_1x_1+\beta_2x_2+\ldots+\epsilon$. The least squares estimators of $\beta_1, \beta_2,\ldots$ are not affected by shifting. The reason is that these are the slopes of the fitting surface - how much the surface changes if you change $x_1,x_2,\ldots$ one unit. This does not depend on location. (The estimator of $\beta_0$, however, does.) By looking at the equations for the estimators you can see that scaling $x_1$ with a factor $a$ scales $\hat{\beta}_1$ by a factor $1/a$. To see this, note that $$\hat{\beta}_1(x_1)=\frac{\sum_{i=1}^n(x_{1,i}-\bar{x}_1)(y_i-\bar{y})}{\sum_{i=1}^n(x_{1,i}-\bar{x}_1)^2}.$$ Thus $$\hat{\beta}_1(ax_1)=\frac{\sum_{i=1}^n(ax_{1,i}-a\bar{x}_1)(y_i-\bar{y})}{\sum_{i=1}^n(ax_{1,i}-a\bar{x}_1)^2}=\frac{a\sum_{i=1}^n(x_{1,i}-\bar{x}_1)(y_i-\bar{y})}{a^2\sum_{i=1}^n(x_{1,i}-\bar{x}_1)^2}=\frac{\hat{\beta}_1(x_1)}{a}.$$ By looking at the corresponding formula for $\hat{\beta}_2$ (for instance) it is (hopefully) clear that this scaling doesn't affect the estimators of the other slopes. Thus, scaling simply corresponds to scaling the corresponding slopes. As gung points out, some people like to rescale by the standard deviation in hopes that they will be able to interpret how "important" the different variables are. While this practice can be questioned, it can be noted that this corresponds to choosing $a_i=1/s_i$ in the above computations, where $s_i$ is the standard deviation of $x_i$ (which is a strange thing to say to begin with, since the $x_i$ are assumed to be deterministic).
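A quick numerical check of this in R (simulated data, not from the original answer):

set.seed(2)
x1 <- rnorm(200); x2 <- rnorm(200)
y  <- 1 + 3 * x1 - 2 * x2 + rnorm(200)

a <- 10
coef(lm(y ~ x1 + x2))
coef(lm(y ~ I(a * x1) + x2))   # the slope for a*x1 is the original slope divided by a; x2's slope is unchanged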
166
When conducting multiple regression, when should you center your predictor variables & when should you standardize them?
In case you use gradient descent to fit your model, standardizing covariates may speed up convergence (because when you have unscaled covariates, the corresponding parameters may inappropriately dominate the gradient). To illustrate this, some R code:

> objective <- function(par){ par[1]^2+par[2]^2 }  # quadratic function in two variables with a minimum at (0,0)
> optim(c(10,10), objective, method="BFGS")$counts  # returns the number of times the function and its gradient had to be evaluated until convergence
function gradient
      12        3
> objective2 <- function(par){ par[1]^2+0.1*par[2]^2 }  # a transformation of the above function, corresponding to unscaled covariates
> optim(c(10,10), objective2, method="BFGS")$counts
function gradient
      19       10
> optim(c(10,1), objective2, method="BFGS")$counts  # scaling of initial parameters doesn't get you back to original performance
function gradient
      12        8

Also, for some applications of SVMs, scaling may improve predictive performance: Feature scaling in support vector data description.
167
When conducting multiple regression, when should you center your predictor variables & when should you standardize them?
I prefer "solid reasons" for both centering and standardization (they exist very often). In general, they have more to do with the data set and the problem than with the data analysis method. Very often, I prefer to center (i.e. shift the origin of the data) to other points that are physically/chemically/biologically/... more meaningful than the mean (see also Macro's answer), e.g. the mean of a control group blank signal Numerical stability is an algorithm-related reason to center and/or scale data. Also, have a look at the similar question about standardization. Which also covers "center only".
168
When conducting multiple regression, when should you center your predictor variables & when should you standardize them?
To illustrate the numerical stability issue mentioned by @cbeleites, here is an example from Simon Wood on how to "break" lm(). First we'll generate some simple data and fit a simple quadratic curve.

set.seed(1); n <- 100
xx <- sort(runif(n))
y <- .2*(xx-.5) + (xx-.5)^2 + rnorm(n)*.1
x <- xx + 100
b <- lm(y ~ x + I(x^2))
plot(x, y)
lines(x, predict(b), col='red')

But if we add 900 to X, then the result should be pretty much the same except shifted to the right, no? Unfortunately not...

X <- x + 900
B <- lm(y ~ X + I(X^2))
plot(X, y)
lines(X, predict(B), col='blue')

Edit to add to the comment by @Scortchi - if we look at the object returned by lm() we see that the quadratic term has not been estimated and is shown as NA.

> B

Call:
lm(formula = y ~ X + I(X^2))

Coefficients:
(Intercept)            X       I(X^2)
  -139.3927       0.1394           NA

And indeed as suggested by @Scortchi, if we look at the model matrix and try to solve directly, it "breaks".

> X <- model.matrix(b)                    ## get same model matrix used above
> beta.hat <- solve(t(X)%*%X, t(X)%*%y)   ## direct solution of ‘normal equations’
Error in solve.default(t(X) %*% X, t(X) %*% y) :
  system is computationally singular: reciprocal condition number = 3.9864e-19

However, lm() does not give me any warning or error message other than the NAs on the I(X^2) line of summary(B) in R-3.1.1. Other algorithms can of course be "broken" in different ways with different examples.
169
When conducting multiple regression, when should you center your predictor variables & when should you standardize them?
I doubt seriously whether centering or standardizing the original data could really mitigate the multicollinearity problem when squared terms or other interaction terms are included in regression, as some of you, gung in particular, have recommended above. To illustrate my point, let's consider a simple example. Suppose the true specification takes the following form: $$y_i=b_0+b_1x_i+b_2x_i^2+u_i$$ Thus the corresponding OLS equation is given by $$y_i=\hat{y_i}+\hat{u_i}=\hat{b_0}+\hat{b_1}x_i+\hat{b_2}x_i^2+\hat{u_i}$$ where $\hat{y_i}$ is the fitted value of $y_i$, $\hat{u_i}$ is the residual, and $\hat{b_0}$-$\hat{b_2}$ denote the OLS estimates of $b_0$-$b_2$ – the parameters that we are ultimately interested in. For simplicity, let $z_i=x_i^2$ hereafter. Usually, we know $x$ and $x^2$ are likely to be highly correlated, and this would cause the multicollinearity problem. To mitigate this, a popular suggestion would be centering the original data by subtracting the mean of $y_i$ from $y_i$ before adding squared terms. It is fairly easy to show that the mean of $y_i$ is given as follows: $$\bar{y}=\hat{b_0}+\hat{b_1} \bar{x}+\hat{b_2} \bar{z}$$ where $\bar{y}$, $\bar{x}$, $\bar{z}$ denote the means of $y_i$, $x_i$ and $z_i$, respectively. Hence, subtracting $\bar{y}$ from $y_i$ gives $$y_i-\bar{y}=\hat{b_1}(x_i-\bar{x})+\hat{b_2}(z_i-\bar{z})+\hat{u_i}$$ where $y_i-\bar{y}$, $x_i-\bar{x}$, and $z_i-\bar{z}$ are centered variables. $\hat{b_1}$ and $\hat{b_2}$ – the parameters to be estimated – remain the same as those in the original OLS regression. However, it is clear that in my example, the centered RHS variables $x-\bar{x}$ and $z-\bar{z}$ have exactly the same covariance/correlation as the uncentered $x$ and $z$, i.e. $\text{corr}(x, z)=\text{corr}(x-\bar{x}, z-\bar{z})$. In summary, if my understanding of centering is correct, then I do not think centering the data would help mitigate the MC problem caused by including squared terms or other higher-order terms in the regression. I'd be happy to hear your opinions!
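As a quick numerical check of the invariance claim stated above, here is a small R sketch with simulated data (the numbers are arbitrary). It only demonstrates the identity $\text{corr}(x,z)=\text{corr}(x-\bar{x}, z-\bar{z})$ used in this answer, i.e. that shifting both variables leaves their correlation unchanged.

set.seed(123)
x <- rnorm(1000, mean = 5)
z <- x^2
cor(x, z)                        # high correlation between x and x^2
cor(x - mean(x), z - mean(z))    # identical: correlation is invariant to shifting both variables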
170
What happens if the explanatory and response variables are sorted independently before regression?
I'm not sure what your boss thinks "more predictive" means. Many people incorrectly believe that lower $p$-values mean a better / more predictive model. That is not necessarily true (this being a case in point). However, independently sorting both variables beforehand will guarantee a lower $p$-value. On the other hand, we can assess the predictive accuracy of a model by comparing its predictions to new data that were generated by the same process. I do that below in a simple example (coded with R).

options(digits=3)                  # for cleaner output
set.seed(9149)                     # this makes the example exactly reproducible

B1 = .3
N  = 50                            # 50 data
x  = rnorm(N, mean=0, sd=1)        # standard normal X
y  = 0 + B1*x + rnorm(N, mean=0, sd=1)   # cor(x, y) = .31
sx = sort(x)                       # sorted independently
sy = sort(y)
cor(x,y)                           # [1] 0.309
cor(sx,sy)                         # [1] 0.993

model.u = lm(y~x)
model.s = lm(sy~sx)
summary(model.u)$coefficients
#             Estimate Std. Error t value Pr(>|t|)
# (Intercept)    0.021      0.139   0.151    0.881
# x              0.340      0.151   2.251    0.029    # significant
summary(model.s)$coefficients
#             Estimate Std. Error t value Pr(>|t|)
# (Intercept)    0.162     0.0168    9.68 7.37e-13
# sx             1.094     0.0183   59.86 9.31e-47    # wildly significant

u.error = vector(length=N)         # these will hold the output
s.error = vector(length=N)
for(i in 1:N){
  new.x = rnorm(1, mean=0, sd=1)   # data generated in exactly the same way
  new.y = 0 + B1*x + rnorm(N, mean=0, sd=1)
  pred.u = predict(model.u, newdata=data.frame(x=new.x))
  pred.s = predict(model.s, newdata=data.frame(x=new.x))
  u.error[i] = abs(pred.u-new.y)   # these are the absolute values of
  s.error[i] = abs(pred.s-new.y)   # the predictive errors
};  rm(i, new.x, new.y, pred.u, pred.s)
u.s = u.error - s.error            # negative values means the original
                                   # yielded more accurate predictions
mean(u.error)                      # [1] 1.1
mean(s.error)                      # [1] 1.98
mean(u.s<0)                        # [1] 0.68

windows()
layout(matrix(1:4, nrow=2, byrow=TRUE))
plot(x, y,   main="Original data")
abline(model.u, col="blue")
plot(sx, sy, main="Sorted data")
abline(model.s, col="red")
h.u = hist(u.error, breaks=10, plot=FALSE)
h.s = hist(s.error, breaks=9,  plot=FALSE)
plot(h.u, xlim=c(0,5), ylim=c(0,11), main="Histogram of prediction errors",
     xlab="Magnitude of prediction error", col=rgb(0,0,1,1/2))
plot(h.s, col=rgb(1,0,0,1/4), add=TRUE)
legend("topright", legend=c("original","sorted"), pch=15,
       col=c(rgb(0,0,1,1/2), rgb(1,0,0,1/4)))
dotchart(u.s, color=ifelse(u.s<0, "blue", "red"), lcolor="white",
         main="Difference between predictive errors")
abline(v=0, col="gray")
legend("topright", legend=c("u better", "s better"), pch=1, col=c("blue","red"))

The upper left plot shows the original data. There is some relationship between $x$ and $y$ (viz., the correlation is about $.31$). The upper right plot shows what the data look like after independently sorting both variables. You can easily see that the strength of the correlation has increased substantially (it is now about $.99$). However, in the lower plots, we see that the distribution of predictive errors is much closer to $0$ for the model trained on the original (unsorted) data. The mean absolute predictive error for the model that used the original data is $1.1$, whereas the mean absolute predictive error for the model trained on the sorted data is $1.98$—nearly twice as large. That means the sorted data model's predictions are much further from the correct values. The plot in the lower right quadrant is a dot plot. It displays the differences between the predictive error with the original data and with the sorted data. This lets you compare the two corresponding predictions for each new observation simulated. Blue dots to the left are times when the original data were closer to the new $y$-value, and red dots to the right are times when the sorted data yielded better predictions. There were more accurate predictions from the model trained on the original data $68\%$ of the time. The degree to which sorting will cause these problems is a function of the linear relationship that exists in your data. If the correlation between $x$ and $y$ were $1.0$ already, sorting would have no effect and thus not be detrimental. On the other hand, if the correlation were $-1.0$, the sorting would completely reverse the relationship, making the model as inaccurate as possible. If the data were completely uncorrelated originally, the sorting would have an intermediate, but still quite large, deleterious effect on the resulting model's predictive accuracy. Since you mention that your data are typically correlated, I suspect that has provided some protection against the harms intrinsic to this procedure. Nonetheless, sorting first is definitely harmful. To explore these possibilities, we can simply re-run the above code with different values for B1 (using the same seed for reproducibility) and examine the output:

B1 = -5:
cor(x,y)                             # [1] -0.978
summary(model.u)$coefficients[2,4]   # [1] 1.6e-34     (i.e., the p-value)
summary(model.s)$coefficients[2,4]   # [1] 1.82e-42
mean(u.error)                        # [1] 7.27
mean(s.error)                        # [1] 15.4
mean(u.s<0)                          # [1] 0.98

B1 = 0:
cor(x,y)                             # [1] 0.0385
summary(model.u)$coefficients[2,4]   # [1] 0.791
summary(model.s)$coefficients[2,4]   # [1] 4.42e-36
mean(u.error)                        # [1] 0.908
mean(s.error)                        # [1] 2.12
mean(u.s<0)                          # [1] 0.82

B1 = 5:
cor(x,y)                             # [1] 0.979
summary(model.u)$coefficients[2,4]   # [1] 7.62e-35
summary(model.s)$coefficients[2,4]   # [1] 3e-49
mean(u.error)                        # [1] 7.55
mean(s.error)                        # [1] 6.33
mean(u.s<0)                          # [1] 0.44
171
What happens if the explanatory and response variables are sorted independently before regression?
If you want to convince your boss, you can show what is happening with simulated, random, independent $x,y$ data. With R:

n <- 1000
y <- runif(n)
x <- runif(n)
linearModel <- lm(y ~ x)

x_sorted <- sort(x)
y_sorted <- sort(y)
linearModel_sorted <- lm(y_sorted ~ x_sorted)

par(mfrow = c(2,1))
plot(x, y, main = "Random data")
abline(linearModel, col = "red")
plot(x_sorted, y_sorted, main = "Random, sorted data")
abline(linearModel_sorted, col = "red")

Obviously, the sorted results offer a much nicer regression. However, given the process used to generate the data (two independent samples) there is absolutely no chance that one can be used to predict the other.
172
What happens if the explanatory and response variables are sorted independently before regression?
Your intuition is correct: the independently sorted data have no reliable meaning because the inputs and outputs are being randomly mapped to one another rather than according to the observed relationship. There is a (good) chance that the regression on the sorted data will look nice, but it is meaningless in context. Intuitive example: Suppose a data set $(X = \text{age}, Y = \text{height})$ for some population. The graph of the unadulterated data would probably look rather like a logarithmic or power function: growth is fast for children, slows for adolescents, and "asymptotically" approaches one's maximum height for young adults and older. If we sort $x, y$ in ascending order, the graph will probably be nearly linear. Thus, the prediction function would be that people grow taller for their entire lives. I wouldn't bet money on that prediction algorithm.
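A minimal R sketch of this age–height intuition (the growth curve and noise level are made up purely for illustration): fitting a line to the independently sorted data produces a steadily increasing "growth" prediction over the whole age range.

set.seed(42)
age    <- runif(200, min = 2, max = 80)
height <- 75 + 30 * log(pmin(age, 20)) + rnorm(200, sd = 4)   # growth flattens after ~20 years
fit_raw    <- lm(height ~ age)                                # weak slope: adults don't keep growing
fit_sorted <- lm(sort(height) ~ sort(age))                    # clear positive slope at every age
coef(fit_raw); coef(fit_sorted)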
173
What happens if the explanatory and response variables are sorted independently before regression?
Actually, let's make this really obvious and simple. Suppose I conduct an experiment in which I measure out 1 liter of water in a standardized container, and I look at the amount of water remaining in the container $V_i$ as a function of time $t_i$, the loss of water being due to evaporation. Now suppose I obtain the following measurements $(t_i, V_i)$ in hours and liters, respectively: $$(0,1.0), (1,0.9), (2,0.8), (3,0.7), (4,0.6), (5,0.5).$$ This is quite obviously perfectly correlated (and hypothetical) data. But if I were to sort the time and the volume measurements, I would get $$(0,0.5), (1,0.6), (2,0.7), (3,0.8), (4,0.9), (5,1.0).$$ And the conclusion from this sorted data set is that as time increases, the volume of water increases, and moreover, that starting from 1 liter of water, you would get, after 5 hours of waiting, more than 1 liter of water. Isn't that remarkable? Not only is the conclusion opposite of what the original data said, it also suggests we have discovered new physics!
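For completeness, a few lines of R reproducing this toy example with the same six measurements as above: the sign of the fitted slope flips from negative to positive after sorting.

t <- 0:5
V <- c(1.0, 0.9, 0.8, 0.7, 0.6, 0.5)
coef(lm(V ~ t))                # slope -0.1 liters/hour: water evaporates
coef(lm(sort(V) ~ sort(t)))    # slope +0.1 liters/hour: "new physics"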
174
What happens if the explanatory and response variables are sorted independently before regression?
It is a real art, and takes a real understanding of psychology, to be able to convince some people of the error of their ways. Besides all the excellent examples above, a useful strategy is sometimes to show that a person's belief leads to an inconsistency with his or her own beliefs. Or try this approach: find something your boss believes strongly, for example that how people perform on task Y has no relation to how much of attribute X they possess, and show how your boss's own approach would lead to the conclusion of a strong association between X and Y. Capitalize on political/racial/religious beliefs. Face invalidity should have been enough. What a stubborn boss. Be searching for a better job in the meantime. Good luck.
175
What happens if the explanatory and response variables are sorted independently before regression?
This technique is actually amazing. I'm finding all sorts of relationships that I never suspected. For instance, I would not have suspected that the numbers that show up in the Powerball lottery, which it is CLAIMED are random, actually are highly correlated with the opening price of Apple stock on the same day! Folks, I think we're about to cash in big time. :)

> powerball_last_number = scan()
1: 69 66 64 53 65 68 63 64 57 69 40 68
13:
Read 12 items
> # Nov. 18, 14, 11, 7, 4
> # Oct. 31, 28, 24, 21, 17, 14, 10
> # These are powerball dates. Stock opening prices
> # are on same or preceding day.
> appl_stock_open = scan()
1: 115.76 115.20 116.26 121.11 123.13
6: 120.99 116.93 116.70 114.00 111.78
11: 111.29 110.00
13:
Read 12 items
> hold = lm(appl_stock_open ~ powerball_last_number)
> summary(hold)

Coefficients:
                       Estimate Std. Error t value Pr(>|t|)
(Intercept)           112.08555    9.45628  11.853 3.28e-07 ***
powerball_last_number   0.06451    0.15083   0.428    0.678
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 4.249 on 10 degrees of freedom
Multiple R-squared:  0.01796,   Adjusted R-squared:  -0.08024
F-statistic: 0.1829 on 1 and 10 DF,  p-value: 0.6779

Hmm, doesn't seem to have a significant relationship. BUT using the new, improved technique:

> vastly_improved_regression = lm(sort(appl_stock_open) ~ sort(powerball_last_number))
> summary(vastly_improved_regression)

Coefficients:
                             Estimate Std. Error t value Pr(>|t|)
(Intercept)                  91.34418    5.36136  17.038 1.02e-08 ***
sort(powerball_last_number)   0.39815    0.08551   4.656    9e-04 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 2.409 on 10 degrees of freedom
Multiple R-squared:  0.6843,    Adjusted R-squared:  0.6528
F-statistic: 21.68 on 1 and 10 DF,  p-value: 0.0008998

NOTE: This is not meant to be a serious analysis. Just show your manager that they can make ANY two variables significantly related if you sort them both.
176
What happens if the explanatory and response variables are sorted independently before regression?
One more example. Imagine that you have two variables, one connected with eating chocolate and the second one connected with overall well-being. You have a sample of two and your data look like below: $$ \begin{array}{cc} \text{chocolate} & \text{no happiness} \\ \text{no chocolate} & \text{happiness} \\ \end{array} $$ What is the relation of chocolate and happiness based on your sample? And now, change the order of one of the columns - what is the relation after this operation? The same problem can be approached differently. Say that you have a bigger sample, with some number of cases, and you measure two continuous variables: chocolate consumption per day (in grams) and happiness (imagine that you have some way to measure it). If you are interested in whether they are related, you can measure the correlation or use a linear regression model, but sometimes in such cases people simply dichotomize one variable and use it as a grouping factor with a $t$-test (this is not the best and not a recommended approach, but let me use it as an example). So you divide your sample into two groups: with high chocolate consumption and with low chocolate consumption. Next, you compare average happiness in both groups. Now imagine what would happen if you sorted the happiness variable independently of the grouping variable: all the cases with high happiness would go to the high chocolate consumption group, and all the low happiness cases would end up in the low chocolate consumption group -- would such a hypothesis test make any sense? This can be easily extrapolated to regression if you imagine that instead of two groups for chocolate consumption you have $N$ such groups, one for each participant (notice that the $t$-test is related to regression). In bivariate regression or correlation we are interested in the pairwise relations between each $i$-th value of $X$ and $i$-th value of $Y$; changing the order of the observations destroys this relation. If you sort both variables, this always leads them to be more positively correlated with each other, since it will always be the case that if one of the variables increases, the other one also increases (because they are sorted!). Notice that sometimes we actually are interested in changing the order of cases; we do so in resampling methods. For example, we can intentionally shuffle observations multiple times so as to learn something about the null distribution of our data (what our data would look like if there were no pairwise relations), and then we can compare whether our real data are any better than the randomly shuffled data. What your manager does is exactly the opposite -- he intentionally forces the observations to have an artificial structure where there was no structure, which leads to bogus correlations.
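To illustrate the resampling remark at the end, here is a minimal R sketch of a permutation test (the data and variable names are invented for illustration): shuffling one variable many times gives a null distribution of correlations to compare the observed correlation against, which is the legitimate counterpart of the sorting trick.

set.seed(7)
choc  <- rnorm(100, mean = 30, sd = 10)               # hypothetical chocolate consumption (g/day)
happy <- 5 + 0.03 * choc + rnorm(100)                 # hypothetical happiness score, weak true relation
obs   <- cor(choc, happy)
null  <- replicate(2000, cor(choc, sample(happy)))    # break the pairing -> null distribution
mean(abs(null) >= abs(obs))                           # permutation p-value for the observed correlation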
177
What happens if the explanatory and response variables are sorted independently before regression?
A simple example that maybe your manager could understand: Let's say you have Coin Y and Coin X, and you flip each of them 100 times. Then you want to predict whether getting a heads with Coin X (IV) can increase the chance of getting a heads with Coin Y (DV). Without sorting, the relationship will be none, because Coin X's outcome shouldn't affect the Coin Y's outcome. With sorting, relationship will be nearly perfect. How does it make sense to conclude that you have a good chance of getting a heads on a coin flip if you have just flipped a heads with a different coin?
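A quick R simulation of the two-coins example (a sketch; 100 fair flips each): the raw flips are uncorrelated, while the sorted flips look strongly related.

set.seed(3)
coin_x <- rbinom(100, size = 1, prob = 0.5)
coin_y <- rbinom(100, size = 1, prob = 0.5)
cor(coin_x, coin_y)                  # close to 0: the coins are independent
cor(sort(coin_x), sort(coin_y))      # close to 1: all the 0s and 1s have been lined up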
178
What happens if the explanatory and response variables are sorted independently before regression?
Plenty of good counterexamples in here. Let me just add a paragraph about the heart of the problem. You are looking for a correlation between $X_i$ and $Y_i$. That means that $X$ and $Y$ both tend to be large for the same $i$ and small for the same $i$. So a correlation is a property of $X_1$ linked with $Y_1$, $X_2$ linked with $Y_2$, and so on. By sorting $X$ and $Y$ independently you (in most cases) lose the pairing. $X_1$ will no longer be paired up with $Y_1$. So the correlation of the sorted values will not measure the connection between $X_1$ and $Y_1$ that you are after. Actually, let me add a paragraph about why it "works" as well. When you sort both lists, call the new sorted lists $X_a, X_b, \ldots$ and $Y_a, Y_b, \ldots$; then $X_a$ will be the smallest $X$ value and $Y_a$ will be the smallest $Y$ value, while $X_z$ will be the largest $X$ and $Y_z$ will be the largest $Y$. Then you ask the new lists whether small and large values co-occur. That is, you ask whether $X_a$ is small when $Y_a$ is small, and whether $X_z$ is large when $Y_z$ is large. Of course the answer is yes, and of course we will get almost perfect correlation. Does that tell you anything about $X_1$'s relationship with $Y_1$? No.
179
What happens if the explanatory and response variables are sorted independently before regression?
Actually, the test that is described (i.e. sort the X values and the Y values independently and regress one against the other) DOES test something, assuming that the (X,Y) are sampled as independent pairs from a bivariate distribution. It just isn't a test of what your manager wants to test. It is essentially checking the linearity of a QQ-plot, comparing the marginal distribution of the Xs with the marginal distribution of the Ys. In particular, the 'data' will fall close to a straight line if the density of the Xs (f(x)) is related to the density of the Ys (g(y)) this way: $f(x) = g((y-a)/b)$ for some constants $a$ and $b>0$. This puts them in a location-scale family. Unfortunately this is not a method to get predictions...
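In other words, plotting sort(x) against sort(y) is just an empirical QQ-plot. A small R sketch of the point (simulated data; the specific distributions are only for illustration): when the two marginals belong to the same location-scale family, the sorted-vs-sorted plot is close to a straight line even though x and y are independent.

set.seed(5)
x <- rnorm(300, mean = 0, sd = 1)
y <- rnorm(300, mean = 10, sd = 3)     # independent of x, but from the same (normal) family
plot(sort(x), sort(y))                 # essentially qqplot(x, y): close to a straight line
coef(lm(sort(y) ~ sort(x)))            # slope near 3, intercept near 10: location and scale, not prediction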
180
What happens if the explanatory and response variables are sorted independently before regression?
Strange that the most obvious counterexample is still not present among the answers in its simplest form. Let $Y = -X$. If you sort the variables separately and fit a regression model on such data, you should obtain something like $\hat Y \approx X$ (because when the variables are sorted, larger values of one must correspond to larger values of the other). This is a kind-of a "direct inverse" of the pattern you might be willing to find here.
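A two-line R version of this counterexample (any roughly symmetric, zero-centred sample will do): the true slope is $-1$, but the fit on the separately sorted data comes out near $+1$.

set.seed(4)
x <- rnorm(1000)
y <- -x                          # exact negative relationship
coef(lm(y ~ x))                  # slope exactly -1
coef(lm(sort(y) ~ sort(x)))      # slope close to +1: the relationship has been flipped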
181
What happens if the explanatory and response variables are sorted independently before regression?
It's a QQ-plot, isn't it? You'd use it to compare the distribution of x vs. y. If you'd plot sorted outcomes of a relationship like $x \sim x^2$, the plot would be crooked, which indicates that $x$ and $x^2$ for some sampling of $x$s have different distributions. The linear regression is usually less reasonable (exceptions exist, see other answers); but the geometry of tails and of distribution of errors tells you how far from similar the distributions are.
182
What happens if the explanatory and response variables are sorted independently before regression?
You are right. Your manager would find "good" results! But they are meaningless. What you get when you sort them independently is that the two either increase or decrease similarly and this gives a semblance of a good model. But the two variables have been stripped of their actual relationship and the model is incorrect.
183
What happens if the explanatory and response variables are sorted independently before regression?
I have a simple intuition why this is actually a good idea if the function is monotone: Imagine you know the inputs $x_1, x_2,\cdots, x_n$ and they are ranked, i.e. $x_i<x_{i+1}$, and assume $f:\Re\mapsto\Re$ is the unknown function we want to estimate. You can define a random model $y_i = f(x_i) + \varepsilon_i$ where the $\varepsilon_i$ are independently sampled as follows: $$ \varepsilon_i = f(x_{i+\delta}) - f(x_i) $$ where $\delta$ is uniformly sampled from the discrete set $\{-\Delta,-\Delta+1, \cdots, \Delta-1, \Delta\}$. Here, $\Delta\in\mathbb{N}$ controls the variance. For example, $\Delta=0$ gives no noise, and $\Delta=n$ gives independent inputs and outputs. With this model in mind, the proposed "sorting" method of your boss makes perfect sense: If you rank the data, you somehow reduce this type of noise and the estimation of $f$ should become better under mild assumptions. In fact, a more advanced model would assume that the $\varepsilon_i$ are dependent, so that we cannot observe the same output twice. In such a case, the sorting method could even be optimal. This might have a strong connection with random ranking models, such as Mallows's random permutations. PS: I find it amazing how an apparently simple question can lead to interesting new ways of rethinking standard models. Please thank your boss!
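A minimal R sketch of this noise model (the quadratic $f$ and the value of $\Delta$ are arbitrary choices for illustration): because $f$ is monotone and the $x_i$ are already ranked, re-pairing the sorted $y$ with the sorted $x$ cannot increase the squared distance to the true curve.

set.seed(2)
n <- 200; Delta <- 10
x <- sort(runif(n))
f <- function(x) x^2                             # some monotone f (hypothetical)
delta <- sample(-Delta:Delta, n, replace = TRUE)
idx   <- pmin(pmax(seq_len(n) + delta, 1), n)    # index i + delta, clipped to 1..n
y <- f(x[idx])                                   # y_i = f(x_i) + eps_i with eps_i = f(x_{i+delta}) - f(x_i)
mean((y       - f(x))^2)                         # error of the observed pairing
mean((sort(y) - f(x))^2)                         # no larger: sorting re-pairs y with the already-sorted x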
184
What happens if the explanatory and response variables are sorted independently before regression?
Say you have these points on a circle of radius 5. You calculate the correlation:

import pandas as pd

s1 = [(-5, 0), (-4, -3), (-4, 3), (-3, -4), (-3, 4), (0, 5), (0, -5),
      (3, -4), (3, 4), (4, -3), (4, 3), (5, 0)]
df1 = pd.DataFrame(s1, columns=["x", "y"])
print(df1.corr())

   x  y
x  1  0
y  0  1

Then you sort your x- and y-values and do the correlation again:

s2 = [(-5, -5), (-4, -4), (-4, -4), (-3, -3), (-3, -3), (0, 0), (0, 0),
      (3, 3), (3, 3), (4, 4), (4, 4), (5, 5)]
df2 = pd.DataFrame(s2, columns=["x", "y"])
print(df2.corr())

   x  y
x  1  1
y  1  1

By this manipulation, you change a data set with 0.0 correlation to one with 1.0 correlation. That's a problem.
185
What happens if the explanatory and response variables are sorted independently before regression?
Let me play Devil's Advocate here. I think many answers have made convincing cases that the boss' procedure is fundamentally mistaken. At the same time, I offer a counter-example that illustrates that the boss may have actually seen results improve with this mistaken transformation.

I think that acknowledging that this procedure might've "worked" for the boss could begin a more-persuasive argument: Sure, it did work, but only under these lucky circumstances that usually won't hold. Then we can show -- as in the excellent accepted answer -- how bad it can be when we're not lucky. Which is most of the time. In isolation, showing the boss how bad it can be might not persuade him because he might have seen a case where it does improve things, and figure that our fancy argument must have a flaw somewhere.

I found this data online, and sure enough, it appears that the regression is improved by the independent sorting of X and Y because: a) the data is highly positively correlated, and b) OLS really doesn't do well with extreme (high-leverage) outliers. The height and weight have a correlation of 0.19 with the outlier included, 0.77 with the outlier excluded, and 0.78 with X and Y independently sorted.

x <- read.csv ("https://vincentarelbundock.github.io/Rdatasets/csv/car/Davis.csv", header=TRUE)
plot (weight ~ height, data=x)
lm1 <- lm (weight ~ height, data=x)

xx <- x
xx$weight <- sort (xx$weight)
xx$height <- sort (xx$height)
plot (weight ~ height, data=xx)
lm2 <- lm (weight ~ height, data=xx)

plot (weight ~ height, data=x)
abline (lm1)
abline (lm2, col="red")

plot (x$height, x$weight)
points (xx$height, xx$weight, col="red")

So it appears to me that the regression model on this dataset is improved by the independent sorting (black versus red line in first graph), and there is a visible relationship (black versus red in the second graph), due to the particular dataset being highly (positively) correlated and having the right kind of outliers that harm the regression more than the shuffling that occurs when you independently sort x and y. Again, not saying independently sorting does anything sensible in general, nor that it's the correct answer here. Just that the boss might have seen something like this that happened to work under just the right circumstances.
186
What happens if the explanatory and response variables are sorted independently before regression?
Sorting the columns of the following table independently also makes it look "better":

name,country
Alice,DE
Daniel,US
Christian,DE
Bernadette,US

->

name,country
Alice,DE
Bernadette,DE
Christian,US
Daniel,US

Now, all females are from Germany, and all males are from the US. We can now much more nicely predict the gender by just knowing the country. Isn't that great? /s
187
What happens if the explanatory and response variables are sorted independently before regression?
If he has preselected the variables to be monotone, it actually is fairly robust. Google "improper linear models" and "Robin Dawes" or "Howard Wainer." Dawes and Wainer talk about alternate ways of choosing coefficients. John Cook has a short column (http://www.johndcook.com/blog/2013/03/05/robustness-of-equal-weights/) on it.
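As a rough illustration of the equal-weights idea (purely a toy simulation of my own, not from Dawes or Wainer), predictions built by z-scoring the predictors and simply summing them often track the outcome nearly as well as the OLS fit:

import numpy as np

rng = np.random.default_rng(1)
n, k = 500, 4
X = rng.normal(size=(n, k))
y = X @ np.array([1.0, 0.8, 0.6, 0.4]) + rng.normal(scale=2.0, size=n)

# OLS predictions
design = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)
ols_pred = design @ beta

# "Improper" equal-weight predictions: z-score each predictor and add them up
Z = (X - X.mean(axis=0)) / X.std(axis=0)
equal_pred = Z.sum(axis=1)

print(np.corrcoef(y, ols_pred)[0, 1], np.corrcoef(y, equal_pred)[0, 1])
# the two correlations are usually close, with OLS only slightly ahead in-sample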
188
What happens if the explanatory and response variables are sorted independently before regression?
I thought about it, and it seems there is some structure here based on order statistics. I checked, and the manager's approach is not as nuts as it sounds; see "Order Statistics Correlation Coefficient as a Novel Association Measurement With Applications to Biosignal Analysis" (http://www.researchgate.net/profile/Weichao_Xu/publication/3320558_Order_Statistics_Correlation_Coefficient_as_a_Novel_Association_Measurement_With_Applications_to_Biosignal_Analysis/links/0912f507ed6f94a3c6000000.pdf). From the abstract: "We propose a novel correlation coefficient based on order statistics and rearrangement inequality. The proposed coefficient represents a compromise between the Pearson's linear coefficient and the two rank-based coefficients, namely Spearman's rho and Kendall's tau. Theoretical derivations show that our coefficient possesses the same basic properties as the three classical coefficients. Experimental studies based on four models and six biosignals show that our coefficient performs better than the two rank-based coefficients when measuring linear associations; whereas it is well able to detect monotone nonlinear associations like the two rank-based coefficients. Extensive statistical analyses also suggest that our new coefficient has superior anti-noise robustness, small biasedness, high sensitivity to changes in association, accurate time-delay detection ability, fast computational speed, and robustness under monotone nonlinear transformations."
189
How to normalize data to 0-1 range?
If you want to normalize your data, you can do so as you suggest and simply calculate the following: $$z_i=\frac{x_i-\min(x)}{\max(x)-\min(x)}$$ where $x=(x_1,...,x_n)$ and $z_i$ is now your $i^{th}$ normalized data point. As a proof of concept (although you did not ask for it) here is some R code and accompanying graph to illustrate this point:

# Example Data
x = sample(-100:100, 50)

# Normalized Data
normalized = (x - min(x)) / (max(x) - min(x))

# Histogram of example data and normalized data
par(mfrow = c(1, 2))
hist(x, breaks = 10, xlab = "Data", col = "lightblue", main = "")
hist(normalized, breaks = 10, xlab = "Normalized Data", col = "lightblue", main = "")
190
How to normalize data to 0-1 range?
The general one-line formula to linearly rescale data values having observed min and max into a new arbitrary range min' to max' is newvalue= (max'-min')/(max-min)*(value-max)+max' or newvalue= (max'-min')/(max-min)*(value-min)+min'.
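A direct translation of this formula into a small Python helper (the function name and argument order are just my own choice):

def rescale(value, old_min, old_max, new_min=0.0, new_max=1.0):
    """Linearly map value from [old_min, old_max] to [new_min, new_max]."""
    return (new_max - new_min) / (old_max - old_min) * (value - old_min) + new_min

print(rescale(15, 10, 20))          # 0.5
print(rescale(15, 10, 20, -1, 1))   # 0.0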
191
How to normalize data to 0-1 range?
Here is my PHP implementation for normalisation:

function normalize($value, $min, $max) {
    $normalized = ($value - $min) / ($max - $min);
    return $normalized;
}

But while I was building my own artificial neural networks, I needed to transform the normalized output back to the original data to get good readable output for the graph.

function denormalize($normalized, $min, $max) {
    $denormalized = ($normalized * ($max - $min) + $min);
    return $denormalized;
}

$int = 12;
$max = 20;
$min = 10;
$normalized = normalize($int, $min, $max);            // 0.2
$denormalized = denormalize($normalized, $min, $max); // 12

Denormalisation uses the following formula: $x (\text{max} - \text{min}) + \text{min}$
192
How to normalize data to 0-1 range?
Division by zero

One thing to keep in mind is that max - min could equal zero. In this case, you would not want to perform that division. The case where this would happen is when all values in the list you're trying to normalize are the same. To normalize such a list, each item would be 1 / length.

// JavaScript
function normalize(list) {
    var minMax = list.reduce((acc, value) => {
        if (value < acc.min) {
            acc.min = value;
        }
        if (value > acc.max) {
            acc.max = value;
        }
        return acc;
    }, {min: Number.POSITIVE_INFINITY, max: Number.NEGATIVE_INFINITY});

    return list.map(value => {
        // Verify that you're not about to divide by zero
        if (minMax.max === minMax.min) {
            return 1 / list.length;
        }
        var diff = minMax.max - minMax.min;
        return (value - minMax.min) / diff;
    });
}

Example:

normalize([3, 3, 3, 3]); // output => [0.25, 0.25, 0.25, 0.25]
193
How to normalize data to 0-1 range?
Try this. It is consistent with the function scale:

normalize <- function(x) {
    x <- as.matrix(x)
    minAttr = apply(x, 2, min)
    maxAttr = apply(x, 2, max)
    x <- sweep(x, 2, minAttr, FUN = "-")
    x = sweep(x, 2, maxAttr - minAttr, "/")
    attr(x, 'normalized:min') = minAttr
    attr(x, 'normalized:max') = maxAttr
    return (x)
}
194
How to normalize data to 0-1 range?
The min-max answer is right, but here is a suggestion: what if your data later contain values outside the range seen in training? You could use a squashing technique, which guarantees the result never goes out of range: the bulk of the values are rescaled roughly linearly, while values near (or beyond) the min and max of the range are squashed instead of mapped linearly. The size of the out-of-range gap you reserve should be directly proportional to your degree of confidence that out-of-range values will occur. For more information, you can google "squashing the out-of-range numbers" and refer to the data preparation book by Dorian Pyle.
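I don't have Pyle's exact formula at hand, so the following Python snippet is only a generic sketch of the idea: values inside the training range are mapped smoothly and monotonically, while anything outside it is squashed by a logistic function, so nothing can ever leave (0, 1). The slope constant 2.0 is an arbitrary illustrative choice.

import numpy as np

def squash_to_unit(x, train_min, train_max):
    """Map x into (0, 1); monotone, roughly linear near the middle of
    [train_min, train_max], smoothly squashed outside it (never out of range)."""
    center = (train_max + train_min) / 2.0
    halfwidth = (train_max - train_min) / 2.0
    return 1.0 / (1.0 + np.exp(-2.0 * (x - center) / halfwidth))

print(squash_to_unit(np.array([0, 50, 100, 150, -40]), 0, 100))
# stays strictly inside (0, 1), even for the out-of-range values 150 and -40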
195
How to normalize data to 0-1 range?
Select a cumulative probability distribution F. Then F(x) is between 0 and 1 for every x.
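For instance, a small Python illustration (the choice of a fitted normal CDF, or of the empirical CDF via ranks, is just one possible F):

import numpy as np
from scipy.stats import norm, rankdata

x = np.array([3.0, -1.5, 7.2, 0.4, 12.9])

# Option 1: a normal CDF with the sample mean and standard deviation
u_normal = norm.cdf(x, loc=x.mean(), scale=x.std(ddof=1))

# Option 2: the empirical CDF via ranks (no distributional assumption)
u_empirical = rankdata(x) / (len(x) + 1)

print(u_normal)      # every value lies strictly between 0 and 1
print(u_empirical)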
196
Difference between logit and probit models
They mainly differ in the link function. In logit: $\Pr(Y=1 \mid X) = [1 + e^{-X'\beta}]^{-1}$. In probit: $\Pr(Y=1 \mid X) = \Phi(X'\beta)$, where $\Phi$ is the cumulative distribution function of the standard normal. Put another way, the logistic distribution has slightly heavier tails, i.e. the probit curve approaches the axes more quickly than the logit curve. The logit has an easier interpretation than the probit: logistic regression can be interpreted as modelling log odds (e.g. those who smoke more than 25 cigarettes a day are 6 times more likely to die before 65 years of age). Usually people start the modelling with logit. You could use the likelihood value of each model to decide between logit and probit.
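A quick numerical check of how close the two links are; the scaling constant 1.702 is the one usually quoted in the literature, and the snippet is purely illustrative:

import numpy as np
from scipy.stats import norm
from scipy.special import expit   # logistic function 1 / (1 + exp(-z))

z = np.linspace(-4, 4, 801)
gap = np.abs(norm.cdf(z) - expit(1.702 * z))
print(gap.max())   # about 0.01 -- the rescaled logit and the probit are nearly identical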
197
Difference between logit and probit models
A standard linear model (e.g., a simple regression model) can be thought of as having two 'parts'. These are called the structural component and the random component. For example: $$ Y=\beta_0+\beta_1X+\varepsilon \\ \text{where } \varepsilon\sim\mathcal{N}(0,\sigma^2) $$ The first two terms (that is, $\beta_0+\beta_1X$) constitute the structural component, and the $\varepsilon$ (which indicates a normally distributed error term) is the random component. When the response variable is not normally distributed (for example, if your response variable is binary) this approach may no longer be valid. The generalized linear model (GLiM) was developed to address such cases, and logit and probit models are special cases of GLiMs that are appropriate for binary variables (or multi-category response variables with some adaptations to the process). A GLiM has three parts, a structural component, a link function, and a response distribution. For example: $$ g(\mu)=\beta_0+\beta_1X $$ Here $\beta_0+\beta_1X$ is again the structural component, $g()$ is the link function, and $\mu$ is a mean of a conditional response distribution at a given point in the covariate space. The way we think about the structural component here doesn't really differ from how we think about it with standard linear models; in fact, that's one of the great advantages of GLiMs. Because for many distributions the variance is a function of the mean, having fit a conditional mean (and given that you stipulated a response distribution), you have automatically accounted for the analog of the random component in a linear model (N.B.: this can be more complicated in practice). The link function is the key to GLiMs: since the distribution of the response variable is non-normal, it's what lets us connect the structural component to the response--it 'links' them (hence the name). It's also the key to your question, since the logit and probit are links (as @vinux explained), and understanding link functions will allow us to intelligently choose when to use which one. Although there can be many link functions that can be acceptable, often there is one that is special. Without wanting to get too far into the weeds (this can get very technical) the predicted mean, $\mu$, will not necessarily be mathematically the same as the response distribution's canonical location parameter; the link function that does equate them is the canonical link function. The advantage of this "is that a minimal sufficient statistic for $\beta$ exists" (German Rodriguez). The canonical link for binary response data (more specifically, the binomial distribution) is the logit. However, there are lots of functions that can map the structural component onto the interval $(0,1)$, and thus be acceptable; the probit is also popular, but there are yet other options that are sometimes used (such as the complementary log log, $\ln(-\ln(1-\mu))$, often called 'cloglog'). Thus, there are lots of possible link functions and the choice of link function can be very important. The choice should be made based on some combination of: Knowledge of the response distribution, Theoretical considerations, and Empirical fit to the data. Having covered a little of conceptual background needed to understand these ideas more clearly (forgive me), I will explain how these considerations can be used to guide your choice of link. (Let me note that I think @David's comment accurately captures why different links are chosen in practice.) 
To start with, if your response variable is the outcome of a Bernoulli trial (that is, $0$ or $1$), your response distribution will be binomial, and what you are actually modeling is the probability of an observation being a $1$ (that is, $\pi(Y=1)$). As a result, any function that maps the real number line, $(-\infty,+\infty)$, to the interval $(0,1)$ will work. From the point of view of your substantive theory, if you are thinking of your covariates as directly connected to the probability of success, then you would typically choose logistic regression because it is the canonical link. However, consider the following example: You are asked to model high_Blood_Pressure as a function of some covariates. Blood pressure itself is normally distributed in the population (I don't actually know that, but it seems reasonable prima facie), nonetheless, clinicians dichotomized it during the study (that is, they only recorded 'high-BP' or 'normal'). In this case, probit would be preferable a-priori for theoretical reasons. This is what @Elvis meant by "your binary outcome depends on a hidden Gaussian variable". Another consideration is that both logit and probit are symmetrical, if you believe that the probability of success rises slowly from zero, but then tapers off more quickly as it approaches one, the cloglog is called for, etc. Lastly, note that the empirical fit of the model to the data is unlikely to be of assistance in selecting a link, unless the shapes of the link functions in question differ substantially (of which, the logit and probit do not). For instance, consider the following simulation: set.seed(1) probLower = vector(length=1000) for(i in 1:1000){ x = rnorm(1000) y = rbinom(n=1000, size=1, prob=pnorm(x)) logitModel = glm(y~x, family=binomial(link="logit")) probitModel = glm(y~x, family=binomial(link="probit")) probLower[i] = deviance(probitModel)<deviance(logitModel) } sum(probLower)/1000 [1] 0.695 Even when we know the data were generated by a probit model, and we have 1000 data points, the probit model only yields a better fit 70% of the time, and even then, often by only a trivial amount. Consider the last iteration: deviance(probitModel) [1] 1025.759 deviance(logitModel) [1] 1026.366 deviance(logitModel)-deviance(probitModel) [1] 0.6076806 The reason for this is simply that the logit and probit link functions yield very similar outputs when given the same inputs. The logit and probit functions are practically identical, except that the logit is slightly further from the bounds when they 'turn the corner', as @vinux stated. (Note that to get the logit and the probit to align optimally, the logit's $\beta_1$ must be $\approx 1.7$ times the corresponding slope value for the probit. In addition, I could have shifted the cloglog over slightly so that they would lay on top of each other more, but I left it to the side to keep the figure more readable.) Notice that the cloglog is asymmetrical whereas the others are not; it starts pulling away from 0 earlier, but more slowly, and approaches close to 1 and then turns sharply. A couple more things can be said about link functions. First, considering the identity function ($g(\eta)=\eta$) as a link function allows us to understand the standard linear model as a special case of the generalized linear model (that is, the response distribution is normal, and the link is the identity function). 
It's also important to recognize that whatever transformation the link instantiates is properly applied to the parameter governing the response distribution (that is, $\mu$), not the actual response data. Finally, because in practice we never have the underlying parameter to transform, in discussions of these models, often what is considered to be the actual link is left implicit and the model is represented by the inverse of the link function applied to the structural component instead. That is: $$ \mu=g^{-1}(\beta_0+\beta_1X) $$ For instance, logistic regression is usually represented: $$ \pi(Y)=\frac{\exp(\beta_0+\beta_1X)}{1+\exp(\beta_0+\beta_1X)} $$ instead of: $$ \ln\left(\frac{\pi(Y)}{1-\pi(Y)}\right)=\beta_0+\beta_1X $$ For a quick and clear, but solid, overview of the generalized linear model, see chapter 10 of Fitzmaurice, Laird, & Ware (2004), (on which I leaned for parts of this answer, although since this is my own adaptation of that--and other--material, any mistakes would be my own). For how to fit these models in R, check out the documentation for the function ?glm in the base package. (One final note added later:) I occasionally hear people say that you shouldn't use the probit, because it can't be interpreted. This is not true, although the interpretation of the betas is less intuitive. With logistic regression, a one unit change in $X_1$ is associated with a $\beta_1$ change in the log odds of 'success' (alternatively, an $\exp(\beta_1)$-fold change in the odds), all else being equal. With a probit, this would be a change of $\beta_1\text{ }z$'s. (Think of two observations in a dataset with $z$-scores of 1 and 2, for example.) To convert these into predicted probabilities, you can pass them through the normal CDF, or look them up on a $z$-table. (+1 to both @vinux and @Elvis. Here I have tried to provide a broader framework within which to think about these things and then using that to address the choice between logit and probit.)
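To make the interpretation point above concrete, here is a small Python sketch (the coefficient values are made up) of turning probit and logit coefficients into predicted probabilities; note how the same one-unit change in $X_1$ shifts the probability by different amounts depending on where you start:

import numpy as np
from scipy.stats import norm
from scipy.special import expit

b0, b1 = -0.5, 0.8          # hypothetical fitted intercept and slope

for x1 in (0.0, 1.0, 2.0):
    eta = b0 + b1 * x1
    # probit: pass the linear predictor through the normal CDF
    # logit:  pass it through the inverse-logit
    print(x1, norm.cdf(eta), expit(eta))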
198
Difference between logit and probit models
In addition to vinux's answer, which already covers the most important points: the coefficients $\beta$ in the logit regression have natural interpretations in terms of odds ratios; probit regression is the natural model when you think that your binary outcome depends on a hidden Gaussian variable $Z = X' \beta + \epsilon\ $ [eq. 1] with $\epsilon \sim \mathcal N(0,1)$ in a deterministic manner: $Y = 1$ exactly when $Z > 0$. More generally, and more naturally, probit regression is the more natural model if you think that the outcome is $1$ exactly when some $Z_0 = X' \beta_0 + \epsilon_0$ exceeds a threshold $c$, with $\epsilon \sim \mathcal N(0,\sigma^2)$. It is easy to see that this can be reduced to the aforementioned case: just rescale $Z_0$ as $Z = {1\over \sigma}(Z_0-c)$; it's easy to check that equation [eq. 1] still holds (rescale the coefficients and translate the intercept). These models have been defended, for example, in medical contexts, where $Z_0$ would be an unobserved continuous variable, and $Y$ e.g. a disease which appears when $Z_0$ exceeds some "pathological threshold". Both logit and probit models are only models. "All models are wrong, some are useful", as Box once said! Both models will allow you to detect the existence of an effect of $X$ on the outcome $Y$; except in some very special cases, none of them will be "really true", and their interpretation should be done with caution.
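A small simulation of this latent-variable view (the coefficient values are made up, and statsmodels is just one convenient choice for the fit): thresholding a hidden Gaussian at 0 produces binary data for which a probit fit recovers the coefficients, at least approximately:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 5000
x = rng.normal(size=n)
z = 0.5 + 1.2 * x + rng.normal(size=n)   # hidden Gaussian Z = X'beta + eps
y = (z > 0).astype(int)                  # Y = 1 exactly when Z > 0

fit = sm.Probit(y, sm.add_constant(x)).fit(disp=0)
print(fit.params)   # should be close to (0.5, 1.2)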
199
Difference between logit and probit models
Regarding your statement

I'm more interested here in knowing when to use logistic regression, and when to use probit

There are already many answers here that bring up things to consider when choosing between the two, but there is one important consideration that hasn't been stated yet: when your interest is in looking at within-cluster associations in binary data using mixed effects logistic or probit models, there is a theoretical grounding for preferring the probit model. This is, of course, assuming that there is no a priori reason for preferring the logistic model (e.g. if you're doing a simulation and know it to be the true model).

First, to see why this is true, note that both of these models can be viewed as thresholded continuous regression models. As an example consider the simple linear mixed effects model for the observation $i$ within cluster $j$: $$ y^{\star}_{ij} = \mu + \eta_{j} + \varepsilon_{ij} $$ where $\eta_j \sim N(0,\sigma^2)$ is the cluster $j$ random effect and $\varepsilon_{ij}$ is the error term. Then both the logistic and probit regression models are equivalently formulated as being generated from this model and thresholding at 0: $$ y_{ij} = \begin{cases} 1 & \text{if} \ \ \ y^{\star}_{ij}\geq 0\\ \\ 0 &\text{if} \ \ \ y^{\star}_{ij}<0 \end{cases} $$ If the $\varepsilon_{ij}$ term is normally distributed, you have a probit regression, and if it is logistically distributed you have a logistic regression model. Since the scale is not identified, these residual errors are specified as standard normal and standard logistic, respectively.

Pearson (1900) showed that if multivariate normal data were generated and thresholded to be categorical, the correlations between the underlying variables were still statistically identified - these correlations are termed polychoric correlations and, specific to the binary case, they are termed tetrachoric correlations. This means that, in a probit model, the intraclass correlation coefficient of the underlying normally distributed variables: $$ {\rm ICC} = \frac{ \hat{\sigma}^{2} }{\hat{\sigma}^{2} + 1 } $$ is identified, which means that in the probit case you can fully characterize the joint distribution of the underlying latent variables.

In the logistic model the random effect variance is still identified, but it does not fully characterize the dependence structure (and therefore the joint distribution), since it is a mixture between a normal and a logistic random variable that does not have the property that it is fully specified by its mean and covariance matrix. Noting this odd parametric assumption for the underlying latent variables makes interpretation of the random effects in the logistic model less clear in general.
200
Difference between logit and probit models
An important point that has not been addressed in the previous (excellent) answers is the actual estimation step. Multinomial logit models have an error distribution that is easy to integrate, leading to a closed-form expression of the choice probability. The density function of the normal distribution is not so easily integrated, so multinomial probit models typically require numerical approximation or simulation. So while both models are abstractions of real world situations, logit is usually faster to use on larger problems (multiple alternatives or large datasets). To see this more clearly, the probability of a particular outcome being selected is a function of the $x$ predictor variables and the $\varepsilon$ error terms (following Train) $$ P = \int I[\varepsilon > -\beta'x] f(\varepsilon)d\varepsilon $$ where $I$ is an indicator function, 1 if selected and zero otherwise. Evaluating this integral depends heavily on the assumed error distribution $f(\varepsilon)$: a logistic distribution in the logit model, and a normal distribution in the probit model. For the logit model, this becomes $$ P=\int_{\varepsilon=-\beta'x}^{\infty} f(\varepsilon)d\varepsilon = 1- F(-\beta'x) = 1-\dfrac{1}{1+\exp(\beta'x)} = \dfrac{\exp(\beta'x)}{1+\exp(\beta'x)} $$ No such convenient closed form exists for probit models.
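To make the integral concrete, here is a short Monte Carlo sketch (the value of $\beta'x$ is made up): estimate $P(\varepsilon > -\beta'x)$ by simulation and compare it with the closed-form logistic expression; for normal errors there is no simple algebraic form, but the normal CDF (or simulation) does the job in the binary case:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
xb = 0.9                                   # hypothetical value of beta'x

# Logit: simulate P(eps > -beta'x) with logistic errors, compare to the closed form
eps_logit = rng.logistic(size=1_000_000)
print(np.mean(eps_logit > -xb), 1 - 1 / (1 + np.exp(xb)))   # both about 0.71

# Probit: normal errors; no algebraic closed form, use the normal CDF or simulation
eps_probit = rng.standard_normal(1_000_000)
print(np.mean(eps_probit > -xb), norm.cdf(xb))               # both about 0.82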