601
Intuitive explanation for dividing by $n-1$ when calculating standard deviation?
Generally using "n" in the denominator gives smaller values than the population variance which is what we want to estimate. This especially happens if the small samples are taken. In the language of statistics, we say that the sample variance provides a “biased” estimate of the population variance and needs to be made "unbiased". If you are looking for an intuitive explanation, you should let your students see the reason for themselves by actually taking samples! Watch this, it precisely answers your question. https://www.youtube.com/watch?v=xslIhnquFoE
602
Intuitive explanation for dividing by $n-1$ when calculating standard deviation?
I think it's worth pointing out the connection to Bayesian estimation. Suppose you assume your data is Gaussian, and so you measure the mean $\mu$ and variance $\sigma^2$ of a sample of $n$ points. You want to draw conclusions about the population. The Bayesian approach would be to evaluate the posterior predictive distribution over the sample, which is a generalized Student's T distribution (the origin of the T-test). This distribution has mean $\mu$ and variance $$\sigma^2 \left(\frac{n+1}{n-1}\right),$$ which is even larger than the typical correction. (It has $2n$ degrees of freedom.) The generalized Student's T distribution has three parameters and makes use of all three of your statistics. If you decide to throw out some information, you can further approximate your data using a two-parameter normal distribution as described in your question. From a Bayesian standpoint, you can imagine that uncertainty in the hyperparameters of the model (distributions over the mean and variance) causes the variance of the posterior predictive to be greater than the population variance.
603
Intuitive explanation for dividing by $n-1$ when calculating standard deviation?
I'm jumping VERY late into this, but would like to offer an answer that is possibly more intuitive than others, albeit incomplete. As others asserted, the population mean ($\mu$) and the sample mean ($\overline{X}$) are going to differ (the larger the sample size, the smaller the difference). Let $e$ be the difference (or error) between the population and sample means: $$ e = \mu - \overline{X} $$ After rearranging: $$ \overline{X} = \mu - e $$ Thus: $$ (X_i-\overline{X})^2 = (X_i-(\mu - e))^2 = (X_i - \mu + e)^2 $$ In other words, $(X_i-\overline{X})^2$ conceals an error: $(X_i - \mu + e)^2$. What does this result in? The table below shows a population of $\{2, 4, 6\}$, so $\mu = 4$, and three possible sample means ($\overline{X}$): $4\ (e = 0)$, $3.5\ (e = 0.5)$, and $4.5\ (e = -0.5)$. The (non-bold) numeric cells show the squared difference. For example, with $X_1 = 2$ and $\overline{X} = 3.5$, $(X_i-\overline{X})^2 = 2.25$. The bottom row shows the sum of squares (the numerator in $\frac{\sum(X_i-\overline{X})^2}{n}$), and as you can see, whenever there's an error ($e \neq 0$), the sum of squares is "overestimated". To compensate for this, we have to take away something from the denominator. $$ \begin{array}{|c|c|c|c|} \hline & \overline{X} = \mu = 4 & \overline{X} = 3.5 & \overline{X} = 4.5 \\ \hline X_1 = 2 & 4 & 2.25 & 6.25 \\ \hline X_2 = 4 & 0 & 0.25 & 0.25 \\ \hline X_3 = 6 & 4 & 6.25 & 2.25 \\ \hline \sum(X_i-\overline{X})^2 & \textbf{8} & \textbf{8.75} & \textbf{8.75} \\ \hline \end{array} $$
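A tiny check of the table's arithmetic (my own addition, not from the original answer), using the same population $\{2, 4, 6\}$ and the same three candidate means:

```python
import numpy as np

population = np.array([2.0, 4.0, 6.0])   # mu = 4
for m in (4.0, 3.5, 4.5):                # the three X-bar columns of the table
    sq = (population - m) ** 2           # squared deviations from the candidate mean
    print(m, sq, sq.sum())
# 4.0 -> squared deviations [4, 0, 4],          sum 8.0
# 3.5 -> squared deviations [2.25, 0.25, 6.25], sum 8.75
# 4.5 -> squared deviations [6.25, 0.25, 2.25], sum 8.75
```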
604
Intuitive explanation for dividing by $n-1$ when calculating standard deviation?
Here's a very good overview and full proof. In the more general case, note that the sample mean is not the same as the population mean. One's sample observations are naturally going to be closer on average to the sample mean than to the population mean, resulting in the average $(x-\bar{x})^2$ value underestimating the average $(x-\mu)^2$ value. Thus, $s^2_{biased}$ generally underestimates $\sigma^2$, with the difference between the two more pronounced when the sample size is small.
605
Intuitive explanation for dividing by $n-1$ when calculating standard deviation?
My goodness, it's getting complicated! I thought the simple answer was: if you have all the data points you can use "n", but if you have a "sample" then, assuming it's a random sample, you've got more sample points from inside the standard deviation than from outside (the definition of standard deviation). You just don't have enough data outside to be sure you randomly get all the data points you need. Dividing by n-1 helps expand the estimate toward the "real" standard deviation.
606
Why is accuracy not the best measure for assessing classification models?
Most of the other answers focus on the example of unbalanced classes. Yes, this is important. However, I argue that accuracy is problematic even with balanced classes. Frank Harrell has written about this on his blog: Classification vs. Prediction and Damage Caused by Classification Accuracy and Other Discontinuous Improper Accuracy Scoring Rules.

Essentially, his argument is that the statistical component of your exercise ends when you output a probability for each class of your new sample. Mapping these predicted probabilities $(\hat{p}, 1-\hat{p})$ to a 0-1 classification, by choosing a threshold beyond which you classify a new observation as 1 vs. 0, is not part of the statistics any more. It is part of the decision component. And here, you need the probabilistic output of your model - but also considerations like: What are the consequences of deciding to treat a new observation as class 1 vs. 0? Do I then send out a cheap marketing mail to all 1s? Or do I apply an invasive cancer treatment with big side effects? What are the consequences of treating a "true" 0 as 1, and vice versa? Will I tick off a customer? Subject someone to unnecessary medical treatment? Are my "classes" truly discrete? Or is there actually a continuum (e.g., blood pressure), where clinical thresholds are in reality just cognitive shortcuts? If so, how far beyond a threshold is the case I'm "classifying" right now? Or does a low-but-positive probability of being class 1 actually mean "get more data" or "run another test"?

Depending on the consequences of your decision, you will use a different threshold to make the decision. If the action is invasive surgery, you will require a much higher probability for your classification of the patient as suffering from something than if the action is to recommend two aspirin. Or you might even have three different decisions although there are only two classes (sick vs. healthy): "go home and don't worry" vs. "run another test because the one we have is inconclusive" vs. "operate immediately".

The correct way of assessing predicted probabilities $(\hat{p}, 1-\hat{p})$ is not to compare them to a threshold, map them to $(0,1)$ based on the threshold, and then assess the transformed $(0,1)$ classification. Instead, one should use proper scoring rules. These are loss functions that map predicted probabilities and corresponding observed outcomes to loss values, which are minimized in expectation by the true probabilities $(p,1-p)$. The idea is that we take the average of the scoring rule evaluated on multiple (ideally many) observed outcomes and the corresponding predicted class membership probabilities, as an estimate of the expectation of the scoring rule.

Note that "proper" here has a precisely defined meaning - there are improper scoring rules, proper scoring rules, and finally strictly proper scoring rules. Scoring rules as such are loss functions of predictive densities and outcomes. Proper scoring rules are scoring rules that are minimized in expectation if the predictive density is the true density. Strictly proper scoring rules are scoring rules that are only minimized in expectation if the predictive density is the true density.

As Frank Harrell notes, accuracy is an improper scoring rule. (More precisely, accuracy is not even a scoring rule at all: see my answer to Is accuracy an improper scoring rule in a binary classification setting?) This can be seen, e.g., if we have no predictors at all and just a flip of an unfair coin with probabilities $(0.6,0.4)$. Accuracy is maximized if we classify everything as the first class and completely ignore the 40% probability that any outcome might be in the second class. (Here we see that accuracy is problematic even for balanced classes.) Proper scoring rules will prefer a $(0.6,0.4)$ prediction to the $(1,0)$ one in expectation. In particular, accuracy is discontinuous in the threshold: moving the threshold a tiny little bit may make one (or multiple) predictions change classes and change the entire accuracy by a discrete amount. This makes little sense.

More information can be found in Frank's two blog posts linked above, as well as in Chapter 10 of Frank Harrell's Regression Modeling Strategies. (This is shamelessly cribbed from an earlier answer of mine.) EDIT: My answer to Example when using accuracy as an outcome measure will lead to a wrong conclusion gives a hopefully illustrative example where maximizing accuracy can lead to wrong decisions even for balanced classes.
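To make the unfair-coin example concrete, here is a small simulation I am adding (not part of the original answer): with $P(\text{class 1}) = 0.6$ and no predictors, the honest $(0.6, 0.4)$ forecast and the hard $(1, 0)$ forecast earn the same accuracy, but the Brier score (a proper scoring rule) prefers the honest forecast.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.random(100_000) < 0.6            # unfair coin: P(class 1) = 0.6

for p_hat in (0.6, 1.0):                 # honest forecast vs. hard (1, 0) forecast
    brier = np.mean((p_hat - y) ** 2)    # squared error on the class-1 probability
    acc = np.mean((p_hat >= 0.5) == y)   # accuracy after thresholding at 0.5
    print(f"p_hat={p_hat}: Brier ~ {brier:.3f}, accuracy ~ {acc:.3f}")
# The Brier score prefers the honest 0.6 forecast (~0.24 vs ~0.40);
# accuracy cannot tell the two forecasts apart (~0.60 for both).
```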
607
Why is accuracy not the best measure for assessing classification models?
When we use accuracy, we assign equal cost to false positives and false negatives. When the data set is imbalanced - say it has 99% of instances in one class and only 1% in the other - there is a great way to lower the cost. Predict that every instance belongs to the majority class, get an accuracy of 99% and go home early. The problem starts when the actual costs that we assign to each type of error are not equal. If we deal with a rare but fatal disease, the cost of failing to diagnose the disease of a sick person is much higher than the cost of sending a healthy person to more tests. In general, there is no single best measure. The best measure is derived from your needs. In a sense, it is not a machine learning question, but a business question. It is common for two people to use the same data set but choose different metrics due to different goals. Accuracy is a great metric. Actually, most metrics are great and I like to evaluate many metrics. However, at some point you will need to decide between using model A or model B. There you should use a single metric that best fits your need. For extra credit, choose this metric before the analysis, so you won't be distracted when making the decision.
608
Why is accuracy not the best measure for assessing classification models?
The problem with accuracy

Standard accuracy is defined as the ratio of correct classifications to the number of classifications done. \begin{align*} accuracy := \frac{\text{correct classifications}}{\text{number of classifications}} \end{align*} It is thus an overall measure over all classes, and as we'll shortly see it's not a good measure to tell an oracle apart from an actually useful test. An oracle is a classification function that returns a random guess for each sample. Likewise, we want to be able to rate the classification performance of our classification function. Accuracy can be a useful measure if we have the same number of samples per class, but if we have an imbalanced set of samples accuracy isn't useful at all. Even more so, a test can have a high accuracy but actually perform worse than a test with a lower accuracy.

If we have a distribution of samples such that $90\%$ of samples belong to class $\mathcal{A}$, $5\%$ belong to $\mathcal{B}$ and another $5\%$ belong to $\mathcal{C}$, then the following classification function will have an accuracy of $0.9$: \begin{align*} classify(sample) := \begin{cases} \mathcal{A} & \text{if }\top \\ \end{cases} \end{align*} Yet, given that we know how $classify$ works, it is obvious that it cannot tell the classes apart at all. Likewise, we can construct a classification function \begin{align*} classify(sample) := \text{guess} \begin{cases} \mathcal{A} & \text{with p } = 0.96 \\ \mathcal{B} & \text{with p } = 0.02 \\ \mathcal{C} & \text{with p } = 0.02 \\ \end{cases} \end{align*} which has an accuracy of $0.96 \cdot 0.9 + 0.02 \cdot 0.05 \cdot 2 = 0.866$ and will not always predict $\mathcal{A}$, but still, given that we know how $classify$ works, it is obvious that it cannot tell classes apart. Accuracy in this case only tells us how good our classification function is at guessing. This means that accuracy is not a good measure to tell an oracle apart from a useful test.

Accuracy per Class

We can compute the accuracy individually per class by giving our classification function only samples from the same class, counting the number of correct and incorrect classifications, and then computing $accuracy := \text{correct}/(\text{correct} + \text{incorrect})$. We repeat this for every class. If we have a classification function that can accurately recognize class $\mathcal{A}$ but will output a random guess for the other classes, then this results in an accuracy of $1.00$ for $\mathcal{A}$ and an accuracy of $0.33$ for the other classes. This already provides us a much better way to judge the performance of our classification function. An oracle always guessing the same class will produce a per-class accuracy of $1.00$ for that class, but $0.00$ for the other classes. If our test is useful, all the accuracies per class should be $>0.5$. Otherwise, our test isn't better than chance. However, accuracy per class does not take into account false positives. Even though our classification function has a $100\%$ accuracy for class $\mathcal{A}$, there will also be false positives for $\mathcal{A}$ (such as a $\mathcal{B}$ wrongly classified as an $\mathcal{A}$).

Sensitivity and Specificity

In medical tests, sensitivity is defined as the ratio between people correctly identified as having the disease and the number of people actually having the disease. Specificity is defined as the ratio between people correctly identified as healthy and the number of people who are actually healthy. The number of people actually having the disease is the number of true positive test results plus the number of false negative test results. The number of actually healthy people is the number of true negative test results plus the number of false positive test results.

Binary Classification

In binary classification problems there are two classes $\mathcal{P}$ and $\mathcal{N}$. $T_{n}$ refers to the number of samples that were correctly identified as belonging to class $n$ and $F_{n}$ refers to the number of samples that were falsely identified as belonging to class $n$. In this case sensitivity and specificity are defined as follows: \begin{align*} sensitivity := \frac{T_{\mathcal{P}}}{T_{\mathcal{P}}+F_{\mathcal{N}}} \\ specificity := \frac{T_{\mathcal{N}}}{T_{\mathcal{N}}+F_{\mathcal{P}}} \end{align*} $T_{\mathcal{P}}$ being the true positives, $F_{\mathcal{N}}$ being the false negatives, $T_{\mathcal{N}}$ being the true negatives and $F_{\mathcal{P}}$ being the false positives. Thinking in terms of negatives and positives is fine for medical tests, but in order to get a better intuition we should not think in terms of negatives and positives but in terms of generic classes $\alpha$ and $\beta$. Then, we can say that the number of samples correctly identified as belonging to $\alpha$ is $T_{\alpha}$ and the number of samples that actually belong to $\alpha$ is $T_{\alpha} + F_{\beta}$. The number of samples correctly identified as not belonging to $\alpha$ is $T_{\beta}$ and the number of samples actually not belonging to $\alpha$ is $T_{\beta} + F_{\alpha}$. This gives us the sensitivity and specificity for $\alpha$, but we can also apply the same thing to the class $\beta$. The number of samples correctly identified as belonging to $\beta$ is $T_{\beta}$ and the number of samples actually belonging to $\beta$ is $T_{\beta} + F_{\alpha}$. The number of samples correctly identified as not belonging to $\beta$ is $T_{\alpha}$ and the number of samples actually not belonging to $\beta$ is $T_{\alpha} + F_{\beta}$. We thus get a sensitivity and specificity per class: \begin{align*} sensitivity_{\alpha} := \frac{T_{\alpha}}{T_{\alpha}+F_{\beta}} \\ specificity_{\alpha} := \frac{T_{\beta}}{T_{\beta} + F_{\alpha}} \\ sensitivity_{\beta} := \frac{T_{\beta}}{T_{\beta}+F_{\alpha}} \\ specificity_{\beta} := \frac{T_{\alpha}}{T_{\alpha} + F_{\beta}} \\ \end{align*} We observe, however, that $sensitivity_{\alpha} = specificity_{\beta}$ and $specificity_{\alpha} = sensitivity_{\beta}$. This means that if we only have two classes we don't need sensitivity and specificity per class.

N-Ary Classification

Sensitivity and specificity per class aren't useful if we only have two classes, but we can extend them to multiple classes. Sensitivity and specificity are defined as: \begin{align*} \text{sensitivity} := \frac{\text{true positives}}{\text{true positives} + \text{false negatives}} \\ \text{specificity} := \frac{\text{true negatives}}{\text{true negatives} + \text{false positives}} \\ \end{align*} The true positives are simply $T_{n}$, the false negatives are simply $\sum_{i}(F_{n,i})$ and the false positives are simply $\sum_{i}(F_{i,n})$. Finding the true negatives is much harder, but we can say that if we correctly classify something as belonging to a class different from $n$, it counts as a true negative. This means we have at least $\sum_{i}(T_{i}) - T_{n}$ true negatives. However, these aren't all the true negatives. All the wrong classifications for a class different from $n$ are also true negatives, because they correctly weren't identified as belonging to $n$. $\sum_{i}(\sum_{k}(F_{i,k}))$ represents all wrong classifications. From this we have to subtract the cases where the input class was $n$, meaning we have to subtract the false negatives for $n$, which is $\sum_{i}(F_{n,i})$, but we also have to subtract the false positives for $n$ because they are false positives and not true negatives, so we also subtract $\sum_{i}(F_{i,n})$, finally getting $\sum_{i}(T_{i}) - T_{n} + \sum_{i}(\sum_{k}(F_{i,k})) - \sum_{i}(F_{n,i}) - \sum_{i}(F_{i,n})$. As a summary we have: \begin{align*} \text{true positives} := T_{n} \\ \text{true negatives} := \sum_{i}(T_{i}) - T_{n} + \sum_{i}(\sum_{k}(F_{i,k})) - \sum_{i}(F_{n,i}) - \sum_{i}(F_{i,n}) \\ \text{false positives} := \sum_{i}(F_{i,n}) \\ \text{false negatives} := \sum_{i}(F_{n,i}) \end{align*} \begin{align*} sensitivity(n) := \frac{T_{n}}{T_{n} + \sum_{i}(F_{n,i})} \\ specificity(n) := \frac{\sum_{i}(T_{i}) - T_{n} + \sum_{i}(\sum_{k}(F_{i,k})) - \sum_{i}(F_{n,i}) - \sum_{i}(F_{i,n})}{\sum_{i}(T_{i}) - T_{n} + \sum_{i}(\sum_{k}(F_{i,k})) - \sum_{i}(F_{n,i})} \end{align*}

Introducing Confidence

We define a $confidence^{\top}$, which is a measure of how confident we can be that the reply of our classification function is actually correct. $T_{n} + \sum_{i}(F_{i,n})$ are all cases where the classification function replied with $n$, but only $T_{n}$ of those are correct. We thus define \begin{align*} confidence^{\top}(n) := \frac{T_{n}}{T_{n}+\sum_{i}(F_{i,n})} \end{align*} But can we also define a $confidence^{\bot}$, a measure of how confident we can be that, if our classification function responds with a class different from $n$, the sample actually wasn't an $n$? Well, we get $\sum_{i}(\sum_{k}(F_{i,k})) - \sum_{i}(F_{i,n}) + \sum_{i}(T_{i}) - T_{n}$ such responses, all of which are correct except $\sum_{i}(F_{n,i})$. Thus, we define \begin{align*} confidence^{\bot}(n) = \frac{\sum_{i}(\sum_{k}(F_{i,k})) - \sum_{i}(F_{i,n}) + \sum_{i}(T_{i}) - T_{n} - \sum_{i}(F_{n,i})}{\sum_{i}(\sum_{k}(F_{i,k})) - \sum_{i}(F_{i,n}) + \sum_{i}(T_{i}) - T_{n}} \end{align*}
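As an illustration (my own sketch, not part of the original answer), here is how the per-class counts and the sensitivity/specificity formulas above can be computed from a toy confusion matrix whose rows are true classes and whose columns are predicted classes. The shortcut $TN = \text{total} - TP - FN - FP$ is algebraically the same as the longer true-negative expression above.

```python
import numpy as np

# Toy 3-class confusion matrix: rows = true class, columns = predicted class.
# Diagonal entries are the T_n; the off-diagonal entry (i, k) is F_{i,k}.
C = np.array([[50,  3,  2],
              [ 4, 40,  6],
              [ 1,  5, 44]])

for n in range(C.shape[0]):
    tp = C[n, n]
    fn = C[n, :].sum() - tp       # sum_i F_{n,i}: true class n, predicted something else
    fp = C[:, n].sum() - tp       # sum_i F_{i,n}: predicted n, true class something else
    tn = C.sum() - tp - fn - fp   # neither truly n nor predicted as n
    print(f"class {n}: sensitivity = {tp / (tp + fn):.3f}, "
          f"specificity = {tn / (tn + fp):.3f}")
```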
609
Why is accuracy not the best measure for assessing classification models?
Here is a somewhat adversarial counter-example, where accuracy is better than a proper scoring rule, based on @Benoit_Sanchez's neat thought experiment: You own an egg shop and each egg you sell generates a net revenue of 2 dollars. Each customer who enters the shop may either buy an egg or leave without buying any. For some customers you can decide to make a discount and you will only get 1 dollar revenue but then the customer will always buy. You plug a webcam that analyses the customer behaviour with features such as "sniffs the eggs", "holds a book with omelette recipes"... and classify them into "wants to buy at 2 dollars" (positive) and "wants to buy only at 1 dollar" (negative) before he leaves. If your classifier makes no mistake, then you get the maximum revenue you can expect. If it's not perfect, then: for every false positive you lose 1 dollar because the customer leaves and you didn't try to make a successful discount; for every false negative you lose 1 dollar because you make a useless discount. Then the accuracy of your classifier is exactly how close you are to the maximum revenue. It is the perfect measure.

So say we record the amount of time the customer spends "sniffing eggs" and "holding a book with omelette recipes" and make ourselves a classification task: This is actually my version of Brian Ripley's synthetic benchmark dataset, but let's pretend it is the data for our task. As this is a synthetic task, I can work out the probabilities of class membership according to the true data generating process: Unfortunately it is upside-down because I couldn't work out how to fix it in MATLAB, but please bear with me. Now in practice, we won't get a perfect model, so here is a model with an error (I have just perturbed the true posterior probabilities with a Gaussian bump). And here is another one, with a bump in a different place. Now the Brier score is a proper scoring rule, and it gives a slightly lower (better) score for the second model (because the perturbation is in a region of slightly lower density). However, the perturbation in the first model is well away from the decision boundary, and so that one has a higher accuracy. Since in this particular application the accuracy is equal to our financial gain in dollars, the Brier score is selecting the wrong model, and we will lose money.

Vapnik's advice that it is often better to form a purely discriminative classifier directly (rather than estimate a probability and threshold it) is based on this sort of situation. If all we are interested in is making a binary decision, then we don't really care what the classifier does away from the decision boundary, so we shouldn't waste resources modelling features of the data distribution that don't affect the decision. This is a Laconic "if" though. If it is a classification task with fixed misclassification costs, no covariate shift and known and constant operational class priors, then this approach may indeed be better (and the success of the SVM in many practical applications is some evidence of that). However, many applications are not like that: we may not know ahead of time what the misclassification costs are, or equivalently the operational class frequencies. In those applications we are much better off using a probabilistic classifier and setting the thresholds appropriately according to operational conditions.

Whether accuracy is a good performance metric depends on the needs of the application; there is no "one size fits all" policy. We need to understand the tools we use and be aware of their advantages and pitfalls, and consider the purpose of the exercise in choosing the right tool from the toolbox. In this example, the problem with the Brier score is that it ignores the true needs of the application, and no amount of adjusting the threshold will compensate for its selection of the wrong model. It is also important to make a distinction between performance evaluation and model selection - they are not the same thing, and sometimes (often?) it is better to have a proper scoring rule for model selection in order to achieve maximum performance according to your metric of real interest (e.g. accuracy).
610
Why is accuracy not the best measure for assessing classification models?
Imbalanced classes in your dataset

To be short: imagine 99% of one class (say apples) and 1% of another class (say bananas) in your data set. My super duper algorithm gets an astonishing 99% accuracy on this data set; check it out:

return "it's an apple"

It will be right 99% of the time and therefore gets 99% accuracy. Can I sell you my algorithm? Solution: don't use an absolute measure (accuracy), but a relative-to-each-class measure (there are a lot out there, like ROC AUC).
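A quick numeric sketch of this point (my own addition; I use balanced accuracy as one simple per-class alternative, since the always-apple rule produces no scores to feed into ROC AUC):

```python
import numpy as np

y_true = np.array([0] * 990 + [1] * 10)  # 99% apples (0), 1% bananas (1)
y_pred = np.zeros_like(y_true)           # the "it's an apple" classifier

accuracy = np.mean(y_pred == y_true)                                     # 0.99
per_class_recall = [np.mean(y_pred[y_true == c] == c) for c in (0, 1)]   # [1.0, 0.0]
balanced_accuracy = np.mean(per_class_recall)                            # 0.5 = chance level

print(accuracy, balanced_accuracy)
```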
611
Why is accuracy not the best measure for assessing classification models?
DaL's answer is exactly this. I'll illustrate it with a very simple example about... selling eggs. You own an egg shop and each egg you sell generates a net revenue of $2$ dollars. Each customer who enters the shop may either buy an egg or leave without buying any. For some customers you can decide to make a discount and you will only get $1$ dollar revenue but then the customer will always buy. You plug a webcam that analyses the customer behaviour with features such as "sniffs the eggs", "holds a book with omelette recipes"... and classify them into "wants to buy at $2$ dollars" (positive) and "wants to buy only at $1$ dollar" (negative) before he leaves. If your classifier makes no mistake, then you get the maximum revenue you can expect. If it's not perfect, then: for every false positive you lose $1$ dollar because the customer leaves and you didn't try to make a successful discount; for every false negative you lose $1$ dollar because you make a useless discount. Then the accuracy of your classifier is exactly how close you are to the maximum revenue. It is the perfect measure. But now suppose the discount is $a$ dollars. The costs are: false positive: $a$; false negative: $2-a$. Then you need an accuracy weighted with these numbers as a measure of efficiency of the classifier. If $a=0.001$ for example, the measure is totally different. This situation is likely related to imbalanced data: few customers are ready to pay $2$, while most would pay $0.001$. You don't mind getting many false positives in order to get a few more true positives. You can adjust the threshold of the classifier according to this. If the classifier is about finding relevant documents in a database for example, then you can compare "how much" wasting time reading an irrelevant document is compared to finding a relevant document.
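A minimal sketch of the cost-weighted measure this answer describes (my own illustration; the error counts are made up), showing how the balance between the two error types shifts as the discount $a$ shrinks:

```python
def expected_loss_per_customer(fp, fn, n, a=1.0):
    """Average revenue lost per customer relative to a perfect classifier.

    fp: false positives (customer would only buy at the discounted price a, we asked 2, they left)
    fn: false negatives (customer would have paid 2, we gave a useless discount)
    With a = 1 both errors cost 1 dollar, so this reduces to 1 - accuracy.
    """
    return (a * fp + (2.0 - a) * fn) / n

# Hypothetical counts: 1000 customers, 30 false positives, 50 false negatives.
print(expected_loss_per_customer(30, 50, 1000, a=1.0))    # 0.08  (= the error rate)
print(expected_loss_per_customer(30, 50, 1000, a=0.001))  # ~0.10 (dominated by false negatives)
```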
612
Why is accuracy not the best measure for assessing classification models?
After reading through all the answers above, here is an appeal to common sense. Optimality is a flexible term and always needs to be qualified; in other words, saying a model or algorithm is "optimal" is meaningless, especially in a scientific sense. Whenever anyone says they are scientifically optimizing something, I recommend asking a question like: "In what sense do you define optimality?" This is because in science, unless you can measure something, you cannot optimize (maximize, minimize, etc.) it. As an example, the OP asks the following: "Why is accuracy not the best measure for assessing classification models?" There is an embedded reference to optimization in the word "best" in the question above. "Best" is meaningless in science because "goodness" cannot be measured scientifically. The scientifically correct response to this question is that the OP needs to define what "good" means. In the real world (outside of academic exercises and Kaggle competitions) there is always a cost/benefit structure to consider when using a machine to suggest or make decisions on behalf of, or instead of, people. For classification tasks, that information can be embedded in a cost/benefit matrix with entries corresponding to those of the confusion matrix. Finally, since cost/benefit information is a function of the people who are considering using mechanistic help for their decision-making, it is subject to change with the circumstances, and therefore there is never going to be one fixed measure of optimality that will work for all time in even one problem, let alone all problems (i.e., "models") involving classification. Any measure of optimality for classification which ignores costs does so at its own risk. Even the ROC AUC fails to be cost-invariant.
613
Why is accuracy not the best measure for assessing classification models?
I wrote a whole blog post on the matter: https://blog.ephorie.de/zeror-the-simplest-possible-classifier-or-why-high-accuracy-can-be-misleading ZeroR, the simplest possible classifier, just takes the majority class as the prediction. With highly imbalanced data you will get a very high accuracy, yet if your minority class is the class of interest, this is completely useless. Please find the details and examples in the post. Bottom line: when dealing with imbalanced data you can construct overly simple classifiers that give a high accuracy yet have no practical value whatsoever...
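As a rough sketch of what ZeroR looks like in practice (not taken from the blog post), scikit-learn's DummyClassifier with strategy="most_frequent" implements exactly this majority-class rule; the toy data below is made up:

```python
import numpy as np
from sklearn.dummy import DummyClassifier

X = np.zeros((1000, 1))                   # the features are irrelevant to ZeroR
y = np.array([0] * 950 + [1] * 50)        # 95% majority class

zero_r = DummyClassifier(strategy="most_frequent").fit(X, y)
print(zero_r.score(X, y))                 # 0.95 accuracy, zero practical value
```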
614
Why is accuracy not the best measure for assessing classification models?
Classification accuracy is the number of correct predictions divided by the total number of predictions. Accuracy can be misleading. For example, in a problem where there is a large class imbalance, a model can predict the value of the majority class for all predictions and achieve a high classification accuracy. So, further performance measures are needed such as F1 score and Brier score.
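A minimal sketch with made-up toy data, assuming scikit-learn's standard metric functions, that puts the three measures side by side:

```python
from sklearn.metrics import accuracy_score, f1_score, brier_score_loss

y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100          # always predict the majority class
y_prob = [0.05] * 100       # predicted probability of class 1

print(accuracy_score(y_true, y_pred))    # 0.95   -- misleadingly high
print(f1_score(y_true, y_pred))          # 0.0    -- the minority class is never found (sklearn warns)
print(brier_score_loss(y_true, y_prob))  # 0.0475 -- evaluates the probabilities themselves
```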
615
Why is accuracy not the best measure for assessing classification models?
You may view accuracy as the $R^2$ of classification: an initially appealing metric with which to compare models, but one that falls short under detailed examination. In both cases overfitting can be a major problem. Just as a high $R^2$ might mean that you are modelling the noise rather than the signal, a high accuracy may be a red flag that your model is fitted too rigidly to your test dataset and does not have general applicability. This is especially problematic when you have highly imbalanced classification categories. The most accurate model might be a trivial one which classifies all data as one category (with accuracy equal to the proportion of the most frequent category), but this accuracy will fall spectacularly if you need to classify a dataset with a different true distribution of categories. As others have noted, another problem with accuracy is an implicit indifference to the price of failure, i.e. an assumption that all misclassifications are equal. In practice they are not: the cost of getting the wrong classification is highly subject-dependent, and you may prefer to minimise a particular kind of wrongness rather than maximise accuracy.
616
What are the advantages of ReLU over sigmoid function in deep neural networks?
Two additional major benefits of ReLUs are sparsity and a reduced likelihood of vanishing gradients. But first recall that the definition of a ReLU is $h = \max(0, a)$ where $a = Wx + b$. One major benefit is the reduced likelihood of the gradient vanishing. This arises when $a > 0$. In this regime the gradient has a constant value. In contrast, the gradient of sigmoids becomes increasingly small as the absolute value of $a$ increases. The constant gradient of ReLUs results in faster learning. The other benefit of ReLUs is sparsity. Sparsity arises when $a \le 0$. The more such units exist in a layer, the more sparse the resulting representation. Sigmoids, on the other hand, are always likely to generate some non-zero value, resulting in dense representations. Sparse representations seem to be more beneficial than dense representations.
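A quick numeric illustration of both points, using a few arbitrary input values (a sketch, not from the original answer):

```python
import numpy as np

a = np.array([-10.0, -1.0, 0.5, 1.0, 10.0])

sigmoid = 1.0 / (1.0 + np.exp(-a))
sigmoid_grad = sigmoid * (1.0 - sigmoid)    # shrinks to ~4.5e-05 at |a| = 10
relu = np.maximum(0.0, a)
relu_grad = (a > 0).astype(float)           # exactly 0 or 1, never in between

print(sigmoid_grad)
print(relu_grad)
print(relu)                                 # exact zeros for a <= 0 -> sparse representation
```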
617
What are the advantages of ReLU over sigmoid function in deep neural networks?
Advantage:
- Sigmoid: the activation does not blow up.
- ReLU: no vanishing gradient.
- ReLU: more computationally efficient than sigmoid-like functions, since ReLU just needs to pick $\max(0, x)$ and does not perform the expensive exponential operations used in sigmoids.
- ReLU: in practice, networks with ReLU tend to show better convergence performance than sigmoid (Krizhevsky et al.).

Disadvantage:
- Sigmoid: tends to make the gradient vanish (there is a mechanism that reduces the gradient as "$a$" increases, where "$a$" is the input of the sigmoid function. Gradient of the sigmoid: $S'(a) = S(a)(1-S(a))$. When "$a$" grows very large, $S'(a) = S(a)(1-S(a)) \approx 1\times(1-1) = 0$.)
- ReLU: the activation can blow up (there is no mechanism to constrain the output of the neuron, as "$a$" itself is the output for $a > 0$).
- ReLU: the dying-ReLU problem - if too many pre-activations fall below zero, most of the units (neurons) in a ReLU network will simply output zero, in other words die, which prohibits learning. (This can be handled, to some extent, by using Leaky ReLU instead; see the sketch below.)
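Here is the promised sketch of the dying-ReLU point and of how Leaky ReLU keeps a gradient alive; the pre-activation values and the slope alpha = 0.01 are arbitrary choices for illustration:

```python
import numpy as np

a = np.array([-3.0, -0.7, -1.2, -0.1])      # a "dead" unit: every pre-activation is negative

relu_grad = (a > 0).astype(float)           # all zeros -> no gradient, no weight updates

alpha = 0.01                                # leaky-ReLU slope for a < 0
leaky_out = np.where(a > 0, a, alpha * a)
leaky_grad = np.where(a > 0, 1.0, alpha)    # small but non-zero, so learning can continue

print(relu_grad)    # [0. 0. 0. 0.]
print(leaky_grad)   # [0.01 0.01 0.01 0.01]
```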
618
What are the advantages of ReLU over sigmoid function in deep neural networks?
Just complementing the other answers: Vanishing gradients. The other answers are right to point out that the bigger the input (in absolute value), the smaller the gradient of the sigmoid function. But probably an even more important effect is that the derivative of the sigmoid function is ALWAYS smaller than one. In fact it is at most 0.25! The downside of this is that if you have many layers, you will multiply these gradients, and the product of many values smaller than 1 goes to zero very quickly. Since the state of the art for Deep Learning has shown that more layers help a lot, this disadvantage of the sigmoid function is a game killer. You just can't do Deep Learning with sigmoid. On the other hand, the gradient of the ReLU function is either $0$ for $a < 0$ or $1$ for $a > 0$. That means that you can put in as many layers as you like, because multiplying the gradients will neither vanish nor explode.
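A back-of-the-envelope sketch of the multiplication argument; it deliberately ignores weights and layer structure and only tracks the activation-derivative factor, so treat it as an upper-bound intuition rather than a full analysis:

```python
n_layers = 20
best_case_sigmoid_factor = 0.25 ** n_layers   # ~9.1e-13 even in the most favourable case
relu_factor = 1.0 ** n_layers                 # stays exactly 1 for active units

print(best_case_sigmoid_factor)
print(relu_factor)
```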
619
What are the advantages of ReLU over sigmoid function in deep neural networks?
An advantage of ReLU, other than avoiding the vanishing gradient problem, is that it has a much lower run time. $\max(0,a)$ runs much faster than any sigmoid function (the logistic function, for example, is $1/(1+e^{-a})$, which uses an exponential that is computationally slow when done often). This is true for both the feed-forward and the back-propagation passes, as the gradient of ReLU (0 if $a<0$, else 1) is also very easy to compute compared to the sigmoid's (for the logistic curve, $e^a/(1+e^a)^2$). ReLU does have the disadvantage of dying cells, which limits the capacity of the network. To overcome this, just use a variant of ReLU such as leaky ReLU, ELU, etc. if you notice the problem described above.
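A rough timing sketch of this claim; the absolute numbers depend entirely on the machine and on the array size chosen here, so only the relative ordering is meant to be suggestive:

```python
import timeit
import numpy as np

a = np.random.randn(1_000_000)

relu_time = timeit.timeit(lambda: np.maximum(0.0, a), number=100)
sigmoid_time = timeit.timeit(lambda: 1.0 / (1.0 + np.exp(-a)), number=100)

print(f"ReLU:    {relu_time:.3f} s")
print(f"sigmoid: {sigmoid_time:.3f} s")
```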
620
What are the advantages of ReLU over sigmoid function in deep neural networks?
The main reason why ReLu is used is because it is simple, fast, and empirically it seems to work well. Empirically, early papers observed that training a deep network with ReLu tended to converge much more quickly and reliably than training a deep network with sigmoid activation. In the early days, people were able to train deep networks with ReLu but training deep networks with sigmoid flat-out failed. There are many hypotheses that have attempted to explain why this could be. First, with a standard sigmoid activation, the gradient of the sigmoid is typically some fraction between 0 and 1; if you have many layers, these multiply, and might give an overall gradient that is exponentially small, so each step of gradient descent will make only a tiny change to the weights, leading to slow convergence (the vanishing gradient problem). In contrast, with ReLu activation, the gradient of the ReLu is either 0 or 1, so after many layers often the gradient will include the product of a bunch of 1's, and thus the overall gradient is not too small or not too large. But this story might be too simplistic, because it doesn't take into account the way that we multiply by the weights and add up internal activations. Second, with sigmoid activation, the gradient goes to zero if the input is very large or very small. When the gradient goes to zero, gradient descent tends to have very slow convergence. In contrast, with ReLu activation, the gradient goes to zero if the input is negative but not if the input is large, so it might have only "half" of the problems of sigmoid. But this seems a bit naive too as it is clear that negative values still give a zero gradient. Since then, we've accumulated more experience and more tricks that can be used to train neural networks. For instance, batch normalization is very helpful. When you add in those tricks, the comparison becomes less clear. It is possible to successfully train a deep network with either sigmoid or ReLu, if you apply the right set of tricks. I suspect that ultimately there are several reasons for widespread use of ReLu today: Historical accident: we discovered ReLu in the early days before we knew about those tricks, so in the early days ReLu was the only choice that worked, and everyone had to use it. And now that everyone uses it, it is a safe choice and people keep using it. Efficiency: ReLu is faster to compute than the sigmoid function, and its derivative is faster to compute. This makes a significant difference to training and inference time for neural networks: only a constant factor, but constants can matter. Simplicity: ReLu is simple. Fragility: empirically, ReLu seems to be a bit more forgiving (in terms of the tricks needed to make the network train successfully), whereas sigmoid is more fiddly (to train a deep network, you need more tricks, and it's more fragile). Good enough: empirically, in many domains, other activation functions are no better than ReLu, or if they are better, are better by only a tiny amount. So, if ReLu is simple, fast, and about as good as anything else in most settings, it makes a reasonable default.
621
What are the advantages of ReLU over sigmoid function in deep neural networks?
The main benefit is that the derivative of ReLU is either 0 or 1, so multiplying by it won't cause the weights that are further away from the loss at the output (i.e., in earlier layers) to suffer from the vanishing gradient problem.
622
What are the advantages of ReLU over sigmoid function in deep neural networks?
ReLU does not have the vanishing gradient problem. Vanishing gradients lead to very small changes in the weights, proportional to the partial derivative of the error function. The gradient is multiplied n times in back-propagation to get the gradients of the lower layers. Multiplying the gradient n times makes it even smaller for the lower layers, leading to a very small change, or even no change, in the weights of the lower layers. Therefore, the deeper the network, the stronger the effect of vanishing gradients. This makes learning per iteration slower when activation functions that suffer from vanishing gradients, e.g. the sigmoid and tanh functions, are used. The ReLU function is also not computationally heavy to compute compared to the sigmoid function; this is well covered above.
623
What are the advantages of ReLU over sigmoid function in deep neural networks?
I read all the answers and still feel that I need to write a new one. Let us consider a linear activation function g(z)=z, which differs from Relu(z) only in the region z<0. If all the activation functions used in a network are g(z), then the network is equivalent to a simple single-layer linear network, which we know is not useful in learning complicated patterns. We need to introduce nonlinearity into the network. So the interesting part of Relu(z) is actually its combination of this linear part with another linear part that has a different slope (specifically, zero in Relu). This introduces the nonlinearity we need, and it seems to be the simplest nonlinearity one can think of. However, simplicity itself does not imply superiority over complexity in terms of practical use. Then why is this simple nonlinearity more powerful than the sigmoid function? Both relu and sigmoid have regions of zero (or near-zero) derivative. Other answers have claimed that relu has a reduced chance of encountering the vanishing gradient problem based on the facts that (1) its zero-derivative region is narrower than sigmoid's and (2) relu's derivative for z>0 is equal to one, which is neither damped nor enhanced when multiplied. Other possible reasons for the advantage of relu over sigmoid may be that (1) relu has a larger possible range than the sigmoid function for z>0, and (2) the exact zero values of relu for z<0 introduce a sparsity effect in the network, which forces the network to learn more robust features. If this is true, something like leaky relu, which is claimed to be an improvement over relu, may actually be damaging the efficacy of relu. Some people consider relu very strange at first glance. It turns out that the adoption of relu is a natural choice if we consider that (1) sigmoid is a modified version of the step function (g=0 for z<0, and g=1 for z>0) made continuous near zero, and (2) another imaginable modification of the step function would be replacing g=1 in z>0 by g=z, which is relu.
624
Which "mean" to use and when?
This answer may have a slightly more mathematical bent than you were looking for. The important thing to recognize is that all of these means are simply the arithmetic mean in disguise. The important characteristic in identifying which (if any!) of the three common means (arithmetic, geometric or harmonic) is the "right" mean is to find the "additive structure" in the question at hand. In other words suppose we're given some abstract quantities $x_1, x_2,\ldots,x_n$, which I will call "measurements", somewhat abusing this term below for the sake of consistency. Each of these three means can be obtained by (1) transforming each $x_i$ into some $y_i$, (2) taking the arithmetic mean and then (3) transforming back to the original scale of measurement. Arithmetic mean: Obviously, we use the "identity" transformation: $y_i = x_i$. So, steps (1) and (3) are trivial (nothing is done) and $\bar x_{\mathrm{AM}} = \bar y$. Geometric mean: Here the additive structure is on the logarithms of the original observations. So, we take $y_i = \log x_i$ and then to get the GM in step (3) we convert back via the inverse function of the $\log$, i.e., $\bar x_{\mathrm{GM}} = \exp(\bar{y})$. Harmonic mean: Here the additive structure is on the reciprocals of our observations. So, $y_i = 1/x_i$, whence $\bar x_{\mathrm{HM}} = 1/\bar{y}$. In physical problems, these often arise through the following process: We have some quantity $w$ that remains fixed in relation to our measurements $x_1,\ldots,x_n$ and some other quantities, say $z_1,\ldots,z_n$. Now, we play the following game: Keep $w$ and $z_1+\cdots+z_n$ constant and try to find some $\bar x$ such that if we replace each of our individual observations $x_i$ by $\bar x$, then the "total" relationship is still conserved. The distance–velocity–time example appears to be popular, so let's use it. Constant distance, varying times Consider a fixed distance traveled $d$. Now suppose we travel this distance $n$ different times at speeds $v_1,\ldots,v_n$, taking times $t_1,\ldots,t_n$. We now play our game. Suppose we wanted to replace our individual velocities with some fixed velocity $\bar v$ such that the total time remains constant. Note that we have $$ d - v_i t_i = 0 \>, $$ so that $\sum_i (d - v_i t_i) = 0$. We want this total relationship (total time and total distance traveled) conserved when we replace each of the $v_i$ by $\bar v$ in our game. Hence, $$ n d - \bar v \sum_i t_i = 0 \>, $$ and since each $t_i = d / v_i$, we get that $$ \bar v = \frac{n}{\frac{1}{v_1}+\cdots+\frac{1}{v_n}} = \bar v_{\mathrm{HM}} \>. $$ Note that the "additive structure" here is with respect to the individual times, and our measurements are inversely related to them, hence the harmonic mean applies. Varying distances, constant time Now, let's change the situation. Suppose that for $n$ instances we travel a fixed time $t$ at velocities $v_1,\ldots,v_n$ over distances $d_1,\ldots,d_n$. Now, we want the total distance conserved. We have $$ d_i - v_i t = 0 \>, $$ and the total system is conserved if $\sum_i (d_i - v_i t) = 0$. Playing our game again, we seek a $\bar v$ such that $$ \sum_i (d_i - \bar v t) = 0 \>, $$ but, since $d_i = v_i t$, we get that $$ \bar v = \frac{1}{n} \sum_i v_i = \bar v_{\mathrm{AM}} \>. $$ Here the additive structure we are trying to maintain is proportional to the measurements we have, so the arithmetic mean applies. 
Equal volume cube Suppose we have constructed an $n$-dimensional box with a given volume $V$ and our measurements are the side-lengths of the box. Then $$ V = x_1 \cdot x_2 \cdots x_n \>, $$ and suppose we wanted to construct an $n$-dimensional (hyper)cube with the same volume. That is, we want to replace our individual side-lengths $x_i$ by a common side-length $\bar x$. Then $$ V = \bar x \cdot \bar x \cdots \bar x = \bar x^n \>. $$ This easily indicates that we should take $\bar x = (x_1 \cdots x_n)^{1/n} = \bar x_{\mathrm{GM}}$. Note that the additive structure is in the logarithms, that is, $\log V = \sum_i \log x_i$ and we are trying to conserve the left-hand quantity. New means from old As an exercise, think about what the "natural" mean is in the situation where you let both the distances and times vary in the first example. That is, we have distances $d_i$, velocities $v_i$ and times $t_i$. We want to conserve the total distance and time traveled and find a constant $\bar v$ to achieve this. Exercise: What is the "natural" mean in this situation?
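A small sketch of this transform/average/invert recipe on arbitrary positive numbers (not part of the original answer):

```python
import numpy as np

x = np.array([2.0, 4.0, 8.0])               # arbitrary positive measurements

am = np.mean(x)                             # identity transform
gm = np.exp(np.mean(np.log(x)))             # log, average, then invert with exp
hm = 1.0 / np.mean(1.0 / x)                 # reciprocal, average, then invert again

print(am, gm, hm)                           # 4.666..., 4.0, ~3.43  (AM >= GM >= HM)
```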
Which "mean" to use and when?
This answer may have a slightly more mathematical bent than you were looking for. The important thing to recognize is that all of these means are simply the arithmetic mean in disguise. The important
Which "mean" to use and when? This answer may have a slightly more mathematical bent than you were looking for. The important thing to recognize is that all of these means are simply the arithmetic mean in disguise. The important characteristic in identifying which (if any!) of the three common means (arithmetic, geometric or harmonic) is the "right" mean is to find the "additive structure" in the question at hand. In other words suppose we're given some abstract quantities $x_1, x_2,\ldots,x_n$, which I will call "measurements", somewhat abusing this term below for the sake of consistency. Each of these three means can be obtained by (1) transforming each $x_i$ into some $y_i$, (2) taking the arithmetic mean and then (3) transforming back to the original scale of measurement. Arithmetic mean: Obviously, we use the "identity" transformation: $y_i = x_i$. So, steps (1) and (3) are trivial (nothing is done) and $\bar x_{\mathrm{AM}} = \bar y$. Geometric mean: Here the additive structure is on the logarithms of the original observations. So, we take $y_i = \log x_i$ and then to get the GM in step (3) we convert back via the inverse function of the $\log$, i.e., $\bar x_{\mathrm{GM}} = \exp(\bar{y})$. Harmonic mean: Here the additive structure is on the reciprocals of our observations. So, $y_i = 1/x_i$, whence $\bar x_{\mathrm{HM}} = 1/\bar{y}$. In physical problems, these often arise through the following process: We have some quantity $w$ that remains fixed in relation to our measurements $x_1,\ldots,x_n$ and some other quantities, say $z_1,\ldots,z_n$. Now, we play the following game: Keep $w$ and $z_1+\cdots+z_n$ constant and try to find some $\bar x$ such that if we replace each of our individual observations $x_i$ by $\bar x$, then the "total" relationship is still conserved. The distance–velocity–time example appears to be popular, so let's use it. Constant distance, varying times Consider a fixed distance traveled $d$. Now suppose we travel this distance $n$ different times at speeds $v_1,\ldots,v_n$, taking times $t_1,\ldots,t_n$. We now play our game. Suppose we wanted to replace our individual velocities with some fixed velocity $\bar v$ such that the total time remains constant. Note that we have $$ d - v_i t_i = 0 \>, $$ so that $\sum_i (d - v_i t_i) = 0$. We want this total relationship (total time and total distance traveled) conserved when we replace each of the $v_i$ by $\bar v$ in our game. Hence, $$ n d - \bar v \sum_i t_i = 0 \>, $$ and since each $t_i = d / v_i$, we get that $$ \bar v = \frac{n}{\frac{1}{v_1}+\cdots+\frac{1}{v_n}} = \bar v_{\mathrm{HM}} \>. $$ Note that the "additive structure" here is with respect to the individual times, and our measurements are inversely related to them, hence the harmonic mean applies. Varying distances, constant time Now, let's change the situation. Suppose that for $n$ instances we travel a fixed time $t$ at velocities $v_1,\ldots,v_n$ over distances $d_1,\ldots,d_n$. Now, we want the total distance conserved. We have $$ d_i - v_i t = 0 \>, $$ and the total system is conserved if $\sum_i (d_i - v_i t) = 0$. Playing our game again, we seek a $\bar v$ such that $$ \sum_i (d_i - \bar v t) = 0 \>, $$ but, since $d_i = v_i t$, we get that $$ \bar v = \frac{1}{n} \sum_i v_i = \bar v_{\mathrm{AM}} \>. $$ Here the additive structure we are trying to maintain is proportional to the measurements we have, so the arithmetic mean applies. 
Equal volume cube Suppose we have constructed an $n$-dimensional box with a given volume $V$ and our measurements are the side-lengths of the box. Then $$ V = x_1 \cdot x_2 \cdots x_n \>, $$ and suppose we wanted to construct an $n$-dimensional (hyper)cube with the same volume. That is, we want to replace our individual side-lengths $x_i$ by a common side-length $\bar x$. Then $$ V = \bar x \cdot \bar x \cdots \bar x = \bar x^n \>. $$ This easily indicates that we should take $\bar x = (x_i \cdots x_n)^{1/n} = \bar x_{\mathrm{GM}}$. Note that the additive structure is in the logarithms, that is, $\log V = \sum_i \log x_i$ and we are trying to conserve the left-hand quantity. New means from old As an exercise, think about what the "natural" mean is in the situation where you let both the distances and times vary in the first example. That is, we have distances $d_i$, velocities $v_i$ and times $t_i$. We want to conserve the total distance and time traveled and find a constant $\bar v$ to achieve this. Exercise: What is the "natural" mean in this situation?
Which "mean" to use and when? This answer may have a slightly more mathematical bent than you were looking for. The important thing to recognize is that all of these means are simply the arithmetic mean in disguise. The important
625
Which "mean" to use and when?
Expanding on @Brandon's excellent comment (which I think should be promoted to an answer): The geometric mean should be used when you are interested in multiplicative differences. Brandon notes that the geometric mean should be used when the ranges are different. This is usually correct. The reason is that we want to equalize the ranges. For example, suppose college applicants are rated on SAT score (0 to 800), grade point average in HS (0 to 4) and extracurricular activities (1 to 10). If a college wanted to average these and equalize the ranges (that is, weight increases in each quality relative to the range), then the geometric mean would be the way to go. But this isn't always true when we have scales with different ranges. If we were comparing income in different countries (including poor and rich ones), we would probably not want the geometric mean, but the arithmetic mean (or, more likely, the median or perhaps a trimmed mean). The only use I've seen for the harmonic mean is that of comparing rates. As an example: If you drive from New York to Boston at 40 MPH and return at 60 MPH, then your overall average is not the arithmetic mean of 50 MPH, but the harmonic mean. AM = $(40 + 60)/2 = 50$ HM = $2/(1/40 + 1/60) = 48$ To check that this is right for this simple example, imagine it is 120 miles from NYC to Boston. Then the drive there takes 3 hours, the drive home takes 2 hours, the total is 5 hours, and the distance is 240 miles. $240/5 = 48$.
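For completeness, the same check with Python's standard library (a trivial sketch, not from the original answer):

```python
from statistics import mean, harmonic_mean

print(mean([40, 60]))            # 50 -- the (wrong) arithmetic average
print(harmonic_mean([40, 60]))   # 48 -- matches 240 miles / 5 hours
```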
Which "mean" to use and when?
Expanding on @Brandon 's excellent comment (which I think should be promoted to answer): The geometric mean should be used when you are interested in multiplicative differences. Brandon notes that geo
Which "mean" to use and when? Expanding on @Brandon 's excellent comment (which I think should be promoted to answer): The geometric mean should be used when you are interested in multiplicative differences. Brandon notes that geometric mean should be used when the ranges are different. This is usually correct. The reason is that we want to equalize the ranges. For example, suppose college applicants are rated on SAT score (0 to 800), grade point average in HS (0 to 4) and extracurricular activities (1 to 10). If a college wanted to average these and equalize the ranges (that is, weight increases in each quality relative to the range) then geometric mean would be the way to go. But this isn't always true when we have scales with different ranges. If we were comparing income in different countries (including poor and rich ones), we would probably not want the geometric mean, but the arithmetic mean (or, more likely, the median or perhaps a trimmed mean). The only use I've seen for harmonic mean is that of comparing rates. As an example: If you drive from New York to Boston at 40 MPH, and return at 60 MPH, then your overall average is not the arithmetic mean of 50 MPH, but the harmonic mean. AM = $(40 + 60)/2 = 50$ HM = $2/(1/40 + 1/60) = 48$ to check that this is right for this simple example, imagine it is 120 miles from NYC to Boston. Then the drive there takes 3 hours, the drive home takes 2 hours, the total is 5 hours, and the distance is 240 miles. $240/5 = 48$
Which "mean" to use and when? Expanding on @Brandon 's excellent comment (which I think should be promoted to answer): The geometric mean should be used when you are interested in multiplicative differences. Brandon notes that geo
626
Which "mean" to use and when?
I'll try to boil it down to 3-4 rules of thumb and provide some more examples of the Pythagorean means. The relationship between the 3 means is HM < GM < AM for positive data with some variation. They will be equal if and only if there's no variation at all in the sample data. For data in levels, use the AM. Prices are a good example. For ratios, use the GM. Investment returns, relative prices, and the UN's Human Development Index are all examples. The HM is appropriate when dealing with rates. Here's a non-automotive example courtesy of David Giles: For instance, consider data on "hours worked per week" (a rate). Suppose that we have four people (sample observations), each of whom works a total of 2,000 hours. However, they work for different numbers of hours per week, as follows:

Person   Total Hours   Hours per Week   Weeks Taken
1        2,000         40               50
2        2,000         45               44.4444
3        2,000         35               57.142857
4        2,000         50               40
Total:   8,000                          191.587297

The arithmetic mean of the values in the third column is AM = 42.5 hours per week. However, notice what this value implies. Dividing the total number of hours worked by the sample members (8,000) by this average value yields 188.2353 as the total number of weeks worked by all four people. Now look at the last column in the table above. In fact the correct value for the total number of weeks worked by the sample members is 191.5873 weeks. If we compute the harmonic mean of the Hours per Week values in the third column of the table we get HM = 41.75642 hours (< AM), and dividing this number into the 8,000 hours gives us the correct result of 191.5873 for the total number of weeks worked. Here is a case where the harmonic mean provides the appropriate measure for the sample average. David also discusses the weighted versions of the 3 means, which come up in price indices used to measure inflation. I often find it hard to figure out whether something is a rate or a ratio. Returns on an investment are usually treated as ratios when calculating means, but they are also a rate since they are usually denominated in "% per unit of time." I think a useful distinction is that ratios are usually unitless, so returns are ratios because \$ of current value over \$ invested has the dollar signs cancel. Rates have different units in the numerator and the denominator. Thus, if you wanted to summarize the Big Mac Index for Northern European countries, you would use the equally weighted HM, because it is a rate. Divided by the number of countries, the HM would tell you how much currency you would need to afford a BM under the constraint that you had to have the same amount of each currency.
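A quick reproduction of these numbers (a sketch, not part of Giles's original post):

```python
hours_per_week = [40, 45, 35, 50]
total_hours = 8000

am = sum(hours_per_week) / len(hours_per_week)
hm = len(hours_per_week) / sum(1 / r for r in hours_per_week)

print(am, total_hours / am)   # 42.5, 188.235... weeks -- the wrong total
print(hm, total_hours / hm)   # 41.756..., 191.587... weeks -- the correct total
```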
Which "mean" to use and when?
I'll try to boil it down to 3-4 rules of thumb and provide some more examples of the Pythagorean means. The relationship between the 3 means is HM < GM < AM for non-negative data with some variatio
Which "mean" to use and when? I'll try to boil it down to 3-4 rules of thumb and provide some more examples of the Pythagorean means. The relationship between the 3 means is HM < GM < AM for non-negative data with some variation. They will be equal if and only if there's no variation at all in sample data. For data in levels, use the AM. Prices are a good example. For ratios, use the GM. Investment returns, relative prices, and the UN's Human Development Index are all examples. HM is appropriate when dealing with rates. Here's a non-automotive example courtesy of David Giles: For instance, consider data on "hours worked per week" (a rate). Suppose that we have four people (sample observations), each of whom work a total of 2,000 hours. However, they work for different numbers of hours per week, as follows: Person Total Hours Hours per Week Weeks Taken 1 2,000 40 50 2 2,000 45 44.4444 3 2,000 35 57.142857 4 2,000 50 40 Total: 8,000 191.587297 The Arithmetic Mean of the values in the third column is AM = 42.5 hours per week. However, notice what this value implies. Dividing the total number of weeks worked by the sample members (8,000) by this average value yields a value of 188.2353 as the total number of weeks worked by all four people. Now look at the last column in the table above. In fact the correct value for the total number of weeks worked by sample members is 191.5873 weeks. If we compute the Harmonic Mean for the values for Hours per Week in the third column of the table we get HM = 41.75642 hours (< AM), and dividing this number into the 8,000 hours gives us the correct result of 191.5873 for the total number of weeks worked. Here is a case where the Harmonic Mean provides the appropriate measure for the sample average. David also discusses the weighted version of the 3 means, which come up in price indices used to measure inflation. I often find it hard to figure out if something is a rate or a ratio. Returns on an investment are usually treated as ratios when calculating means, but they are also a rate since they are usually denominated in "% per unit of time." I think a useful distinction is that ratios are usually unitless, so returns are ratios because \$ of current value over $ invested has the dollars signs cancel. Rates have different units in the numerator and the denominator. Thus if you wanted to summarize the Big Mac Index for Northern European countries, you would use the equally weighted HM, because it is a rate. Divided by the number countries, the HM would tell you how much currency you would need to afford a BM under the constraint that you had to have the same amount of each currency.
Which "mean" to use and when? I'll try to boil it down to 3-4 rules of thumb and provide some more examples of the Pythagorean means. The relationship between the 3 means is HM < GM < AM for non-negative data with some variatio
627
Which "mean" to use and when?
A possible answer to your question ("how do I decide which mean is the most appropriate to use in a given context?") is the definition of mean as given by the Italian mathematician Oscar Chisini. How to Compute a Mean? The Chisini Approach and Its Applications is a paper with a more detailed explanation and some examples (mean travelling speed and others). Citation: R Graziani, P Veronese (2009). How to compute a mean? The Chisini approach and its applications. The American Statistician 63(1), pp. 33-36.
Which "mean" to use and when?
A possible answer to your question ("how do I decide which mean is the most appropriate to use in a given context?") is the definition of mean as given by the Italian mathematician Oscar Chisini. How
Which "mean" to use and when? A possible answer to your question ("how do I decide which mean is the most appropriate to use in a given context?") is the definition of mean as given by the Italian mathematician Oscar Chisini. How to Compute a Mean? The Chisini Approach and Its Applications is a paper with a more detailed explanation and some examples (mean travelling speed and others). Citation: R Graziani, P Veronese (2009). How to compute a mean? The Chisini approach and its applications. The American Statistician 63(1), pp. 33-36.
Which "mean" to use and when? A possible answer to your question ("how do I decide which mean is the most appropriate to use in a given context?") is the definition of mean as given by the Italian mathematician Oscar Chisini. How
628
Which "mean" to use and when?
I think a simple way to answer the question would be: if the mathematical structure is $xy = k$ (an inverse relationship between variables) and you're looking for an average, then you need the harmonic mean, which amounts to a weighted arithmetic mean; consider Harmonic average $= 2ab/(a+b) = \frac{b}{a+b}a + \frac{a}{a+b}b$. For example, dollar-cost averaging falls into this category because the amount of money you're investing (A) stays fixed, but the price per share (P) and the number of shares (N) vary (A = PN). In fact, if you think of an arithmetic average as a number equally centered between two numbers, the harmonic average is also a number equally centered between two numbers, but (and this is nice) the "center" is where the percentages (ratios) are equal. That is, $(x - a)/a = (b - x)/b$, where x is the harmonic average. If the mathematical structure is a direct variation $y = kx$, you use the arithmetic mean, which is what the harmonic mean reduces to in this case.
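A small R sketch of the dollar-cost-averaging case (hypothetical prices, my own example): because the invested amount A is fixed each period, the average price actually paid per share is the harmonic mean of the prices.

```r
# Dollar-cost averaging as a harmonic mean (made-up prices).
# A fixed amount is invested at each price, so shares bought vary
# inversely with price (A = P * N).

amount <- 100                 # dollars invested each period
price  <- c(10, 20, 25)       # hypothetical share prices

shares_bought  <- amount / price
avg_price_paid <- (amount * length(price)) / sum(shares_bought)  # total $ / total shares
harmonic_mean  <- length(price) / sum(1 / price)

avg_price_paid   # ~15.79
harmonic_mean    # ~15.79 -- the same number
```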
Which "mean" to use and when?
I think a simple way to answer the question would be: If the mathematical structure is xy = k (an inverse relationship between variables) and you're looking for an average, then you need to use the h
Which "mean" to use and when? I think a simple way to answer the question would be: If the mathematical structure is xy = k (an inverse relationship between variables) and you're looking for an average, then you need to use the harmonic mean--which amounts to a weighted arithmetic mean--consider Harmonic average = 2ab/(a+b) = a(b/a+b) + b(a/(a+b) For example: dollar cost averaging falls into this category because the amount of money you're investing (A) stays fixed, but the price per share (P) and number of shares (N) vary (A = PN). In fact, if you think of an arithmetic average as a number equally centered between two numbers, the harmonic average is also a number equally centered between two numbers but (and this is nice) the "center" is where the percentages (ratios) are equal. That is: (x - a)/a = (b -x)/b, where x is the harmonic average. If the mathematical structure is a direct variation y = kx, you use the arithmetic mean--which is what the harmonic mean reduces to in this case.
Which "mean" to use and when? I think a simple way to answer the question would be: If the mathematical structure is xy = k (an inverse relationship between variables) and you're looking for an average, then you need to use the h
629
What is the difference between data mining, statistics, machine learning and AI?
There is considerable overlap among these, but some distinctions can be made. Of necessity, I will have to over-simplify some things or give short-shrift to others, but I will do my best to give some sense of these areas. Firstly, Artificial Intelligence is fairly distinct from the rest. AI is the study of how to create intelligent agents. In practice, it is how to program a computer to behave and perform a task as an intelligent agent (say, a person) would. This does not have to involve learning or induction at all, it can just be a way to 'build a better mousetrap'. For example, AI applications have included programs to monitor and control ongoing processes (e.g., increase aspect A if it seems too low). Notice that AI can include darn-near anything that a machine does, so long as it doesn't do it 'stupidly'. In practice, however, most tasks that require intelligence require an ability to induce new knowledge from experiences. Thus, a large area within AI is machine learning. A computer program is said to learn some task from experience if its performance at the task improves with experience, according to some performance measure. Machine learning involves the study of algorithms that can extract information automatically (i.e., without on-line human guidance). It is certainly the case that some of these procedures include ideas derived directly from, or inspired by, classical statistics, but they don't have to be. Similarly to AI, machine learning is very broad and can include almost everything, so long as there is some inductive component to it. An example of a machine learning algorithm might be a Kalman filter. Data mining is an area that has taken much of its inspiration and techniques from machine learning (and some, also, from statistics), but is put to different ends. Data mining is carried out by a person, in a specific situation, on a particular data set, with a goal in mind. Typically, this person wants to leverage the power of the various pattern recognition techniques that have been developed in machine learning. Quite often, the data set is massive, complicated, and/or may have special problems (such as there are more variables than observations). Usually, the goal is either to discover / generate some preliminary insights in an area where there really was little knowledge beforehand, or to be able to predict future observations accurately. Moreover, data mining procedures could be either 'unsupervised' (we don't know the answer--discovery) or 'supervised' (we know the answer--prediction). Note that the goal is generally not to develop a more sophisticated understanding of the underlying data generating process. Common data mining techniques would include cluster analyses, classification and regression trees, and neural networks. I suppose I needn't say much to explain what statistics is on this site, but perhaps I can say a few things. Classical statistics (here I mean both frequentist and Bayesian) is a sub-topic within mathematics. I think of it as largely the intersection of what we know about probability and what we know about optimization. Although mathematical statistics can be studied as simply a Platonic object of inquiry, it is mostly understood as more practical and applied in character than other, more rarefied areas of mathematics. As such (and notably in contrast to data mining above), it is mostly employed towards better understanding some particular data generating process. 
Thus, it usually starts with a formally specified model, and from this are derived procedures to accurately extract that model from noisy instances (i.e., estimation--by optimizing some loss function) and to be able to distinguish it from other possibilities (i.e., inferences based on known properties of sampling distributions). The prototypical statistical technique is regression.
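As a rough illustration of the two prototypes named above (regression as the statistical technique, clustering as a data mining technique), here is a base-R sketch on simulated data; the specific functions and numbers are my own choices, not part of the original answer.

```r
# Toy contrast: a formally specified model estimated from noisy data
# vs. an unsupervised pattern-discovery procedure run on the same data.

set.seed(1)
x <- rnorm(100)
y <- 2 + 3 * x + rnorm(100)        # known data generating process

fit <- lm(y ~ x)                   # "statistics": estimate the specified model
coef(fit)                          # recovers roughly (2, 3)

cl <- kmeans(cbind(x, y), centers = 2)  # "data mining": discover structure, no model assumed
table(cl$cluster)                        # sizes of the discovered clusters
```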
630
What is the difference between data mining, statistics, machine learning and AI?
Many of the other answers have covered the main points, but you asked for a hierarchy if any exists, and the way I see it, although they are each disciplines in their own right, there is a hierarchy no one seems to have mentioned yet, since each builds upon the previous one. Statistics is just about the numbers, and quantifying the data. There are many tools for finding relevant properties of the data, but this is pretty close to pure mathematics. Data Mining is about using Statistics as well as other programming methods to find patterns hidden in the data so that you can explain some phenomenon. Data Mining builds intuition about what is really happening in some data and is still a little more towards math than programming, but uses both. Machine Learning uses Data Mining techniques and other learning algorithms to build models of what is happening behind some data so that it can predict future outcomes. Math is the basis for many of the algorithms, but this is more towards programming. Artificial Intelligence uses models built by Machine Learning and other ways to reason about the world and give rise to intelligent behavior, whether this is playing a game or driving a robot/car. Artificial Intelligence has some goal to achieve by predicting how actions will affect the model of the world, and it chooses the actions that will best achieve that goal. Very programming based. In short: Statistics quantifies numbers; Data Mining explains patterns; Machine Learning predicts with models; Artificial Intelligence behaves and reasons. Now, this being said, there will be some AI problems that fall only into AI, and similarly for the other fields, but most of the interesting problems today (self-driving cars, for example) could easily and correctly be called all of these. Hope this clears up the relationship between them you asked about.
631
What is the difference between data mining, statistics, machine learning and AI?
Statistics is concerned with probabilistic models, specifically inference on these models using data. Machine Learning is concerned with predicting a particular outcome given some data. Almost any reasonable machine learning method can be formulated as a formal probabilistic model, so in this sense machine learning is very much the same as statistics, but it differs in that it generally doesn't care about parameter estimates (just prediction) and it focuses on computational efficiency and large datasets. Data Mining is (as I understand it) applied machine learning. It focuses more on the practical aspects of deploying machine learning algorithms on large datasets. It is very much similar to machine learning. Artificial Intelligence is anything that is concerned with (some arbitrary definition of) intelligence in computers. So, it includes a lot of things. In general, probabilistic models (and thus statistics) have proven to be the most effective way to formally structure knowledge and understanding in a machine, to such an extent that all three of the others (AI, ML and DM) are today mostly subfields of statistics. Not the first discipline to become a shadow arm of statistics... (Economics, psychology, bioinformatics, etc.)
632
What is the difference between data mining, statistics, machine learning and AI?
We can say that they are all related, but they are all different things, although they have things in common, such as the use of clustering methods in both statistics and data mining. Let me try to briefly define each: Statistics is a very old discipline mainly based on classical mathematical methods, which can be used for the same purpose that data mining sometimes is, which is classifying and grouping things. Data mining consists of building models in order to detect the patterns that allow us to classify or predict situations given a set of facts or factors. Artificial intelligence (see Marvin Minsky) is the discipline that tries to emulate how the brain works with programming methods, for example building a program that plays chess. Machine learning is the task of building knowledge and storing it in some form in the computer; that form can be mathematical models, algorithms, etc. Anything that can help detect patterns.
633
What is the difference between data mining, statistics, machine learning and AI?
I'm most familiar with the machine learning / data mining axis, so I'll concentrate on that: Machine learning tends to be interested in inference in non-standard situations, for instance non-i.i.d. data, active learning, semi-supervised learning, and learning with structured data (for instance strings or graphs). ML also tends to be interested in theoretical bounds on what is learnable, which often form the basis for the algorithms used (e.g. the support vector machine). ML tends to be of a Bayesian nature. Data mining is interested in finding patterns in data that you don't already know about. I'm not sure that is significantly different from exploratory data analysis in statistics, whereas in machine learning there is generally a more well-defined problem to solve. ML tends to be more interested in small datasets where over-fitting is the problem, and data mining tends to be interested in large-scale datasets where the problem is dealing with the sheer quantity of data. Statistics and machine learning provide many of the basic tools used by data miners.
634
What is the difference between data mining, statistics, machine learning and AI?
Here is my take at it. Let's start with the two very broad categories: anything that even just pretends to be smart is artificial intelligence (including ML and DM); anything that summarizes data is statistics, although this usually only applies to methods that pay attention to the validity of the results (and these are often used in ML and DM). Both ML and DM are usually both AI and statistics, as they usually involve basic methods from both. Here are some of the differences: in machine learning, you have a well-defined objective (usually prediction); in data mining, the objective is essentially "find something I did not know before". Additionally, data mining usually involves much more data management, i.e. how to organize the data in efficient index structures and databases. Unfortunately, they are not that easy to separate. For example, there is "unsupervised learning", which is often more closely related to DM than to ML, as it cannot optimize towards a predefined goal. On the other hand, DM methods are hard to evaluate (how do you rate something you do not know?) and are often evaluated on the same tasks as machine learning, by leaving out some information. This, however, will usually make them appear to work worse than machine learning methods that can optimize towards the actual evaluation goal. Furthermore, they are often used in combination. For example, a data mining method (say, clustering, or unsupervised outlier detection) is used to preprocess the data, then the machine learning method is applied to the preprocessed data to train better classifiers. Machine learning is usually much easier to evaluate: there is a goal such as score or class prediction. You can compute precision and recall. In data mining, most evaluation is done by leaving out some information (such as class labels) and then testing whether your method discovered the same structure. This is naive in the sense that you assume the class labels encode the structure of the data completely; you actually punish data mining algorithms that discover something new in your data. Another way of indirectly evaluating it is to check how the discovered structure improves the performance of the actual ML algorithm (e.g. when partitioning data or removing outliers). Still, this evaluation is based on reproducing existing results, which is not really the data mining objective...
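A minimal R sketch of the "leave out the labels" style of evaluation described above, using the built-in iris data (my choice of example, not the answer's): cluster without the class labels, then cross-tabulate against them to see whether the known structure was recovered.

```r
# Cluster iris without using Species, then compare clusters to the held-out labels.
set.seed(42)
cl <- kmeans(iris[, 1:4], centers = 3)

# Rows = discovered clusters, columns = the labels that were left out.
table(cluster = cl$cluster, species = iris$Species)
```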
635
What is the difference between data mining, statistics, machine learning and AI?
I'd add some observations to what's been said... AI is a very broad term for anything that has to do with machines doing reasoning-like or sentient-appearing activities, ranging from planning a task or cooperating with other entities, to learning to operate limbs to walk. A pithy definition is that AI is anything computer-related that we don't know how to do well yet. (Once we know how to do it well, it generally gets its own name and is no longer "AI".) It's my impression, contrary to Wikipedia, that Pattern Recognition and Machine Learning are the same field, but the former is practiced by computer science folks while the latter is practiced by statisticians and engineers. (Many technical fields are discovered over and over by different subgroups, who often bring their own lingo and mindset to the table.) Data Mining, in my mind anyhow, takes Machine Learning/Pattern Recognition (the techniques that work with the data) and wraps them in database, infrastructure, and data validation/cleaning techniques.
636
What is the difference between data mining, statistics, machine learning and AI?
Sadly, the difference between these areas is largely where they're taught: statistics is based in maths departments; AI and machine learning in computer science departments; and data mining is more applied (used by business or marketing departments, developed by software companies). Firstly, AI (although it could mean any intelligent system) has traditionally meant logic-based approaches (e.g. expert systems) rather than statistical estimation. Statistics, based in maths departments, has had a very good theoretical understanding, together with strong applied experience in experimental sciences, where there is a clear scientific model and statistics is needed to deal with the limited experimental data available. The focus has often been on squeezing the maximum information from very small data sets. Furthermore, there is a bias towards mathematical proofs: you will not get published unless you can prove things about your approach. This has tended to mean that statistics has lagged in the use of computers to automate analysis. Again, the lack of programming knowledge has prevented statisticians from working on large-scale problems where computational issues become important (consider GPUs and distributed systems such as Hadoop). I believe that areas such as bioinformatics have now moved statistics more in this direction. Finally, I would say that statisticians are a more sceptical bunch: they do not claim that you discover knowledge with statistics; rather, a scientist comes up with a hypothesis, and the statistician's job is to check that the hypothesis is supported by the data. Machine learning is taught in CS departments, which unfortunately do not teach the appropriate mathematics: multivariable calculus, probability, statistics and optimisation are not commonplace... one has vague 'glamorous' concepts such as learning from examples... rather than boring statistical estimation (cf. e.g. The Elements of Statistical Learning, page 30). This tends to mean that there is very little theoretical understanding and an explosion of algorithms, as researchers can always find some dataset on which their algorithm proves better. So there are huge phases of hype as ML researchers chase the next big thing: neural networks, deep learning, etc. Unfortunately there is a lot more money in CS departments (think Google, Microsoft, together with the more marketable 'learning'), so the more sceptical statisticians are ignored. Finally, there is an empiricist bent: basically there is an underlying belief that if you throw enough data at the algorithm it will 'learn' the correct predictions. Whilst I am biased against ML, there is a fundamental insight in ML which statisticians have ignored: that computers can revolutionise the application of statistics. There are two ways: (a) automating the application of standard tests and models, e.g. running a battery of models (linear regression, random forests, etc.), trying different combinations of inputs, parameter settings, etc. This hasn't really happened, though I suspect that competitors on Kaggle develop their own automation techniques. (b) Applying standard statistical models to huge data: think of e.g. Google Translate or recommender systems (no one is claiming that people translate or recommend like that... but it's a useful tool). The underlying statistical models are straightforward, but there are enormous computational issues in applying these methods to billions of data points. Data mining is the culmination of this philosophy... developing automated ways of extracting knowledge from data.
However, it has a more practical approach: essentially it is applied to behavioural data, where there is no overarching scientific theory (marketing, fraud detection, spam, etc.), and the aim is to automate the analysis of large volumes of data: no doubt a team of statisticians could produce better analyses given enough time, but it is more cost-effective to use a computer. Furthermore, as D. Hand explains, it is the analysis of secondary data, i.e. data that is logged anyway rather than data that has been explicitly collected to answer a scientific question with a solid experimental design (see "Data Mining: Statistics and More?", D. Hand). So I would summarise that traditional AI is logic-based rather than statistical, machine learning is statistics without theory, statistics is 'statistics without computers', and data mining is the development of automated tools for statistical analysis with minimal user intervention.
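As a toy sketch of the "battery of models" idea in (a), here is what automating a small model search could look like in base R on the built-in mtcars data; this is only an illustration of the concept, not a recommended workflow.

```r
# Fit the same response against several candidate predictor sets and
# compare a summary statistic automatically.

predictor_sets <- list(c("wt"), c("wt", "hp"), c("wt", "hp", "qsec"))

fits <- lapply(predictor_sets, function(vars) {
  lm(reformulate(vars, response = "mpg"), data = mtcars)   # build formula programmatically
})

sapply(fits, function(m) summary(m)$adj.r.squared)          # crude automated comparison
```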
637
What is the difference between data mining, statistics, machine learning and AI?
Data mining is about discovering hidden patterns or unknown knowledge, which can be used for decision making by people. Machine learning is about learning a model to classify new objects.
638
What is the difference between data mining, statistics, machine learning and AI?
In my opinion, Artificial Intelligence could be considered the "superset" of fields such as Machine Learning, Data Mining, Pattern Recognition, etc. Statistics is a field of mathematics that includes all the mathematical models, techniques and theorems that are used in AI. Machine Learning is a field of AI that includes all the algorithms that apply the above-mentioned statistical models and make sense of the data, that is, predictive analytics such as clustering and classification. Data Mining is the science that uses all the techniques above (machine learning mainly) in order to extract useful and important patterns from data. Data Mining usually has to do with extracting useful information from massive datasets, that is, Big Data.
639
What is the difference between data mining, statistics, machine learning and AI?
How about: teaching machines to learn. Recognise meaningful patterns in data: data mining. Predict outcome from known patterns: ML. Find new features to remap raw data: AI. This bird brain really needs simple definitions.
640
What is the difference between data mining, statistics, machine learning and AI?
Often data mining tries to "predict" some future data, or to "explain" why something happens. Statistics is more used to validate hypotheses, in my eyes. But this is a subjective discussion. One obvious difference between statisticians and data miners can be found in the type of summary statistics they look at. Statisticians will often limit themselves to R² and accuracy, while data miners will look at AUC, ROC curves, lift curves, etc., and might also be concerned with employing a cost-related accuracy curve. Data mining packages (for instance the open-source Weka) have built-in techniques for input selection, support vector machine classification, etc., while these are for the most part just absent in statistical packages like JMP. I recently went to a course on "data mining in JMP" from the JMP people, and although it is a visually strong package, some essential data mining pre/post/mid techniques are just missing. Input selection was done manually to get insight into the data, whereas in data mining the intention is to release algorithms, smartly, on large data and automatically see what comes out. The course was obviously taught by statistics people, which emphasised the difference in mindset between the two.
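To make the difference in summary statistics concrete, here is a small, self-contained R sketch (simulated data, my own construction) that fits a logistic model and computes the AUC directly as the probability that a randomly chosen positive case is scored above a randomly chosen negative one.

```r
# Toy binary classifier evaluated the "data miner" way (AUC) rather than via R^2.
set.seed(7)
x <- rnorm(200)
y <- rbinom(200, 1, plogis(1.5 * x))   # simulated binary outcome

fit <- glm(y ~ x, family = binomial)
p   <- fitted(fit)                      # predicted probabilities

# AUC = P(score of a positive > score of a negative), ignoring ties.
auc <- mean(outer(p[y == 1], p[y == 0], ">"))
auc
```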
641
What is the difference between data mining, statistics, machine learning and AI?
With all due respect to the former answers, I believe that a huge part of the answer is still missing and it is in front of our eyes. Let me try to have a go at it: In data mining, just like the name sounds, you mine data. Mining means extracting knowledge from it, but in general it usually means you are calculating some measures or statistics on the data, like the Jaccard index as an example. In machine learning, you do not only mine or extract, you learn. Learning theory has its roots in statistics, but takes it further than that. In learning, you have a task that learns based on finite sample data and can generalize to unseen data. Your Facebook image recognition can still tag you in your photo even though every image has a new background, new textures and so forth. You cannot use any data mining approach on this problem. In artificial intelligence, you definitely learn from data as in machine learning, but then you need to perform other, higher-level tasks as well, like planning. You need to find policies based on what you have learned, and take it further than that. You cannot play a game of chess or Go by just learning good moves; you need to start finding policies, like what is a good initial position that will lead to more chances of winning, even though every game is a new game and no two games will follow the same set of moves.
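For concreteness, the Jaccard index mentioned above is just the size of the intersection over the size of the union of two sets; a two-line R illustration with made-up sets:

```r
# Jaccard index of two sets: |A ∩ B| / |A ∪ B|
jaccard <- function(a, b) length(intersect(a, b)) / length(union(a, b))

jaccard(c("a", "b", "c"), c("b", "c", "d"))   # 2 shared / 4 total = 0.5
```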
642
R's lmer cheat sheet
What's the difference between (~1 +....) and (1 | ...) and (0 | ...) etc.? Say you have variable V1 predicted by categorical variable V2, which is treated as a random effect, and continuous variable V3, which is treated as a linear fixed effect. Using lmer syntax, the simplest model (M1) is: V1 ~ (1|V2) + V3 This model will estimate: P1: A global intercept P2: Random effect intercepts for V2 (i.e. for each level of V2, that level's intercept's deviation from the global intercept) P3: A single global estimate for the effect (slope) of V3 The next most complex model (M2) is: V1 ~ (1|V2) + V3 + (0+V3|V2) This model estimates all the parameters from M1, but will additionally estimate: P4: The effect of V3 within each level of V2 (more specifically, the degree to which the V3 effect within a given level deviates from the global effect of V3), while enforcing a zero correlation between the intercept deviations and V3 effect deviations across levels of V2. This latter restriction is relaxed in a final, most complex model (M3): V1 ~ (1+V3|V2) + V3 in which all parameters from M2 are estimated while allowing correlation between the intercept deviations and V3 effect deviations within levels of V2. Thus, in M3, an additional parameter is estimated: P5: The correlation between intercept deviations and V3 deviations across levels of V2 Usually model pairs like M2 and M3 are computed and then compared to evaluate the evidence for a correlation between the by-V2 deviations (those of the intercept and of the V3 effect). Now consider adding another fixed effect predictor, V4. The model: V1 ~ (1+V3*V4|V2) + V3*V4 would estimate: P1: A global intercept P2: A single global estimate for the effect of V3 P3: A single global estimate for the effect of V4 P4: A single global estimate for the interaction between V3 and V4 P5: Deviations of the intercept from P1 in each level of V2 P6: Deviations of the V3 effect from P2 in each level of V2 P7: Deviations of the V4 effect from P3 in each level of V2 P8: Deviations of the V3-by-V4 interaction from P4 in each level of V2 P9: Correlation between P5 and P6 across levels of V2 P10: Correlation between P5 and P7 across levels of V2 P11: Correlation between P5 and P8 across levels of V2 P12: Correlation between P6 and P7 across levels of V2 P13: Correlation between P6 and P8 across levels of V2 P14: Correlation between P7 and P8 across levels of V2 Phew, that's a lot of parameters! And I didn't even bother to list the variance parameters estimated by the model. What's more, if you have a categorical variable with more than 2 levels that you want to model as a fixed effect, instead of a single effect for that variable you will always be estimating k-1 effects (where k is the number of levels), thereby exploding the number of parameters to be estimated by the model even further.
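As a hedged, concrete sketch of M1-M3, here is the same progression fitted to lme4's built-in sleepstudy data rather than the abstract V1/V2/V3 above (my mapping: Reaction plays V1, Subject plays V2, Days plays V3):

```r
library(lme4)

# M1: random intercepts only
m1 <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy)
# M2: random intercepts and random slopes, correlation forced to zero
m2 <- lmer(Reaction ~ Days + (1 | Subject) + (0 + Days | Subject), data = sleepstudy)
# M3: random intercepts and slopes with their correlation estimated
m3 <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy)

anova(m2, m3)   # the M2-vs-M3 comparison described above (refit with ML for the LRT)
```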
643
R's lmer cheat sheet
The general trick, as mentioned in another answer, is that the formula follows the form dependent ~ independent | grouping. The grouping is generally a random factor; you can include fixed factors without any grouping, and you can have additional random factors without any fixed factor (an intercept-only model). A + between factors indicates no interaction; a * indicates interaction.

For random factors, you have three basic variants:

Intercepts only by random factor: (1 | random.factor)
Slopes only by random factor: (0 + fixed.factor | random.factor)
Intercepts and slopes by random factor: (1 + fixed.factor | random.factor)

Note that variant 3 has the slope and the intercept calculated in the same grouping, i.e. at the same time. If we want the slope and the intercept calculated independently, i.e. without any assumed correlation between the two, we need a fourth variant:

Intercept and slope, separately, by random factor: (1 | random.factor) + (0 + fixed.factor | random.factor). An alternative way to write this is using the double-bar notation fixed.factor + (fixed.factor || random.factor).

There's also a nice summary in another response to this question that you should look at.

If you're up to digging into the math a bit, Barr et al. (2013) summarize the lmer syntax quite nicely in their Table 1, adapted here to meet the constraints of tableless markdown. That paper dealt with psycholinguistic data, so the two random effects are Subject and Item. Models and equivalent lme4 formula syntax:

(1) $Y_{si} = β_0 + β_{1}X_{i} + e_{si}$ : N/A (not a mixed-effects model)
(2) $Y_{si} = β_0 + S_{0s} + β_{1}X_{i} + e_{si}$ : Y ∼ X + (1∣Subject)
(3) $Y_{si} = β_0 + S_{0s} + (β_{1} + S_{1s})X_i + e_{si}$ : Y ∼ X + (1 + X∣Subject)
(4) $Y_{si} = β_0 + S_{0s} + I_{0i} + (β_{1} + S_{1s})X_i + e_{si}$ : Y ∼ X + (1 + X∣Subject) + (1∣Item)
(5) $Y_{si} = β_0 + S_{0s} + I_{0i} + β_{1}X_{i} + e_{si}$ : Y ∼ X + (1∣Subject) + (1∣Item)
(6) As (4), but $S_{0s}$, $S_{1s}$ independent : Y ∼ X + (1∣Subject) + (0 + X∣Subject) + (1∣Item)
(7) $Y_{si} = β_0 + I_{0i} + (β_{1} + S_{1s})X_i + e_{si}$ : Y ∼ X + (0 + X∣Subject) + (1∣Item)

References: Barr, Dale J., R. Levy, C. Scheepers and H. J. Tily (2013). Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68:255-278.
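As a concrete illustration of variants 3 and 4 (and the double-bar shorthand), here is a short sketch using the sleepstudy data that ships with lme4; nothing here is specific to the original poster's data.

library(lme4)

# Intercepts and slopes by Subject, correlated (variant 3):
m_corr   <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy)

# Intercepts and slopes by Subject, independent (variant 4, double-bar shorthand):
m_indep  <- lmer(Reaction ~ Days + (Days || Subject), data = sleepstudy)

# The same independent model written out explicitly:
m_indep2 <- lmer(Reaction ~ Days + (1 | Subject) + (0 + Days | Subject), data = sleepstudy)

VarCorr(m_corr)          # shows an intercept-slope correlation estimate
VarCorr(m_indep)         # no correlation parameter
anova(m_indep, m_corr)   # likelihood-ratio comparison of the two random-effects structures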
644
R's lmer cheat sheet
The | symbol indicates a grouping factor in mixed-effects models. As per Pinheiro & Bates:

...The formula also designates a response and, when available, a primary covariate. It is given as response ~ primary | grouping where response is an expression for the response, primary is an expression for the primary covariate, and grouping is an expression for the grouping factor.

Depending on which package you use to fit mixed-effects models in R, you may need to create a groupedData object to be able to use the grouping in the analysis (see the nlme package for details; lme4 doesn't seem to need this). I can't speak to the way you have specified your lmer model statements because I don't know your data. However, having multiple (1|foo) terms in the model line is unusual from what I have seen. What are you trying to model?
645
Can principal component analysis be applied to datasets containing a mix of continuous and categorical variables?
Although a PCA applied on binary data would yield results comparable to those obtained from a Multiple Correspondence Analysis (factor scores and eigenvalues are linearly related), there are more appropriate techniques to deal with mixed data types, namely Multiple Factor Analysis for mixed data available in the FactoMineR R package (FAMD()). If your variables can be considered as structured subsets of descriptive attributes, then Multiple Factor Analysis (MFA()) is also an option. The challenge with categorical variables is to find a suitable way to represent distances between variable categories and individuals in the factorial space. To overcome this problem, you can look for a non-linear transformation of each variable--whether it be nominal, ordinal, polynomial, or numerical--with optimal scaling. This is well explained in Gifi Methods for Optimal Scaling in R: The Package homals, and an implementation is available in the corresponding R package homals.
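A minimal sketch of FAMD() on a small mixed data frame may help; the toy variables below are invented for illustration, and the call assumes the FactoMineR interface as I recall it (FAMD(base, ncp, graph)).

library(FactoMineR)

set.seed(42)
toy <- data.frame(
  height = rnorm(50, 170, 10),                        # continuous
  weight = rnorm(50, 70, 12),                         # continuous
  smoker = factor(sample(c("yes", "no"), 50, TRUE)),  # categorical
  region = factor(sample(c("north", "south", "east"), 50, TRUE))  # categorical
)

res <- FAMD(toy, ncp = 3, graph = FALSE)  # mixed-data analogue of PCA
res$eig                                   # variance explained by each dimension
head(res$ind$coord)                       # coordinates ("scores") of the individuals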
646
Can principal component analysis be applied to datasets containing a mix of continuous and categorical variables?
A Google search "pca for discrete variables" gives this nice overview by S. Kolenikov (@StasK) and G. Angeles. To add to chl answer, the PC analysis is really analysis of eigenvectors of covariance matrix. So the problem is how to calculate the "correct" covariance matrix. One of the approaches is to use polychoric correlation.
647
Can principal component analysis be applied to datasets containing a mix of continuous and categorical variables?
I would suggest having a look at Linting & Kooij, 2012 "Non linear principal component analysis with CATPCA: a tutorial", Journal of Personality Assessment; 94(1). Abstract This article is set up as a tutorial for nonlinear principal components analysis (NLPCA), systematically guiding the reader through the process of analyzing actual data on personality assessment by the Rorschach Inkblot Test. NLPCA is a more flexible alternative to linear PCA that can handle the analysis of possibly nonlinearly related variables with different types of measurement level. The method is particularly suited to analyze nominal (qualitative) and ordinal (e.g., Likert-type) data, possibly combined with numeric data. The program CATPCA from the Categories module in SPSS is used in the analyses, but the method description can easily be generalized to other software packages.
648
Can principal component analysis be applied to datasets containing a mix of continuous and categorical variables?
Continuing on what @Martin F commented, recently I came across nonlinear PCAs. I was looking into nonlinear PCAs as a possible alternative when a continuous variable approaches the distribution of an ordinal variable as the data get sparser (this happens a lot in genetics: as the minor allele frequency of the variable gets lower and lower, you are left with very low counts, in which case you can't really justify a continuous distribution and you have to loosen the distributional assumptions by treating the variable as either ordinal or categorical). Nonlinear PCA can handle both of these conditions, but after discussing with statistical maestros in the genetics faculty, the consensus call was that nonlinear PCAs are not used very often and their behavior is not yet tested extensively (maybe they were referring only to genetics, so please take it with a grain of salt). Indeed it is a fascinating option. I hope I have added 2 cents (fortunately relevant) to the discussion.
649
Can principal component analysis be applied to datasets containing a mix of continuous and categorical variables?
The PCAmixdata R package implements principal component analysis, orthogonal rotation and multiple factor analysis for a mixture of quantitative and qualitative variables. The example in the package vignette shows results for both the continuous and the categorical variables.
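A short sketch of the package's basic workflow on invented toy data; the argument names (splitmix(), PCAmix(), X.quanti, X.quali) follow my recollection of the package and should be checked against the vignette.

library(PCAmixdata)

set.seed(7)
toy <- data.frame(
  x1 = rnorm(40), x2 = rnorm(40),                    # quantitative
  g1 = factor(sample(letters[1:3], 40, TRUE)),       # qualitative
  g2 = factor(sample(c("low", "high"), 40, TRUE))    # qualitative
)

split <- splitmix(toy)                       # separate quantitative and qualitative columns
fit <- PCAmix(X.quanti = split$X.quanti,
              X.quali  = split$X.quali,
              ndim = 3, graph = FALSE)
fit$eig                                      # eigenvalues / variance explained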
650
Can principal component analysis be applied to datasets containing a mix of continuous and categorical variables?
There is a recently developed approach to such problems: Generalized Low Rank Models (GLRM). One of the papers that uses this technique is even called "PCA on a Data Frame". PCA can be posed like this: for an $n \times m$ matrix $M$, find an $n \times k$ matrix $\hat{X}$ and a $k \times m$ matrix $\hat{Y}$ (this encodes the rank-$k$ constraint implicitly) such that $(\hat{X}, \hat{Y}) = \underset{X, Y}{\operatorname{argmin}} \| M - XY \|_F^2$. The 'generalized' in GLRM stands for changing $\| \cdot \|_F^2$ to something else and adding a regularization term.
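To make the generalization slightly more explicit, the GLRM objective can be sketched (following my reading of the Udell et al. formulation, so treat the notation as illustrative) as a sum of per-entry losses plus regularizers on the factors:

$$\min_{X,\,Y}\;\sum_{(i,j)\in\Omega} L_j\!\left(x_i y_j,\; M_{ij}\right)\;+\;\sum_{i=1}^{n} r(x_i)\;+\;\sum_{j=1}^{m} \tilde{r}(y_j)$$

Here $\Omega$ is the set of observed entries, $x_i$ are rows of $X$, and $y_j$ are columns of $Y$; letting the loss $L_j$ depend on the column is what allows numeric, Boolean, ordinal, and categorical columns to coexist, hence "PCA on a data frame".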
651
In linear regression, when is it appropriate to use the log of an independent variable instead of the actual values?
I always hesitate to jump into a thread with as many excellent responses as this, but it strikes me that few of the answers provide any reason to prefer the logarithm to some other transformation that "squashes" the data, such as a root or reciprocal. Before getting to that, let's recapitulate the wisdom in the existing answers in a more general way. Some non-linear re-expression of the dependent variable is indicated when any of the following apply:

- The residuals have a skewed distribution. The purpose of a transformation is to obtain residuals that are approximately symmetrically distributed (about zero, of course).
- The spread of the residuals changes systematically with the values of the dependent variable ("heteroscedasticity"). The purpose of the transformation is to remove that systematic change in spread, achieving approximate "homoscedasticity."
- To linearize a relationship.
- When scientific theory indicates. For example, chemistry often suggests expressing concentrations as logarithms (giving activities or even the well-known pH).
- When a more nebulous statistical theory suggests the residuals reflect "random errors" that do not accumulate additively.
- To simplify a model. For example, sometimes a logarithm can simplify the number and complexity of "interaction" terms.

(These indications can conflict with one another; in such cases, judgment is needed.)

So, when is a logarithm specifically indicated instead of some other transformation?

- The residuals have a "strongly" positively skewed distribution. In his book on EDA, John Tukey provides quantitative ways to estimate the transformation (within the family of Box-Cox, or power, transformations) based on rank statistics of the residuals. It really comes down to the fact that if taking the log symmetrizes the residuals, it was probably the right form of re-expression; otherwise, some other re-expression is needed.
- When the SD of the residuals is directly proportional to the fitted values (and not to some power of the fitted values).
- When the relationship is close to exponential.
- When residuals are believed to reflect multiplicatively accumulating errors.
- You really want a model in which marginal changes in the explanatory variables are interpreted in terms of multiplicative (percentage) changes in the dependent variable.

Finally, some non-reasons to use a re-expression:

- Making outliers not look like outliers. An outlier is a datum that does not fit some parsimonious, relatively simple description of the data. Changing one's description in order to make outliers look better is usually an incorrect reversal of priorities: first obtain a scientifically valid, statistically good description of the data and then explore any outliers. Don't let the occasional outlier determine how to describe the rest of the data!
- Because the software automatically did it. (Enough said!)
- Because all the data are positive. (Positivity often implies positive skewness, but it does not have to. Furthermore, other transformations can work better. For example, a root often works best with counted data.)
- To make "bad" data (perhaps of low quality) appear well behaved.
- To be able to plot the data. (If a transformation is needed to be able to plot the data, it's probably needed for one or more good reasons already mentioned. If the only reason for the transformation truly is for plotting, go ahead and do it--but only to plot the data. Leave the data untransformed for analysis.)
652
In linear regression, when is it appropriate to use the log of an independent variable instead of the actual values?
I always tell students there are three reasons to transform a variable by taking the natural logarithm. The reason for logging the variable will determine whether you want to log the independent variable(s), the dependent variable or both. To be clear, throughout I'm talking about taking the natural logarithm.

Firstly, to improve model fit, as other posters have noted. For instance, if your residuals aren't normally distributed then taking the logarithm of a skewed variable may improve the fit by altering the scale and making the variable more "normally" distributed. For instance, earnings is truncated at zero and often exhibits positive skew. If the variable has negative skew you could first invert the variable before taking the logarithm; I'm thinking here particularly of Likert scales that are entered as continuous variables. While this usually applies to the dependent variable, you occasionally have problems with the residuals (e.g. heteroscedasticity) caused by an independent variable, which can sometimes be corrected by taking the logarithm of that variable. For example, when running a model that explained lecturer evaluations from a set of lecturer and class covariates, the variable "class size" (i.e. the number of students in the lecture) had outliers which induced heteroscedasticity because the variance in the lecturer evaluations was smaller in larger cohorts than in smaller cohorts. Logging the class-size variable would help, although in this example either calculating robust standard errors or using weighted least squares may make interpretation easier.

The second reason for logging one or more variables in the model is for interpretation. I call this the convenience reason. If you log both your dependent (Y) and independent (X) variable(s), your regression coefficients ($\beta$) will be elasticities and interpretation would go as follows: a 1% increase in X would lead to a ceteris paribus $\beta$% increase in Y (on average). Logging only one side of the regression "equation" would lead to alternative interpretations as outlined below:

Y and X -- a one-unit increase in X would lead to a $\beta$ increase/decrease in Y
Log Y and Log X -- a 1% increase in X would lead to a $\beta$% increase/decrease in Y
Log Y and X -- a one-unit increase in X would lead to a $\beta*100$% increase/decrease in Y
Y and Log X -- a 1% increase in X would lead to a $\beta/100$ increase/decrease in Y

And finally there could be a theoretical reason for doing so. For example, some models that we would like to estimate are multiplicative and therefore nonlinear. Taking logarithms allows these models to be estimated by linear regression. Good examples of this include the Cobb-Douglas production function in economics and the Mincer equation in education. The Cobb-Douglas production function explains how inputs are converted into outputs:

$$Y = A L^\alpha K^\beta $$

where

$Y$ is the total production or output of some entity, e.g. firm, farm, etc.
$A$ is the total factor productivity (the change in output not caused by the inputs, e.g. by technology change or weather)
$L$ is the labour input
$K$ is the capital input
$\alpha$ & $\beta$ are output elasticities.

Taking logarithms of this makes the function easy to estimate using OLS linear regression as such:

$$\log(Y) = \log(A) + \alpha\log(L) + \beta\log(K)$$
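A small simulation may make the elasticity interpretation concrete; the data below are invented to mimic a Cobb-Douglas relationship with multiplicative error.

set.seed(123)
L <- exp(rnorm(200, 4, 0.5))                        # labour input
K <- exp(rnorm(200, 5, 0.5))                        # capital input
Y <- 2 * L^0.6 * K^0.3 * exp(rnorm(200, 0, 0.1))    # true elasticities 0.6 and 0.3

fit <- lm(log(Y) ~ log(L) + log(K))
coef(fit)   # the slopes recover roughly 0.6 and 0.3: a 1% increase in L is associated
            # with about a 0.6% increase in Y, holding K fixed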
653
In linear regression, when is it appropriate to use the log of an independent variable instead of the actual values?
For more on whuber's excellent point about reasons to prefer the logarithm to some other transformations such as a root or reciprocal, but focussing on the unique interpretability of the regression coefficients resulting from log-transformation compared to other transformations, see:

Oliver N. Keene. The log transformation is special. Statistics in Medicine 1995; 14(8):811-819. DOI:10.1002/sim.4780140810. (PDF of dubious legality available at https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.530.9640&rep=rep1&type=pdf).

If you log the independent variable x to base b, you can interpret the regression coefficient (and CI) as the change in the dependent variable y per b-fold increase in x. (Logs to base 2 are therefore often useful, as they correspond to the change in y per doubling in x; logs to base 10 are useful if x varies over many orders of magnitude, which is rarer). Other transformations, such as square root, have no such simple interpretation.

If you log the dependent variable y (not the original question, but one which several of the previous answers have addressed), then I find Tim Cole's idea of 'sympercents' attractive for presenting the results (I even used them in a paper once), though they don't seem to have caught on all that widely:

Tim J. Cole. Sympercents: symmetric percentage differences on the 100 log(e) scale simplify the presentation of log transformed data. Statistics in Medicine 2000; 19(22):3109-3125. DOI:10.1002/1097-0258(20001130)19:22<3109::AID-SIM558>3.0.CO;2-F

[I'm so glad Stat Med stopped using SICIs as DOIs...]
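A quick sketch of the base-2 point on simulated data (all names invented):

set.seed(1)
x <- runif(300, 1, 1000)
y <- 5 + 3 * log2(x) + rnorm(300)     # true effect: +3 in y for every doubling of x

fit <- lm(y ~ log2(x))
coef(fit)["log2(x)"]                  # close to 3: estimated change in y per doubling of x
confint(fit)                          # the CI carries the same "per doubling" interpretation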
654
In linear regression, when is it appropriate to use the log of an independent variable instead of the actual values?
One typically takes the log of an input variable to scale it and change the distribution (e.g. to make it normally distributed). It cannot be done blindly however; you need to be careful when making any scaling to ensure that the results are still interpretable. This is discussed in most introductory statistics texts. You can also read Andrew Gelman's paper on "Scaling regression inputs by dividing by two standard deviations" for a discussion on this. He also has a very nice discussion on this at the beginning of "Data Analysis Using Regression and Multilevel/Hierarchical Models". Taking the log is not an appropriate method for dealing with bad data/outliers.
655
In linear regression, when is it appropriate to use the log of an independent variable instead of the actual values?
You tend to take logs of the data when there is a problem with the residuals. For example, if you plot the residuals against a particular covariate and observe an increasing/decreasing pattern (a funnel shape), then a transformation may be appropriate. Non-random residuals usually indicate that your model assumptions are wrong, i.e. non-normal data. Some data types automatically lend themselves to logarithmic transformations. For example, I usually take logs when dealing with concentrations or age. Although transformations aren't primarily used to deal with outliers, they do help, since taking logs squashes your data.
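A minimal sketch of that diagnostic on simulated data (variable names invented): a funnel in the residual plot on the raw scale that largely disappears after logging the response.

set.seed(2)
x <- runif(200, 1, 10)
y <- exp(0.2 + 0.3 * x + rnorm(200, 0, 0.3))   # multiplicative error: funnel on the raw scale

fit_raw <- lm(y ~ x)
plot(fitted(fit_raw), resid(fit_raw))          # spread grows with the fitted values (funnel)

fit_log <- lm(log(y) ~ x)
plot(fitted(fit_log), resid(fit_log))          # roughly constant spread after taking logs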
656
In linear regression, when is it appropriate to use the log of an independent variable instead of the actual values?
Transformation of an independent variable $X$ is one occasion where one can just be empirical without distorting inference, as long as one is honest about the number of degrees of freedom in play. One way is to use regression splines for continuous $X$ not already known to act linearly. To me it's not a question of log vs. original scale; it's a question of which transformation of $X$ fits the data. Normality of residuals is not a criterion here. When $X$ is extremely skewed, cubing $X$ as is needed in cubic spline functions results in extreme values that can sometimes cause numerical problems. I solve this by fitting the cubic spline function on $\sqrt[3]{X}$. The R rms package considers the innermost variable as the predictor, so plotting predicted values will have $X$ on the $x$-axis. Example:

require(rms)
dd <- datadist(mydata); options(datadist='dd')
cr <- function(x) x ^ (1/3)
f <- ols(y ~ rcs(cr(X), 5), data=mydata)
ggplot(Predict(f))   # plot spline of cr(X) against X

This fits a restricted cubic spline in $\sqrt[3]{X}$ with 5 knots at default quantile locations. The $X$ fit has 4 d.f. (one linear term, 3 nonlinear terms). Confidence bands and tests of association respect these 4 d.f., fully recognizing "transformation uncertainty".
657
In linear regression, when is it appropriate to use the log of an independent variable instead of the actual values?
I would like to respond to user1690130's question that was left as a comment to the first answer on Oct 26 '12 and reads as follows: "What about variables like population density in a region or the child-teacher ratio for each school district or the number of homicides per 1000 in the population? I have seen professors take the log of these variables. It is not clear to me why. For example, isn't the homicide rate already a percentage? The log would the the percentage change of the rate? Why would the log of child-teacher ratio be preferred?"

I was looking to answer a similar problem and wanted to share what my old stats coursebook (Jeffrey Wooldridge. 2006. Introductory Econometrics - A Modern Approach, 4th Edition. Chapter 6, Multiple Regression Analysis: Further Issues, p. 191) says about it. Wooldridge advises:

Variables that appear in a proportion or percent form, such as the unemployment rate, the participation rate in a pension plan, the percentage of students passing a standardized exam, and the arrest rate on reported crimes - can appear in either the original or logarithmic form, although there is a tendency to use them in level forms. This is because any regression coefficients involving the original variable - whether it is the dependent or the independent variable - will have a percentage point change interpretation. If we use, say, log(unem) in a regression, where unem is the percentage of unemployed individuals, we must be very careful to distinguish between a percentage point change and a percentage change. Remember, if unem goes from 8 to 9, this is an increase of one percentage point, but a 12.5% increase from the initial unemployment level. Using the log means that we are looking at the percentage change in the unemployment rate: log(9) - log(8) = 0.118, or 11.8%, which is the logarithmic approximation to the actual 12.5% increase.

Based on this, and piggybacking on whuber's earlier comment to user1690130's question, I would avoid using the logarithm of a density or percentage-rate variable to keep interpretation simple, unless using the log form produces a major tradeoff, such as being able to reduce skewness of the density or rate variable.
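The percentage-point versus percent-change arithmetic in the Wooldridge quote is easy to verify directly:

9 - 8                    # 1 percentage point
(9 - 8) / 8 * 100        # 12.5 percent increase from the initial level
(log(9) - log(8)) * 100  # 11.78: the log-difference approximation to that 12.5%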
658
In linear regression, when is it appropriate to use the log of an independent variable instead of the actual values?
Shane's point that taking the log to deal with bad data is well taken, as is Colin's regarding the importance of normal residuals. In practice I find that usually you can get normal residuals if the input and output variables are also relatively normal. In practice this means eyeballing the distribution of the transformed and untransformed datasets and assuring oneself that they have become more normal, and/or conducting tests of normality (e.g. the Shapiro-Wilk or Kolmogorov-Smirnov test) and determining whether the outcome is more normal. Interpretability and tradition are also important. For example, in cognitive psychology log transforms of reaction time are often used; however, to me at least, the interpretation of a log RT is unclear. Furthermore, one should be cautious using log-transformed values, as the shift in scale can change a main effect into an interaction and vice versa.
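A small sketch of the "check whether the residuals became more normal" step, using simulated positively skewed data and the Shapiro-Wilk test (all names invented):

set.seed(3)
x <- rnorm(150)
y <- exp(1 + 0.5 * x + rnorm(150, 0, 0.4))   # log-normal-style outcome

shapiro.test(resid(lm(y ~ x)))        # typically rejects normality of the raw-scale residuals
shapiro.test(resid(lm(log(y) ~ x)))   # residuals after the log transform look far more normal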
659
What does the hidden layer in a neural network compute?
Three sentence version: Each layer can apply any function you want to the previous layer (usually a linear transformation followed by a squashing nonlinearity). The hidden layers' job is to transform the inputs into something that the output layer can use. The output layer transforms the hidden layer activations into whatever scale you wanted your output to be on. Like you're 5: If you want a computer to tell you if there's a bus in a picture, the computer might have an easier time if it had the right tools. So your bus detector might be made of a wheel detector (to help tell you it's a vehicle) and a box detector (since the bus is shaped like a big box) and a size detector (to tell you it's too big to be a car). These are the three elements of your hidden layer: they're not part of the raw image, they're tools you designed to help you identify busses. If all three of those detectors turn on (or perhaps if they're especially active), then there's a good chance you have a bus in front of you. Neural nets are useful because there are good tools (like backpropagation) for building lots of detectors and putting them together. Like you're an adult A feed-forward neural network applies a series of functions to the data. The exact functions will depend on the neural network you're using: most frequently, these functions each compute a linear transformation of the previous layer, followed by a squashing nonlinearity. Sometimes the functions will do something else (like computing logical functions in your examples, or averaging over adjacent pixels in an image). So the roles of the different layers could depend on what functions are being computed, but I'll try to be very general. Let's call the input vector $x$, the hidden layer activations $h$, and the output activation $y$. You have some function $f$ that maps from $x$ to $h$ and another function $g$ that maps from $h$ to $y$. So the hidden layer's activation is $f(x)$ and the output of the network is $g(f(x))$. Why have two functions ($f$ and $g$) instead of just one? If the level of complexity per function is limited, then $g(f(x))$ can compute things that $f$ and $g$ can't do individually. An example with logical functions: For example, if we only allow $f$ and $g$ to be simple logical operators like "AND", "OR", and "NAND", then you can't compute other functions like "XOR" with just one of them. On the other hand, we could compute "XOR" if we were willing to layer these functions on top of each other: First layer functions: Make sure that at least one element is "TRUE" (using OR) Make sure that they're not all "TRUE" (using NAND) Second layer function: Make sure that both of the first-layer criteria are satisfied (using AND) The network's output is just the result of this second function. The first layer transforms the inputs into something that the second layer can use so that the whole network can perform XOR. An example with images: Slide 61 from this talk--also available here as a single image--shows (one way to visualize) what the different hidden layers in a particular neural network are looking for. The first layer looks for short pieces of edges in the image: these are very easy to find from raw pixel data, but they're not very useful by themselves for telling you if you're looking at a face or a bus or an elephant. The next layer composes the edges: if the edges from the bottom hidden layer fit together in a certain way, then one of the eye-detectors in the middle of left-most column might turn on. 
It would be hard to make a single layer that was so good at finding something so specific from the raw pixels: eye detectors are much easier to build out of edge detectors than out of raw pixels. The next layer up composes the eye detectors and the nose detectors into faces. In other words, these will light up when the eye detectors and nose detectors from the previous layer turn on with the right patterns. These are very good at looking for particular kinds of faces: if one or more of them lights up, then your output layer should report that a face is present. This is useful because face detectors are easy to build out of eye detectors and nose detectors, but really hard to build out of pixel intensities. So each layer gets you farther and farther from the raw pixels and closer to your ultimate goal (e.g. face detection or bus detection). Answers to assorted other questions "Why are some layers in the input layer connected to the hidden layer and some are not?" The disconnected nodes in the network are called "bias" nodes. There's a really nice explanation here. The short answer is that they're like intercept terms in regression. "Where do the "eye detector" pictures in the image example come from?" I haven't double-checked the specific images I linked to, but in general, these visualizations show the set of pixels in the input layer that maximize the activity of the corresponding neuron. So if we think of the neuron as an eye detector, this is the image that the neuron considers to be most eye-like. Folks usually find these pixel sets with an optimization (hill-climbing) procedure. In this paper by some Google folks with one of the world's largest neural nets, they show a "face detector" neuron and a "cat detector" neuron this way, as well as a second way: They also show the actual images that activate the neuron most strongly (figure 3, figure 16). The second approach is nice because it shows how flexible and nonlinear the network is--these high-level "detectors" are sensitive to all these images, even though they don't particularly look similar at the pixel level. Let me know if anything here is unclear or if you have any more questions.
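The OR/NAND/AND construction of XOR sketched above is small enough to write out directly; the following R snippet is only a toy illustration of that layering, and the gate functions are hypothetical helpers rather than part of any library.

# first-layer "detectors"
or_gate   <- function(a, b) as.integer(a + b >= 1)
nand_gate <- function(a, b) as.integer(!(a & b))
# second-layer function combines the two hidden outputs
and_gate  <- function(a, b) as.integer(a & b)

xor_net <- function(a, b) {
  h1 <- or_gate(a, b)     # hidden unit 1: at least one input is TRUE
  h2 <- nand_gate(a, b)   # hidden unit 2: not both inputs are TRUE
  and_gate(h1, h2)        # output: both hidden criteria satisfied
}

# truth table check: prints 0, 1, 1, 0
for (a in 0:1) for (b in 0:1) cat(a, b, "->", xor_net(a, b), "\n")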
660
What does the hidden layer in a neural network compute?
I'm going to describe my view of this in two steps: The input-to-hidden step and the hidden-to-output step. I'll do the hidden-to-output step first because it seems less interesting (to me). Hidden-to-Output The output of the hidden layer could be different things, but for now let's suppose that they come out of sigmoidal activation functions. So they are values between 0 and 1, and for many inputs they may just be 0's and 1's. I like to think of the transformation between these hidden neurons' outputs and the output layer as just a translation (in the linguistic sense, not the geometric sense). This is certainly true if the transformation is invertible, and if not then something was lost in translation. But you basically just have the hidden neurons' outputs seen from a different perspective. Input-to-Hidden Let's say you have 3 input neurons (just so I can easily write some equations here) and some hidden neurons. Each hidden neuron gets as input a weighted sum of inputs, so for example maybe hidden_1 = 10 * (input_1) + 0 * (input_2) + 2 * (input_3) This means that the value of hidden_1 is very sensitive to the value of input_1, not at all sensitive to input_2 and only slightly sensitive to input_3. So you could say that hidden_1 is capturing a particular aspect of the input, which you might call the "input_1 is important" aspect. The output from hidden_1 is usually formed by passing the input through some function, so let's say you are using a sigmoid function. This function takes on values between 0 and 1; so think of it as a switch which says that either input_1 is important or it isn't. So that's what the hidden layer does! It extracts aspects, or features of the input space. Now weights can be negative too! Which means that you can get aspects like "input_1 is important BUT ALSO input_2 takes away that importance": hidden_2 = 10 * (input_1) - 10 * (input_2 ) + 0 * (input_3) or input_1 and input_3 have "shared" importance: hidden_3 = 5 * (input_1) + 0 * (input_2) + 5 * (input_3) More Geometry If you know some linear algebra, you can think geometrically in terms of projecting along certain directions. In the example above, I projected along the input_1 direction. Let's look at hidden_1 again, from above. Once the value at input_1 is big enough, the output of the sigmoid activation function will just stay at 1, it won't get any bigger. In other words, more and more input_1 will make no difference to the output. Similarly, if it moves in the opposite (i.e. negative) direction, then after a point the output will be unaffected. Ok, fine. But suppose we don't want sensitivity in the direction of infinity in certain a direction, and we want it to be activated only for a certain range on a line. Meaning for very negative values there is no effect, and for very positive values there is no effect, but for values between say, 5 and 16 you want it to wake up. This is where you would use a radial basis function for your activation function. Summary The hidden layer extracts features of the input space, and the output layer translates them into the desired context. There may be much more to it than this, what with multi-layer networks and such, but this is what I understand so far. EDIT: This page with its wonderful interactive graphs does a better job than my long and cumbersome answer above could ever do: http://neuralnetworksanddeeplearning.com/chap4.html
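To make the weighted-sum-plus-sigmoid picture concrete, here is a small R sketch using the example weights from this answer; the input values are arbitrary illustrative numbers, not learned or real data.

sigmoid <- function(z) 1 / (1 + exp(-z))

x <- c(input_1 = 2, input_2 = 1, input_3 = -1)   # an arbitrary input vector

W <- rbind(hidden_1 = c(10,   0, 2),   # sensitive mainly to input_1
           hidden_2 = c(10, -10, 0),   # input_1 important, input_2 cancels it
           hidden_3 = c( 5,   0, 5))   # input_1 and input_3 share importance

h <- sigmoid(W %*% x)   # hidden-layer outputs, each squashed between 0 and 1
h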
661
What does the hidden layer in a neural network compute?
I'll try to add to the intuitive operational description... A good intuitive way to think about a neural network is to think about what a linear regression model attempts to do. A linear regression will take some inputs and come up with a linear model which takes each input value times some model optimal weighting coefficients and tries to map the sum of those results to an output response that closely matches the true output. The coefficients are determined by finding the values which will minimize some error metric between the desired output value and the value that is learned by the model. Another way to say it is that the linear model will try to create coefficient multipliers for each input and sum all of them to try to determine the relationship between the (multiple) input and (typically single) output values. That same model can almost be thought of as the basic building block of a neural network; a single unit perceptron. But the single unit perceptron has one more piece that will process the sum of the weighted data in a non-linear manner. It typically uses a squashing function (sigmoid, or tanh) to accomplish this. So you have the basic unit of the hidden layer, which is a block that will sum a set of weighted inputs-- it then passes the summed response to a non-linear function to create an (hidden layer) output node response. The bias unit is just as in linear regression, a constant offset which is added to each node to be processed. Because of the non-linear processing block, you are no longer limited to linear only responses (as in the linear regression model). Ok, but when you have many of the single perceptron units working together, each can have different input weight multipliers and different responses (even though ALL process the same set of inputs with the same non-linear block previously described). What makes the responses different is that each has different coefficient weights that are learned by the neural network via training (some forms include gradient descent). The result of all of the perceptrons are then processed again and passed to an output layer, just as the individual blocks were processed. The question then is how are the correct weights determined for all of the blocks? A common way to learn the correct weights is by starting with random weights and measuring the error response between the true actual output and the learned model output. The error will typically get passed backwards through the network and the feedback algorithm will individually increase or decrease those weights by some proportion to the error. The network will repeatedly iterate by passing forward, measuring the output response, then updating (passing backwards weight adjustments) and correcting the weights until some satisfactory error level is reached. At that point you have a regression model that can be more flexible than a linear regression model, it is what is commonly called a universal function approximator. One of the ways that really helped me to learn how a neural network truly operates is to study the code of a neural network implementation and build it. One of the best basic code explanations can be found in the neural network chapter of (the freely available) 'The Scientist and Engineer's guide to DSP' Ch. 26. It is mostly written in very basic language (I think it was fortran) that really helps you to see what is going on.
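As a rough sketch of the "start with random weights, measure the error, feed it back" loop described above, here is a minimal single-unit (logistic) neuron trained by plain gradient descent in R. It is a toy illustration on simulated data and makes no claim to reproduce the book's code.

set.seed(2)
sigmoid <- function(z) 1 / (1 + exp(-z))

# toy data: the label depends on a weighted sum of two inputs plus noise
X <- cbind(1, matrix(rnorm(200), ncol = 2))                 # first column is the bias input
y <- as.integer(X %*% c(-0.5, 2, -1) + rnorm(100, sd = 0.3) > 0)

w  <- rnorm(3, sd = 0.1)   # random initial weights (including the bias weight)
lr <- 0.1                  # learning rate

for (step in 1:500) {
  p    <- sigmoid(X %*% w)              # forward pass
  grad <- t(X) %*% (p - y) / nrow(X)    # error passed back to the weights
  w    <- w - lr * grad                 # weight update
}
round(w, 2)   # learned weights should roughly track the generating ones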
662
What does the hidden layer in a neural network compute?
Let us take the case of classification. What the output layer is trying to do is estimate the conditional probability that your sample belongs to a given class, i.e. how likely it is for that sample to belong to a given class. In geometrical terms, combining layers in a non-linear fashion via the threshold functions allows neural networks to solve non-convex problems (speech recognition, object recognition, and so on), which are the most interesting ones. In other words, the output units are able to generate non-convex decision functions like those depicted here. One can view the units in hidden layers as learning complex features from the data that allow the output layer to better discern one class from another and to generate more accurate decision boundaries. For example, in the case of face recognition, units in the first layers learn edge-like features (detecting edges at given orientations and positions) and higher layers learn to combine those to become detectors for facial features like the nose, mouth or eyes. The weights of each hidden unit represent those features, and its output (assuming it is a sigmoid) represents the probability that that feature is present in your sample. In general, the meaning of the outputs of the output and hidden layers depends on the problem you are trying to solve (regression, classification) and the loss function you employ (cross entropy, least squared errors, ...)
663
Generative vs. discriminative
The fundamental difference between discriminative models and generative models is: Discriminative models learn the (hard or soft) boundary between classes Generative models model the distribution of individual classes To answer your direct questions: SVMs (Support Vector Machines) and DTs (Decision Trees) are discriminative because they learn explicit boundaries between classes. SVM is a maximal margin classifier, meaning that it learns a decision boundary that maximizes the distance between samples of the two classes, given a kernel. The distance between a sample and the learned decision boundary can be used to make the SVM a "soft" classifier. DTs learn the decision boundary by recursively partitioning the space in a manner that maximizes the information gain (or another criterion). It is possible to make a generative form of logistic regression in this manner. Note that you are not using the full generative model to make classification decisions, though. There are a number of advantages generative models may offer, depending on the application. Say you are dealing with non-stationary distributions, where the online test data may be generated by different underlying distributions than the training data. It is typically more straightforward to detect distribution changes and update a generative model accordingly than do this for a decision boundary in an SVM, especially if the online updates need to be unsupervised. Discriminative models also do not generally function for outlier detection, though generative models generally do. What's best for a specific application should, of course, be evaluated based on the application. (This quote is convoluted, but this is what I think it's trying to say) Generative models are typically specified as probabilistic graphical models, which offer rich representations of the independence relations in the dataset. Discriminative models do not offer such clear representations of relations between features and classes in the dataset. Instead of using resources to fully model each class, they focus on richly modeling the boundary between classes. Given the same amount of capacity (say, bits in a computer program executing the model), a discriminative model thus may yield more complex representations of this boundary than a generative model.
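To make the boundary-versus-distribution distinction tangible, here is a hedged R sketch on simulated two-class data: a simple generative classifier (class-conditional Gaussians combined by Bayes' rule) next to a discriminative one (logistic regression via glm). It only illustrates the idea; neither model is proposed as the right choice for any particular application.

set.seed(3)
n <- 200
y <- rbinom(n, 1, 0.5)
x <- rnorm(n, mean = ifelse(y == 1, 2, 0), sd = 1)   # one feature, two classes

# generative: model P(x | y) for each class and P(y), then apply Bayes' rule
mu0 <- mean(x[y == 0]); s0 <- sd(x[y == 0])
mu1 <- mean(x[y == 1]); s1 <- sd(x[y == 1])
prior1 <- mean(y)
post1 <- function(x_new) {
  num <- dnorm(x_new, mu1, s1) * prior1
  num / (num + dnorm(x_new, mu0, s0) * (1 - prior1))
}

# discriminative: model P(y | x) directly
fit <- glm(y ~ x, family = binomial)

post1(1.0)                                              # generative estimate of P(y = 1 | x = 1)
predict(fit, data.frame(x = 1.0), type = "response")    # discriminative estimate of the same quantity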
664
Generative vs. discriminative
(hamner's answer is great, so just cross-posting my answer from MetaOptimize for completeness.) I think of generative algorithms as providing a model of how the data is actually generated (I think of them as giving you a model of both $P(X|Y)$ and $P(Y)$, rather than of $P(X, Y)$, though I guess it's equivalent), and discriminative algorithms as simply providing classification splits (and not necessarily in a probabilistic manner). Compare, for instance, Gaussian mixture models and k-mean clustering. In the former, we have a nice probabilistic model for how points are generated (pick a component with some probability, and then emit a point by sampling from the component's Gaussian distribution), but there's nothing we can really say about the latter. Note that generative algorithms have discriminative properties, since you can get $P(Y|X)$ once you have $P(X|Y)$ and $P(Y)$ (by Bayes' Theorem), though discriminative algorithms don't really have generative properties. 1: Discriminative algorithms allow you to classify points, without providing a model of how the points are actually generated. So these could be either: probabilistic algorithms try to learn $P(Y|X)$ (e.g., logistic regression); or non-probabilistic algorithms that try to learn the mappings directly from the points to the classes (e.g., perceptron and SVMs simply give you a separating hyperplane, but no model of generating new points). So yes, discriminative classifiers are any classifiers that aren't generative. Another way of thinking about this is that generative algorithms make some kind of structure assumptions on your model, but discriminative algorithms make fewer assumptions. For example, Naive Bayes assumes conditional independence of your features, while logistic regression (the discriminative "counterpart" of Naive Bayes) does not. 2: Yes, Naive Bayes is generative because it captures $P(X|Y)$ and $P(Y)$. For example, if we know that $P(Y = English) = 0.7$ and $P(Y = French) = 0.3$, along with English and French word probabilities, then we can now generate a new document by first choosing the language of the document (English with probability 0.7, French with probability 0.3), and then generating words according to the chosen language's word probabilities. Yes, I guess you could make logistic regression generative in that fashion, but it's only because you're adding something to logistic regression that's not already there. That is, when you're performing a Naive Bayes classification, you're directly computing $P(Y|X) \propto P(X|Y) P(Y)$ (the terms on the right, $P(X|Y)$ and $P(Y)$, are what allow you to generate a new document); but when you're computing $P(Y|X)$ in logistic regression, you're not computing these two things, you're just applying a logistic function to a dot product. 3: Generative models often outperform discriminative models on smaller datasets because their generative assumptions place some structure on your model that prevent overfitting. For example, let's consider Naive Bayes vs. Logistic Regression. The Naive Bayes assumption is of course rarely satisfied, so logistic regression will tend to outperform Naive Bayes as your dataset grows (since it can capture dependencies that Naive Bayes can't). But when you only have a small data set, logistic regression might pick up on spurious patterns that don't really exist, so the Naive Bayes acts as a kind of regularizer on your model that prevents overfitting. There's a paper by Andrew Ng and Michael Jordan on discriminative vs. 
generative classifiers that talks about this more. 4: I think what it means is that generative models can actually learn the underlying structure of the data if you specify your model correctly and the model actually holds, but discriminative models can outperform in case your generative assumptions are not satisfied (since discriminative algorithms are less tied to a particular structure, and the real world is messy and assumptions are rarely perfectly satisfied anyways). (I'd probably just ignore these quotes if they're confusing.)
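The "pick a language, then emit words" story in point 2 can be written down directly; the following R snippet is only a toy sketch, and the word lists and word probabilities are made up for illustration.

set.seed(4)
p_lang <- c(English = 0.7, French = 0.3)                  # P(Y)
words  <- list(English = c("the", "cat", "sat"),
               French  = c("le", "chat", "assis"))
p_word <- list(English = c(0.5, 0.3, 0.2),                # P(X | Y = English)
               French  = c(0.5, 0.3, 0.2))                # P(X | Y = French)

# generate a new 5-word "document" from the generative model
lang <- sample(names(p_lang), 1, prob = p_lang)           # choose the language first
doc  <- sample(words[[lang]], 5, replace = TRUE, prob = p_word[[lang]])
lang; doc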
665
Generative vs. discriminative
As an additional point to the above answers: when the aim of the algorithm is only to classify, the discriminative approach may be better and less expensive than the generative approach, since following the generative approach to model the input distribution can require too much training data to capture complexities in the distribution that are unimportant for computing the posteriors required for decision making.
666
PCA on correlation or covariance?
You tend to use the covariance matrix when the variable scales are similar and the correlation matrix when variables are on different scales. Using the correlation matrix is equivalent to standardizing each of the variables (to mean 0 and standard deviation 1). In general, PCA with and without standardizing will give different results. Especially when the scales are different. As an example, take a look at this R heptathlon data set. Some of the variables have an average value of about 1.8 (the high jump), whereas other variables (run 800m) are around 120. library(HSAUR) heptathlon[,-8] # look at heptathlon data (excluding 'score' variable) This outputs: hurdles highjump shot run200m longjump javelin run800m Joyner-Kersee (USA) 12.69 1.86 15.80 22.56 7.27 45.66 128.51 John (GDR) 12.85 1.80 16.23 23.65 6.71 42.56 126.12 Behmer (GDR) 13.20 1.83 14.20 23.10 6.68 44.54 124.20 Sablovskaite (URS) 13.61 1.80 15.23 23.92 6.25 42.78 132.24 Choubenkova (URS) 13.51 1.74 14.76 23.93 6.32 47.46 127.90 ... Now let's do PCA on covariance and on correlation: # scale=T bases the PCA on the correlation matrix hep.PC.cor = prcomp(heptathlon[,-8], scale=TRUE) hep.PC.cov = prcomp(heptathlon[,-8], scale=FALSE) biplot(hep.PC.cov) biplot(hep.PC.cor) Notice that PCA on covariance is dominated by run800m and javelin: PC1 is almost equal to run800m (and explains $82\%$ of the variance) and PC2 is almost equal to javelin (together they explain $97\%$). PCA on correlation is much more informative and reveals some structure in the data and relationships between variables (but note that the explained variances drop to $64\%$ and $71\%$). Notice also that the outlying individuals (in this data set) are outliers regardless of whether the covariance or correlation matrix is used.
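A useful sanity check on the statement that using the correlation matrix is equivalent to standardizing each variable is simply to run it; continuing with the same heptathlon data, the lines below should give matching component standard deviations (individual loadings can at most differ by a sign flip).

library(HSAUR)
X <- heptathlon[, -8]

pc_cor <- prcomp(X, scale = TRUE)      # PCA on the correlation matrix
pc_std <- prcomp(scale(X))             # PCA on standardized (mean 0, sd 1) data

round(pc_cor$sdev - pc_std$sdev, 10)   # all zeros: identical component sds
summary(pc_cor)                        # proportion of variance per component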
667
PCA on correlation or covariance?
Bernard Flury, in his excellent book introducing multivariate analysis, described this as an anti-property of principal components. It's actually worse than choosing between correlation and covariance. If you changed the units (e.g. US-style gallons, inches etc. and EU-style litres, centimetres) you would get substantively different projections of the data. The argument against automatically using correlation matrices is that it is quite a brutal way of standardising your data. The problem with automatically using the covariance matrix, which is very apparent with that heptathlon data, is that the variables with the highest variance will dominate the first principal component (the variance maximising property). So the "best" method to use is based on a subjective choice, careful thought and some experience.
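The unit-change "anti-property" can be demonstrated in a few lines of R; this is a hedged sketch on the same heptathlon data, converting one event from metres to centimetres purely for illustration.

library(HSAUR)
X  <- heptathlon[, -8]
X2 <- X
X2$highjump <- X2$highjump * 100          # metres -> centimetres

# covariance-based PCA changes substantially with the choice of units ...
prcomp(X,  scale = FALSE)$rotation[, 1]
prcomp(X2, scale = FALSE)$rotation[, 1]

# ... while correlation-based PCA is unaffected by the rescaling
prcomp(X,  scale = TRUE)$rotation[, 1]
prcomp(X2, scale = TRUE)$rotation[, 1]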
668
PCA on correlation or covariance?
UNTRANSFORMED (RAW) DATA: If you have variables with widely varying scales for raw, untransformed data, that is, caloric intake per day, gene expression, ELISA/Luminex in units of ug/dl, ng/dl, based on several orders of magnitude of protein expression, then use correlation as an input to PCA. However, if all of your data are based on e.g. gene expression from the same platform with similar range and scale, or you are working with log equity asset returns, then using correlation will throw out a tremendous amount of information. You actually don't need to think about the difference of using the correlation matrix $\mathbf{R}$ or covariance matrix $\mathbf{C}$ as an input to PCA, but rather, look at the diagonal values of $\mathbf{C}$ and $\mathbf{R}$. You may observe a variance of $100$ for one variable, and $10$ on another -- which are on the diagonal of $\mathbf{C}$. But when looking at the correlations, the diagonal contains all ones, so the variance of each variable is essentially changed to $1$ as you use the $\mathbf{R}$ matrix. TRANSFORMED DATA: If the data have been transformed via normalization, percentiles, or mean-zero standardization (i.e., $Z$-scores), so that the range and scale of all the continuous variables is the same, then you could use the Covariance matrix $\mathbf{C}$ without any problems. (correlation will mean-zero standardize variables). Recall, however, that these transformations will not remove skewness (i.e., left or right tails in histograms) in your variables prior to running PCA. Typical PCA analysis does not involve removal of skewness; however, some readers may need to remove skewness to meet strict normality constraints. In summary, use the correlation matrix $\mathbf{R}$ when within-variable range and scale widely differs, and use the covariance matrix $\mathbf{C}$ to preserve variance if the range and scale of variables is similar or in the same units of measure. SKEWED VARIABLES: If any of the variables are skewed with left or right tails in their histograms, i.e., the Shapiro-Wilk or Lilliefors normality test is significant $(P<0.05)$, then there may be some issues if you need to apply the normality assumption. In this case, use the van der Waerden scores (transforms) determined from each variable. The van der Waerden (VDW) score for a single observation is merely the inverse cumulative (standard) normal mapping of the observation's percentile value. For example, say you have $n=100$ observations for a continuous variable, you can determine the VDW scores using: First, sort the values in ascending order, then assign ranks, so you would obtain ranks of $R_i=1,2,\ldots,100.$ Next, determine the percentile for each observation as $pct_i=R_i/(n+1)$. Once the percentile values are obtained, input them into the inverse mapping function for the CDF of the standard normal distribution, i.e., $N(0,1)$, to obtain the $Z$-score for each, using $Z_i=\Phi^{-1}(pct_i)$. For example, if you plug in a $pct_i$ value 0.025, you will get $-1.96=\Phi^{-1}(0.025)$. Same goes for a plugin value of $pct_i=0.975$, you'll get $1.96=\Phi^{-1}(0.975)$. Use of VDW scores is very popular in genetics, where many variables are transformed into VDW scores, and then input into analyses. The advantage of using VDW scores is that skewness and outlier effects are removed from the data, and can be used if the goal is to perform an analysis under the contraints of normality -- and every variable needs to be purely standard normal distributed with no skewness or outliers.
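The three VDW steps listed above translate directly into R; this is just a sketch on simulated skewed data, and the helper name vdw_scores is mine rather than a standard function.

vdw_scores <- function(x) {
  r   <- rank(x)               # step 1: ranks 1..n (ties get average ranks)
  pct <- r / (length(x) + 1)   # step 2: percentiles R_i / (n + 1)
  qnorm(pct)                   # step 3: inverse standard-normal mapping
}

set.seed(5)
x <- rexp(100)                 # a right-skewed variable
z <- vdw_scores(x)

hist(z)                        # approximately standard normal, no skewness or outliers
shapiro.test(z)                # should not reject normality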
669
PCA on correlation or covariance?
A common answer is to suggest that covariance is used when variables are on the same scale, and correlation when their scales are different. However, this is only true when the scale of the variables isn't a factor. Otherwise, why would anyone ever do covariance PCA? It would be safer to always perform correlation PCA. Imagine that your variables have different units of measure, such as meters and kilograms. It shouldn't matter whether you use meters or centimeters in this case, so you could argue that the correlation matrix should be used. Consider now the populations of people in different states. The units of measure are the same: counts (numbers) of people. The scales, however, can be very different: Vermont has about 600K people and California about 38M. Should we use the correlation matrix here? It depends. In some applications we do want to adjust for the size of the state. Using the covariance matrix is one way of building factors that account for the size of the state. Hence, my answer is to use the covariance matrix when the variance of the original variables is important, and to use correlation when it is not.
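As a quick illustration of the units point, here is a small R sketch (the height/weight data are simulated, not from the answer) showing that correlation-based PCA is unchanged when meters are converted to centimeters, whereas covariance-based PCA is not:

    set.seed(1)
    height_m  <- rnorm(200, mean = 1.7, sd = 0.1)        # meters
    weight_kg <- 40 * height_m + rnorm(200, sd = 5)      # kilograms

    X_m  <- cbind(height = height_m,       weight = weight_kg)
    X_cm <- cbind(height = height_m * 100, weight = weight_kg)

    # Correlation-based PCA (scale. = TRUE): identical loadings either way
    prcomp(X_m,  scale. = TRUE)$rotation
    prcomp(X_cm, scale. = TRUE)$rotation

    # Covariance-based PCA (scale. = FALSE): loadings change, because height
    # measured in centimeters now dominates the total variance
    prcomp(X_m,  scale. = FALSE)$rotation
    prcomp(X_cm, scale. = FALSE)$rotation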
670
PCA on correlation or covariance?
I personally find it very valuable to discuss these options in light of the maximum-likelihood principal component analysis model (MLPCA) [1,2]. In MLPCA one applies a scaling (or even a rotation) such that the measurement errors in the measured variables are independent and distributed according to the standard normal distribution. This scaling is also known as maximum likelihood scaling (MALS) [3]. In some cases, the PCA model and the parameters defining the MALS scaling/rotation can be estimated together [4]. To interpret correlation-based and covariance-based PCA, one can then argue that:

Covariance-based PCA is equivalent to MLPCA whenever the variance-covariance matrix of the measurement errors is assumed diagonal with equal elements on its diagonal. The measurement error variance parameter can then be estimated by applying the probabilistic principal component analysis (PPCA) model [5]. I find this a reasonable assumption in several cases I have studied, specifically when all measurements are of the same type of variable (e.g., all flows, all temperatures, all concentrations, or all absorbance measurements). Indeed, it can be safe to assume that the measurement errors for such variables are distributed independently and identically.

Correlation-based PCA is equivalent to MLPCA whenever the variance-covariance matrix of the measurement errors is assumed diagonal with each element on the diagonal proportional to the overall variance of the corresponding measured variable. While this is a popular method, I personally find the proportionality assumption unreasonable in most cases I study. As a consequence, I cannot interpret correlation-based PCA as an MLPCA model.

In the cases where (1) the implied assumptions of covariance-based PCA do not apply and (2) an MLPCA interpretation is valuable, I recommend using one of the MLPCA methods instead [1-4]. Correlation-based and covariance-based PCA will produce exactly the same results (apart from a scalar multiplier) when the individual variances of the variables are all exactly equal to each other. When these individual variances are similar but not the same, both methods will produce similar results. As stressed above, the ultimate choice depends on the assumptions you are making. In addition, the utility of any particular model also depends on the context and purpose of your analysis. To quote George E. P. Box: "All models are wrong, but some are useful."

[1] Wentzell, P. D., Andrews, D. T., Hamilton, D. C., Faber, K., & Kowalski, B. R. (1997). Maximum likelihood principal component analysis. Journal of Chemometrics, 11(4), 339-366.
[2] Wentzell, P. D., & Lohnes, M. T. (1999). Maximum likelihood principal component analysis with correlated measurement errors: theoretical and practical considerations. Chemometrics and Intelligent Laboratory Systems, 45(1-2), 65-85.
[3] Hoefsloot, H. C., Verouden, M. P., Westerhuis, J. A., & Smilde, A. K. (2006). Maximum likelihood scaling (MALS). Journal of Chemometrics, 20(3-4), 120-127.
[4] Narasimhan, S., & Shah, S. L. (2008). Model identification and error covariance matrix estimation from noisy data using PCA. Control Engineering Practice, 16(1), 146-155.
[5] Tipping, M. E., & Bishop, C. M. (1999). Probabilistic principal component analysis. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 61(3), 611-622.
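To make the scaling idea tangible, here is a stripped-down R sketch of the special case described above, in which the error covariance is assumed diagonal and known; the error_sd vector is a made-up assumption for illustration, and this is only the scaling step, not the full MLPCA algorithm of [1]:

    set.seed(2)
    # Simulated data: three noisy measurements driven by one underlying factor
    f <- rnorm(100)
    X <- cbind(x1 = 2*f + rnorm(100, sd = 0.5),
               x2 =   f + rnorm(100, sd = 0.1),
               x3 = 5*f + rnorm(100, sd = 1.0))

    # Assumed (known) measurement-error standard deviations, one per column
    error_sd <- c(0.5, 0.1, 1.0)

    # Maximum-likelihood-style scaling: divide each column by its error sd so
    # the errors become (approximately) iid standard normal, then run PCA
    X_mals   <- sweep(X, 2, error_sd, "/")
    pca_mals <- prcomp(X_mals, center = TRUE, scale. = FALSE)
    summary(pca_mals)

In this framing, covariance-based PCA corresponds to error_sd being constant across columns, and correlation-based PCA to error_sd being proportional to each column's overall standard deviation.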
671
PCA on correlation or covariance?
Straight and simple: if the scales are similar, use cov-PCA; if not, use corr-PCA; otherwise, you had better have a defense for not doing so. If in doubt, use an F-test for the equality of the variances (ANOVA). If the F-test rejects equality of the variances, use corr; otherwise, use cov.
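A minimal R sketch of that decision rule (the data frame dat and its columns are hypothetical; var.test gives the classical F-test for two variances, and Bartlett's test is one option when there are more than two variables):

    set.seed(3)
    dat <- data.frame(a = rnorm(50, sd = 1), b = rnorm(50, sd = 4))

    ft <- var.test(dat$a, dat$b)     # F-test for equality of two variances
    ft$p.value
    # bartlett.test(as.list(dat))    # alternative when there are several variables

    if (ft$p.value < 0.05) {
      pca <- prcomp(dat, scale. = TRUE)    # variances differ: corr-PCA
    } else {
      pca <- prcomp(dat, scale. = FALSE)   # variances similar: cov-PCA
    }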
672
PCA on correlation or covariance?
The arguments based on scale (for variables expressed in the same physical units) seem rather weak. Imagine a set of (dimensionless) variables whose standard deviations vary between 0.001 and 0.1. Compared to a standardized value of 1, these both seem to be 'small' and comparable levels of fluctuation. However, when you express them in decibels, this gives a range of -60 dB against -20 dB and 0 dB, respectively. This would probably then be classified as a 'large range', especially if you were to include a standard deviation close to 0, i.e., minus infinity dB. My suggestion would be to do BOTH a correlation-based and a covariance-based PCA. If the two give the same (or very similar, whatever that may mean) PCs, then you can be reassured that you have an answer that is meaningful. If they give widely different PCs, don't use PCA, because two different answers to one problem is not a sensible way to solve questions.
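A small R sketch of the 'do both and compare' suggestion (X here is a simulated placeholder for your data matrix):

    set.seed(4)
    X <- matrix(rnorm(200 * 4), ncol = 4) %*% diag(c(1, 2, 5, 10))

    pca_cov <- prcomp(X, scale. = FALSE)   # covariance-based PCA
    pca_cor <- prcomp(X, scale. = TRUE)    # correlation-based PCA

    # Correlate the component scores from the two analyses; absolute values
    # near 1 for the leading components indicate the two PCAs agree
    round(abs(cor(pca_cov$x[, 1:2], pca_cor$x[, 1:2])), 2)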
673
How exactly does one “control for other variables”?
There are many ways to control for variables. The easiest, and the one you came up with, is to stratify your data so you have sub-groups with similar characteristics; there are then methods to pool those results together to get a single "answer". This works if you have a very small number of variables you want to control for, but as you've rightly discovered, it rapidly falls apart as you split your data into smaller and smaller chunks.

A more common approach is to include the variables you want to control for in a regression model. For example, if you have a regression model that can be conceptually described as:

    BMI = Impatience + Race + Gender + Socioeconomic Status + IQ

the estimate you will get for Impatience will be the effect of Impatience within levels of the other covariates. Regression allows you to essentially smooth over places where you don't have much data (the problem with the stratification approach), though this should be done with caution. There are yet more sophisticated ways of controlling for other variables, but odds are that when someone says "controlled for other variables", they mean the variables were included in a regression model.

Alright, you've asked for an example you can work through to see how this goes. I'll walk you through it step by step. All you need is a copy of R installed. First, we need some data. Cut and paste the following chunk of code into R. Keep in mind this is a contrived example I made up on the spot, but it shows the process.

    covariate <- sample(0:1, 100, replace=TRUE)
    exposure  <- runif(100,0,1)+(0.3*covariate)
    outcome   <- 2.0+(0.5*exposure)+(0.25*covariate)

That's your data. Note that we already know the relationship between the outcome, the exposure, and the covariate; that's the point of many simulation studies (of which this is an extremely basic example). You start with a structure you know, and you make sure your method can get you the right answer. Now then, onto the regression model. Type the following:

    lm(outcome~exposure)

Did you get an Intercept = 2.0 and an exposure = 0.6766? Or something close to it, given there will be some random variation in the data? Good - this answer is wrong. We know it's wrong. Why is it wrong? We have failed to control for a variable that affects both the outcome and the exposure. It's a binary variable; make it anything you please - gender, smoker/non-smoker, etc. Now run this model:

    lm(outcome~exposure+covariate)

This time you should get coefficients of Intercept = 2.00, exposure = 0.50, and covariate = 0.25. This, as we know, is the right answer. You've controlled for other variables. Now, what happens when we don't know whether we've taken care of all of the variables that we need to (we never really do)? This is called residual confounding, and it's a concern in most observational studies: we have controlled imperfectly, and our answer, while close to right, isn't exact. Does that help more?
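One detail worth flagging: the outcome above is generated without an error term, which is why the adjusted model recovers 0.50 and 0.25 exactly. A slightly more realistic variant (my addition, not part of the original example) adds noise, so the adjusted estimates are close to, but not exactly, the true values:

    set.seed(123)
    covariate <- sample(0:1, 100, replace = TRUE)
    exposure  <- runif(100, 0, 1) + 0.3 * covariate
    outcome   <- 2.0 + 0.5 * exposure + 0.25 * covariate + rnorm(100, sd = 0.1)

    coef(lm(outcome ~ exposure))               # biased: the covariate is omitted
    coef(lm(outcome ~ exposure + covariate))   # roughly 2.0, 0.5, and 0.25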
674
How exactly does one “control for other variables”?
Introduction

I like @EpiGrad's answer (+1), but let me take a different perspective. In the following I am referring to this PDF document: "Multiple Regression Analysis: Estimation", which has a section on "A 'Partialling Out' Interpretation of Multiple Regression" (p. 83f.). Unfortunately, I have no idea who the author of this chapter is, so I will refer to it as REGCHAPTER. A similar explanation can be found in Kohler/Kreuter (2009), "Data Analysis Using Stata", chapter 8.2.3, "What does 'under control' mean?".

I will use @EpiGrad's example to explain this approach. R code and results can be found in the Appendix. It should also be noted that "controlling for other variables" only makes sense when the explanatory variables are moderately correlated (collinearity). In the aforementioned example, the product-moment correlation between exposure and covariate is 0.50, i.e.,

    > cor(covariate, exposure)
    [1] 0.5036915

Residuals

I assume that you have a basic understanding of the concept of residuals in regression analysis. Here is the Wikipedia explanation: "If one runs a regression on some data, then the deviations of the dependent variable observations from the fitted function are the residuals".

What does 'under control' mean?

Controlling for the variable covariate, the effect (regression weight) of exposure on outcome can be described as follows (I am sloppy and skip most indices and all hats; please refer to the above-mentioned text for a precise description): $\newcommand{\resid}{{\rm resid}}\newcommand{\covariate}{{\rm covariate}}$ $$\beta_1=\frac{\sum \resid_{i1} \cdot y_i}{\sum \resid^2_{i1}}$$ $\resid_{i1}$ are the residuals when we regress exposure on covariate, i.e., $${\rm exposure} = {\rm const.} + \beta_{\covariate} \cdot \covariate + \resid$$ The "residuals [..] are the part of $x_{i1}$ that is uncorrelated with $x_{i2}$. [...] Thus, $\hat{\beta}_1$ measures the sample relationship between $y$ and $x_1$ after $x_2$ has been partialled out" (REGCHAPTER 84). "Partialled out" means "controlled for".

I will demonstrate this idea using @EpiGrad's example data. First, I will regress exposure on covariate. Since I am only interested in the residuals lmEC.resid, I omit the output.

    summary(lmEC <- lm(exposure ~ covariate))
    lmEC.resid <- residuals(lmEC)

The next step is to regress outcome on these residuals (lmEC.resid):

    [output omitted]
    Coefficients:
                Estimate Std. Error t value Pr(>|t|)    
    (Intercept)  2.45074    0.02058 119.095  < 2e-16 ***
    lmEC.resid   0.50000    0.07612   6.569 2.45e-09 ***
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
    [output omitted]

As you can see, the regression weight for lmEC.resid (see column Estimate, $\beta_{lmEC.resid}=0.50$) in this simple regression is equal to the multiple regression weight for exposure, which also is $0.50$ (see @EpiGrad's answer or the R output below).
Appendix

R Code

    set.seed(1)
    covariate <- sample(0:1, 100, replace=TRUE)
    exposure  <- runif(100,0,1)+(0.3*covariate)
    outcome   <- 2.0+(0.5*exposure)+(0.25*covariate)

    ## Simple regression analysis
    summary(lm(outcome ~ exposure))

    ## Multiple regression analysis
    summary(lm(outcome ~ exposure + covariate))

    ## Correlation between covariate and exposure
    cor(covariate, exposure)

    ## "Partialling-out" approach
    ## Regress exposure on covariate
    summary(lmEC <- lm(exposure ~ covariate))
    ## Save residuals
    lmEC.resid <- residuals(lmEC)
    ## Regress outcome on residuals
    summary(lm(outcome ~ lmEC.resid))

    ## Check formula
    sum(lmEC.resid*outcome)/(sum(lmEC.resid^2))

R Output

    > set.seed(1)
    > covariate <- sample(0:1, 100, replace=TRUE)
    > exposure <- runif(100,0,1)+(0.3*covariate)
    > outcome <- 2.0+(0.5*exposure)+(0.25*covariate)
    >
    > ## Simple regression analysis
    > summary(lm(outcome ~ exposure))

    Call:
    lm(formula = outcome ~ exposure)

    Residuals:
          Min        1Q    Median        3Q       Max
    -0.183265 -0.090531  0.001628  0.085434  0.187535

    Coefficients:
                Estimate Std. Error t value Pr(>|t|)    
    (Intercept)  1.98702    0.02549   77.96   <2e-16 ***
    exposure     0.70103    0.03483   20.13   <2e-16 ***
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

    Residual standard error: 0.109 on 98 degrees of freedom
    Multiple R-squared: 0.8052,  Adjusted R-squared: 0.8032
    F-statistic: 405.1 on 1 and 98 DF,  p-value: < 2.2e-16

    >
    > ## Multiple regression analysis
    > summary(lm(outcome ~ exposure + covariate))

    Call:
    lm(formula = outcome ~ exposure + covariate)

    Residuals:
           Min         1Q     Median         3Q        Max
    -7.765e-16 -7.450e-18  4.630e-18  1.553e-17  4.895e-16

    Coefficients:
                 Estimate Std. Error   t value Pr(>|t|)    
    (Intercept) 2.000e+00  2.221e-17 9.006e+16   <2e-16 ***
    exposure    5.000e-01  3.508e-17 1.425e+16   <2e-16 ***
    covariate   2.500e-01  2.198e-17 1.138e+16   <2e-16 ***
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

    Residual standard error: 9.485e-17 on 97 degrees of freedom
    Multiple R-squared: 1,  Adjusted R-squared: 1
    F-statistic: 3.322e+32 on 2 and 97 DF,  p-value: < 2.2e-16

    >
    > ## Correlation between covariate and exposure
    > cor(covariate, exposure)
    [1] 0.5036915
    >
    > ## "Partialling-out" approach
    > ## Regress exposure on covariate
    > summary(lmEC <- lm(exposure ~ covariate))

    Call:
    lm(formula = exposure ~ covariate)

    Residuals:
         Min       1Q   Median       3Q      Max
    -0.49695 -0.24113  0.00857  0.21629  0.46715

    Coefficients:
                Estimate Std. Error t value Pr(>|t|)    
    (Intercept)  0.51003    0.03787  13.468  < 2e-16 ***
    covariate    0.31550    0.05466   5.772  9.2e-08 ***
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

    Residual standard error: 0.2731 on 98 degrees of freedom
    Multiple R-squared: 0.2537,  Adjusted R-squared: 0.2461
    F-statistic: 33.32 on 1 and 98 DF,  p-value: 9.198e-08

    > ## Save residuals
    > lmEC.resid <- residuals(lmEC)
    > ## Regress outcome on residuals
    > summary(lm(outcome ~ lmEC.resid))

    Call:
    lm(formula = outcome ~ lmEC.resid)

    Residuals:
        Min      1Q  Median      3Q     Max
    -0.1957 -0.1957 -0.1957  0.2120  0.2120

    Coefficients:
                Estimate Std. Error t value Pr(>|t|)    
    (Intercept)  2.45074    0.02058 119.095  < 2e-16 ***
    lmEC.resid   0.50000    0.07612   6.569 2.45e-09 ***
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

    Residual standard error: 0.2058 on 98 degrees of freedom
    Multiple R-squared: 0.3057,  Adjusted R-squared: 0.2986
    F-statistic: 43.15 on 1 and 98 DF,  p-value: 2.45e-09

    >
    > ## Check formula
    > sum(lmEC.resid*outcome)/(sum(lmEC.resid^2))
    [1] 0.5
    >
675
How exactly does one “control for other variables”?
Of course some math will be involved, but it's not much: Euclid would have understood it well. All you really need to know is how to add and rescale vectors. Although this goes by the name of "linear algebra" nowadays, you only need to visualize it in two dimensions. This enables us to avoid the matrix machinery of linear algebra and focus on the concepts. A Geometric Story In the first figure, $y$ is the sum of $y_{\cdot 1}$ and $\alpha x_1$. (A vector $x_1$ scaled by a numeric factor $\alpha$; Greek letters $\alpha$ (alpha), $\beta$ (beta), and $\gamma$ (gamma) will refer to such numerical scale factors.) This figure actually began with the original vectors (shown as solid lines) $x_1$ and $y$. The least-squares "match" of $y$ to $x_1$ is found by taking the multiple of $x_1$ that comes closest to $y$ in the plane of the figure. That's how $\alpha$ was found. Taking this match away from $y$ left $y_{\cdot 1}$, the residual of $y$ with respect to $x_1$. ( The dot "$\cdot$" will consistently indicate which vectors have been "matched," "taken out," or "controlled for.") We can match other vectors to $x_1$. Here is a picture where $x_2$ was matched to $x_1$, expressing it as a multiple $\beta$ of $x_1$ plus its residual $x_{2\cdot 1}$: (It does not matter that the plane containing $x_1$ and $x_2$ could differ from the plane containing $x_1$ and $y$: these two figures are obtained independently of each other. All they are guaranteed to have in common is the vector $x_1$.) Similarly, any number of vectors $x_3, x_4, \ldots$ can be matched to $x_1$. Now consider the plane containing the two residuals $y_{\cdot 1}$ and $x_{2 \cdot 1}$. I will orient the picture to make $x_{2\cdot 1}$ horizontal, just as I oriented the previous pictures to make $x_1$ horizontal, because this time $x_{2\cdot 1}$ will play the role of matcher: Observe that in each of the three cases, the residual is perpendicular to the match. (If it were not, we could adjust the match to get it even closer to $y$, $x_2$, or $y_{\cdot 1}$.) The key idea is that by the time we get to the last figure, both vectors involved ($x_{2\cdot 1}$ and $y_{\cdot 1}$) are already perpendicular to $x_1$, by construction. Thus any subsequent adjustment to $y_{\cdot 1}$ involves changes that are all perpendicular to $x_1$. As a result, the new match $\gamma x_{2\cdot 1}$ and the new residual $y_{\cdot 12}$ remain perpendicular to $x_1$. (If other vectors are involved, we would proceed in the same way to match their residuals $x_{3\cdot 1}, x_{4\cdot 1}, \ldots$ to $x_2$.) There is one more important point to make. This construction has produced a residual $y_{\cdot 12}$ which is perpendicular to both $x_1$ and $x_2$. This means that $y_{\cdot 12}$ is also the residual in the space (three-dimensional Euclidean realm) spanned by $x_1, x_2,$ and $y$. That is, this two-step process of matching and taking residuals must have found the location in the $x_1, x_2$ plane that is closest to $y$. Since in this geometric description it does not matter which of $x_1$ and $x_2$ came first, we conclude that if the process had been done in the other order, starting with $x_2$ as the matcher and then using $x_1$, the result would have been the same. (If there are additional vectors, we would continue this "take out a matcher" process until each of those vectors had had its turn to be the matcher. In every case the operations would be the same as shown here and would always occur in a plane.) 
Application to Multiple Regression This geometric process has a direct multiple regression interpretation, because columns of numbers act exactly like geometric vectors. They have all the properties we require of vectors (axiomatically) and therefore can be thought of and manipulated in the same way with perfect mathematical accuracy and rigor. In a multiple regression setting with variables $X_1$, $X_2, \ldots$, and $Y$, the objective is to find a combination of $X_1$ and $X_2$ (etc) that comes closest to $Y$. Geometrically, all such combinations of $X_1$ and $X_2$ (etc) correspond to points in the $X_1, X_2, \ldots$ space. Fitting multiple regression coefficients is nothing more than projecting ("matching") vectors. The geometric argument has shown that Matching can be done sequentially and The order in which matching is done does not matter. The process of "taking out" a matcher by replacing all other vectors by their residuals is often referred to as "controlling" for the matcher. As we saw in the figures, once a matcher has been controlled for, all subsequent calculations make adjustments that are perpendicular to that matcher. If you like, you may think of "controlling" as "accounting (in the least square sense) for the contribution/influence/effect/association of a matcher on all the other variables." References You can see all this in action with data and working code in the answer at https://stats.stackexchange.com/a/46508. That answer might appeal more to people who prefer arithmetic over plane pictures. (The arithmetic to adjust the coefficients as matchers are sequentially brought in is straightforward nonetheless.) The language of matching is from Fred Mosteller and John Tukey.
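For readers who want to verify the 'order does not matter' claim numerically, here is a short R sketch (the data are simulated in the spirit of the earlier answers, and the helper r() is my own shorthand rather than anything from the text):

    set.seed(1)
    x1 <- sample(0:1, 100, replace = TRUE)
    x2 <- runif(100) + 0.3 * x1
    y  <- 2 + 0.5 * x2 + 0.25 * x1 + rnorm(100, sd = 0.1)

    r <- function(a, b) residuals(lm(a ~ b))   # "match b and keep the residual"

    y_12 <- r(r(y, x1), r(x2, x1))   # control for x1 first, then x2
    y_21 <- r(r(y, x2), r(x1, x2))   # control for x2 first, then x1

    all.equal(unname(y_12), unname(y_21))                        # same residual
    all.equal(unname(y_12), unname(residuals(lm(y ~ x1 + x2))))  # = full-model residual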
676
How exactly does one “control for other variables”?
There is an excellent discussion so far of covariate adjustment as a means of "controlling for other variables", but I think that is only part of the story. In fact, there are many other design-, model-, and machine-learning-based strategies to address the impact of a number of possible confounding variables. This is a brief survey of some of the most important (non-adjustment) topics. While adjustment is the most widely used means of "controlling" for other variables, I think a good statistician should have an understanding of what it does (and doesn't do) in the context of other processes and procedures.

Matching: Matching is a method of designing a paired analysis where observations are grouped into sets of two who are otherwise similar in their most important aspects. For instance, you might sample two individuals who are concordant in their education, income, professional tenure, age, marital status, etc., but who are discordant in terms of their impatience. For binary exposures, a simple paired t-test suffices to test for a mean difference in their BMI, controlling for all the matching features. If you are modeling a continuous exposure, an analogous measure would be a regression model through the origin for the differences (see Carlin 2005): $$E[Y_1 - Y_2] = \beta_0 (X_1 - X_2)$$

Weighting: Weighting is yet another univariate analysis which models the association between a continuous or binary predictor $X$ and an outcome $Y$ so that the distribution of exposure levels is homogeneous between groups. These results are typically reported as standardized measures, such as age-standardized mortality for two countries or several hospitals. Indirect standardization calculates an expected outcome distribution from the rates obtained in a "control" or "healthy" population, projected onto the distribution of strata in the referent population. Direct standardization goes the other way. These methods are typically used for a binary outcome. Propensity score weighting accounts for the probability of a binary exposure and controls for those variables in that regard. It is similar to direct standardization for an exposure. See Rothman, Modern Epidemiology, 3rd edition.

Randomization and quasirandomization: It's a subtle point, but if you are actually able to randomize people to a certain experimental condition, then the impact of other variables is mitigated. It's a remarkably stronger condition, because you do not even need to know what those other variables are. In that sense, you have "controlled" for their influence. This is not possible in observational research, but it turns out that propensity score methods create a simple probabilistic measure for exposure that allows one to weight, adjust, or match participants so that they can be analyzed in the same fashion as a quasi-randomized study. See Rosenbaum and Rubin 1983.

Microsimulation: Another way of simulating data that might have been obtained from a randomized study is to perform microsimulation. Here, one can actually turn one's attention to larger and more sophisticated, machine-learning-like models. A term which Judea Pearl has coined, and that I like, is "Oracle Models": complex networks which are capable of generating predictions and forecasts for a number of features and outcomes.
It turns out one can "fold down" the information in such an oracle model to simulate outcomes in a balanced cohort of people who represent a randomized cohort, balanced in their "control variable" distribution, and then use simple t-test routines to assess the magnitude and precision of possible differences. See Rutter, Zaslavsky, and Feuer 2012. Matching, weighting, and covariate adjustment in a regression model all estimate the same associations, and thus all can be claimed to be ways of "controlling" for other variables.
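To make the matching idea above concrete, here is a minimal R sketch of the paired analysis for a binary exposure (the matched-pairs data are simulated purely for illustration):

    set.seed(5)
    n_pairs <- 50
    # BMI of the unexposed and the exposed member of each matched pair
    bmi_unexposed <- rnorm(n_pairs, mean = 25, sd = 3)
    bmi_exposed   <- bmi_unexposed + rnorm(n_pairs, mean = 1, sd = 1)

    # Paired t-test: the matching controls for everything the pairs share
    t.test(bmi_exposed, bmi_unexposed, paired = TRUE)

    # Continuous-exposure analogue: regression through the origin on the
    # within-pair differences, as in the displayed formula above
    d_exposure <- rnorm(n_pairs)
    d_bmi      <- 0.8 * d_exposure + rnorm(n_pairs)
    lm(d_bmi ~ d_exposure - 1)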
677
How exactly does one “control for other variables”?
The software doesn't literally control for variables. If you're familiar with matrix notation of regression $Y=X\beta+\varepsilon$, then you may remember that least squares solution is $b=(X^TX)^{-1}X^TY$. So, the software evaluates this expression numerically using computational linear algebra methods.
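As a rough sketch of what that looks like in practice (illustrative R only; production routines such as R's lm() use a QR decomposition rather than forming the inverse explicitly, which is numerically more stable):

    set.seed(6)
    n  <- 100
    x1 <- rnorm(n); x2 <- rnorm(n)
    y  <- 1 + 2 * x1 - 0.5 * x2 + rnorm(n)

    X <- cbind(1, x1, x2)    # design matrix with an intercept column

    b_normal <- solve(crossprod(X), crossprod(X, y))   # (X'X)^{-1} X'y as a linear system
    b_qr     <- qr.solve(X, y)                         # QR-based least squares

    cbind(b_normal, b_qr, lm = coef(lm(y ~ x1 + x2)))  # all three agree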
678
When should I use lasso vs ridge?
Keep in mind that ridge regression can't zero out coefficients; thus, you either end up including all the coefficients in the model, or none of them. In contrast, the LASSO does both parameter shrinkage and variable selection automatically. If some of your covariates are highly correlated, you may want to look at the Elastic Net [3] instead of the LASSO.

I'd personally recommend using the Non-Negative Garrote (NNG) [1], as it's consistent in terms of estimation and variable selection [2]. Unlike LASSO and ridge regression, NNG requires an initial estimate that is then shrunk towards the origin. In the original paper, Breiman recommends the least-squares solution for the initial estimate (you may, however, want to start the search from a ridge regression solution and use something like GCV to select the penalty parameter). In terms of available software, I've implemented the original NNG in MATLAB (based on Breiman's original FORTRAN code). You can download it from: http://www.emakalic.org/blog/wp-content/uploads/2010/04/nngarotte.zip

BTW, if you prefer a Bayesian solution, check out [4,5].

References:
[1] Breiman, L. (1995). Better Subset Regression Using the Nonnegative Garrote. Technometrics, 37, 373-384.
[2] Yuan, M. & Lin, Y. (2007). On the non-negative garrotte estimator. Journal of the Royal Statistical Society (Series B), 69, 143-161.
[3] Zou, H. & Hastie, T. (2005). Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society (Series B), 67, 301-320.
[4] Park, T. & Casella, G. (2008). The Bayesian Lasso. Journal of the American Statistical Association, 103, 681-686.
[5] Kyung, M., Gill, J., Ghosh, M. & Casella, G. (2010). Penalized Regression, Standard Errors, and Bayesian Lassos. Bayesian Analysis, 5, 369-412.
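The linked code is MATLAB; as a rough R sketch of the non-negative garrote idea (a least-squares fit first, then non-negative shrinkage factors for a chosen penalty lambda found by quadratic programming with the quadprog package), written from the published criterion rather than from that implementation, and assuming a full-rank design with n > p:

    library(quadprog)

    set.seed(7)
    n <- 200; p <- 5
    X <- matrix(rnorm(n * p), n, p)
    y <- drop(X %*% c(3, 1.5, 0, 0, 2) + rnorm(n))

    beta_ols <- coef(lm(y ~ X - 1))        # step 1: initial least-squares estimate

    # Step 2: minimize ||y - Z c||^2 + lambda * sum(c) subject to c >= 0,
    # where Z_j = X_j * beta_ols_j; the garrote estimate is c_j * beta_ols_j
    nng <- function(lambda) {
      Z   <- sweep(X, 2, beta_ols, "*")
      sol <- solve.QP(Dmat = 2 * crossprod(Z),
                      dvec = drop(2 * crossprod(Z, y)) - lambda,
                      Amat = diag(p), bvec = rep(0, p))
      pmax(sol$solution, 0) * beta_ols     # clip tiny negative round-off
    }

    round(cbind(ols = beta_ols, nng_5 = nng(5), nng_50 = nng(50)), 3)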
679
When should I use lasso vs ridge?
Ridge and lasso are forms of regularized linear regression. The regularization can also be interpreted as a prior in a maximum a posteriori (MAP) estimation method. Under this interpretation, ridge and lasso make different assumptions about the class of linear transformations they infer to relate input and output data: in ridge, the coefficients of the linear transformation are normally distributed, while in lasso they are Laplace distributed. In lasso, this makes it easier for coefficients to be exactly zero, and therefore easier to eliminate some of your input variables as not contributing to the output. There are also some practical considerations. Ridge is a bit easier to implement and faster to compute, which may matter depending on the type of data you have. If you have both implemented, use subsets of your data to fit the ridge and the lasso and compare how well they work on the left-out data. The errors should give you an idea of which to use.
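The last paragraph's advice (fit both on part of the data, then compare on the left-out part) might look roughly like the following sketch; glmnet, the simulated data and the 70/30 split are my own illustrative choices rather than anything prescribed in the answer.

# Fit ridge and lasso on a training subset and compare their
# prediction error on the held-out observations.
library(glmnet)

set.seed(2)
n <- 200; p <- 30
X <- matrix(rnorm(n * p), n, p)
y <- X[, 1] - 2 * X[, 2] + rnorm(n)        # a sparse truth, for illustration

train <- sample(n, 0.7 * n)
fit_ridge <- cv.glmnet(X[train, ], y[train], alpha = 0)
fit_lasso <- cv.glmnet(X[train, ], y[train], alpha = 1)

heldout_mse <- function(fit)
  mean((y[-train] - predict(fit, X[-train, ], s = "lambda.min"))^2)

c(ridge = heldout_mse(fit_ridge), lasso = heldout_mse(fit_lasso))

Whichever penalty gives the smaller held-out error is the natural choice, as the answer suggests.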
680
When should I use lasso vs ridge?
Generally, when you have many small/medium sized effects you should go with ridge. If you have only a few variables with a medium/large effect, go with lasso. (Hastie, Tibshirani, Friedman)
681
How to deal with perfect separation in logistic regression?
A solution to this is to use a form of penalized regression. In fact, this is the original reason some of the penalized regression methods were developed (although they turned out to have other interesting properties as well). Install and load the package glmnet in R and you're mostly ready to go. One of the less user-friendly aspects of glmnet is that you can only feed it matrices, not formulas as we're used to. However, you can look at model.matrix and the like to construct this matrix from a data.frame and a formula... Now, when you expect that this perfect separation is not just a byproduct of your sample, but could be true in the population, you specifically don't want to handle this: use this separating variable simply as the sole predictor for your outcome, not employing a model of any kind.
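As a rough illustration of the route described above (model.matrix to build the design matrix, then a penalized fit with glmnet), here is a sketch; the toy data frame, the choice of a ridge penalty (alpha = 0) and the use of cross-validation for lambda are my own assumptions.

# Penalized logistic regression when one predictor separates the outcome
# perfectly; model.matrix turns a formula + data.frame into the matrix
# that glmnet expects.
library(glmnet)

set.seed(3)
d <- data.frame(x1 = rnorm(40),
                x2 = sort(rnorm(40)),
                y  = rep(0:1, each = 20))          # x2 separates y perfectly

X <- model.matrix(y ~ x1 + x2, data = d)[, -1]     # drop the intercept column
fit <- cv.glmnet(X, d$y, family = "binomial", alpha = 0, nfolds = 5)

coef(fit, s = "lambda.min")    # coefficients stay finite despite the separation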
682
How to deal with perfect separation in logistic regression?
You've several options:

1. Remove some of the bias.
(a) By penalizing the likelihood as per @Nick's suggestion. Package logistf in R or the FIRTH option in SAS's PROC LOGISTIC implement the method proposed in Firth (1993), "Bias reduction of maximum likelihood estimates", Biometrika, 80, 1, which removes the first-order bias from maximum likelihood estimates. (Here @Gavin recommends the brglm package, which I'm not familiar with, but I gather it implements a similar approach for non-canonical link functions, e.g. probit.) A minimal logistf sketch is given after this list.
(b) By using median-unbiased estimates in exact conditional logistic regression. Package elrm or logistiX in R, or the EXACT statement in SAS's PROC LOGISTIC.

2. Exclude cases where the predictor category or value causing separation occurs. These may well be outside your scope, or worthy of further, focused investigation. (The R package safeBinaryRegression is handy for finding them.)

3. Re-cast the model. Typically this is something you'd have done beforehand if you'd thought about it, because it's too complex for your sample size.
(a) Remove the predictor from the model. Dicey, for the reasons given by @Simon: "You're removing the predictor that best explains the response".
(b) By collapsing predictor categories / binning the predictor values. Only if this makes sense.
(c) Re-expressing the predictor as two (or more) crossed factors without interaction. Only if this makes sense.

4. Use a Bayesian analysis as per @Manoel's suggestion. Though it seems unlikely you'd want to just because of separation, it's worth considering on its other merits. The paper he recommends is Gelman et al (2008), "A weakly informative default prior distribution for logistic & other regression models", Ann. Appl. Stat., 2, 4: the default in question is an independent Cauchy prior for each coefficient, with a mean of zero & a scale of $\frac{5}{2}$; to be used after standardizing all continuous predictors to have a mean of zero & a standard deviation of $\frac{1}{2}$. If you can elucidate strongly informative priors, so much the better.

5. Do nothing. (But calculate confidence intervals based on profile likelihoods, as the Wald estimates of standard error will be badly wrong.) An often over-looked option. If the purpose of the model is just to describe what you've learnt about the relationships between predictors & response, there's no shame in quoting a confidence interval for an odds ratio of, say, 2.3 upwards. (Indeed it could seem fishy to quote confidence intervals based on unbiased estimates that exclude the odds ratios best supported by the data.) Problems come when you're trying to predict using point estimates, & the predictor on which separation occurs swamps the others.

6. Use a hidden logistic regression model, as described in Rousseeuw & Christmann (2003), "Robustness against separation and outliers in logistic regression", Computational Statistics & Data Analysis, 43, 3, and implemented in the R package hlr. (@user603 suggests this.) I haven't read the paper, but they say in the abstract "a slightly more general model is proposed under which the observed response is strongly related but not equal to the unobservable true response", which suggests to me it mightn't be a good idea to use the method unless that sounds plausible.

7. "Change a few randomly selected observations from 1 to 0 or 0 to 1 among variables exhibiting complete separation": @RobertF's comment. This suggestion seems to arise from regarding separation as a problem per se rather than as a symptom of a paucity of information in the data, which might lead you to prefer other methods to maximum-likelihood estimation, or to limit inferences to those you can make with reasonable precision; these approaches have their own merits & are not just "fixes" for separation. (Aside from its being unabashedly ad hoc, it's unpalatable to most that analysts asking the same question of the same data, making the same assumptions, should give different answers owing to the result of a coin toss or whatever.)
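Following up on option 1(a), here is a minimal sketch of Firth's penalized likelihood with the logistf package; the toy data and the plain default call are illustrative assumptions on my part, not a recommendation of particular settings.

# Firth-type bias-reduced logistic regression on completely separated data:
# the coefficient estimate and its profile-penalized-likelihood confidence
# interval remain finite where ordinary glm() would run off to infinity.
library(logistf)

set.seed(4)
d <- data.frame(x = sort(rnorm(30)),
                y = rep(0:1, each = 15))   # x separates y perfectly

fit <- logistf(y ~ x, data = d)
summary(fit)                               # finite estimate, finite CI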
683
How to deal with perfect separation in logistic regression?
This is an expansion of Scortchi's and Manoel's answers, but since you seem to use R I thought I'd supply some code. :) I believe the easiest and most straightforward solution to your problem is to use a Bayesian analysis with the weakly informative prior proposed by Gelman et al (2008). As Scortchi mentions, Gelman recommends putting a Cauchy prior with median 0.0 and scale 2.5 on each coefficient (after rescaling the inputs to have mean 0.0 and SD 0.5). This will regularize the coefficients and pull them just slightly towards zero. In this case it is exactly what you want. Due to having very wide tails, the Cauchy still allows for large coefficients (as opposed to the short-tailed normal). How to run this analysis? Use the bayesglm function in the arm package, which implements it!

library(arm)

set.seed(123456)
# Faking some data where x1 is unrelated to y
# while x2 perfectly separates y.
d <- data.frame(y  = c(0,0,0,0,0, 1,1,1,1,1),
                x1 = rnorm(10),
                x2 = sort(rnorm(10)))

fit <- glm(y ~ x1 + x2, data = d, family = "binomial")
## Warning message:
## glm.fit: fitted probabilities numerically 0 or 1 occurred

summary(fit)
## Call:
## glm(formula = y ~ x1 + x2, family = "binomial", data = d)
##
## Deviance Residuals:
##        Min          1Q      Median          3Q         Max
## -1.114e-05  -2.110e-08   0.000e+00   2.110e-08   1.325e-05
##
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)
## (Intercept)  -18.528  75938.934       0        1
## x1            -4.837  76469.100       0        1
## x2            81.689 165617.221       0        1
##
## (Dispersion parameter for binomial family taken to be 1)
##
##     Null deviance: 1.3863e+01  on 9  degrees of freedom
## Residual deviance: 3.3646e-10  on 7  degrees of freedom
## AIC: 6
##
## Number of Fisher Scoring iterations: 25

Does not work that well... Now the Bayesian version:

fit <- bayesglm(y ~ x1 + x2, data = d, family = "binomial")
display(fit)
## bayesglm(formula = y ~ x1 + x2, family = "binomial", data = d)
##             coef.est coef.se
## (Intercept) -1.10     1.37
## x1          -0.05     0.79
## x2           3.75     1.85
## ---
## n = 10, k = 3
## residual deviance = 2.2, null deviance = 3.3 (difference = 1.1)

Super-simple, no?

References
Gelman et al (2008), "A weakly informative default prior distribution for logistic & other regression models", Ann. Appl. Stat., 2, 4, http://projecteuclid.org/euclid.aoas/1231424214
684
How to deal with perfect separation in logistic regression?
One of the most thorough explanations of "quasi-complete separation" issues in maximum likelihood is Paul Allison's paper. He's writing about SAS software, but the issues he addresses are generalizable to any software: Complete separation occurs whenever a linear function of $x$ can generate perfect predictions of $y$. Quasi-complete separation occurs when (a) there exists some coefficient vector $b$ such that $b x_i \geq 0$ whenever $y_i = 1$ and $b x_i \leq 0$ whenever $y_i = 0$, and (b) this equality holds for at least one case in each category of the dependent variable. In other words, in the simplest case, for any dichotomous independent variable in a logistic regression, if there is a zero in the 2 × 2 table formed by that variable and the dependent variable, the ML estimate for the regression coefficient does not exist. Allison discusses many of the solutions already mentioned, including deletion of problem variables, collapsing categories, doing nothing, leveraging exact logistic regression, Bayesian estimation and penalized maximum likelihood estimation. http://www2.sas.com/proceedings/forum2008/360-2008.pdf
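The zero-cell criterion in the passage above is easy to check directly before fitting anything; here is a small sketch (the data frame and variable names are hypothetical, and the check only covers the simple dichotomous-predictor case described above).

# Quasi-complete separation check for a dichotomous predictor: a zero cell
# in its 2 x 2 table against the outcome means the ML estimate of that
# coefficient does not exist.
d <- data.frame(y = c(0, 0, 0, 0, 1, 1, 1, 1),
                x = c(0, 0, 0, 0, 0, 0, 1, 1))   # no (x = 1, y = 0) cases

tab <- table(d$x, d$y)
tab
any(tab == 0)    # TRUE flags the problem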
685
How to deal with perfect separation in logistic regression?
The original question is miscast and many of the answers are problematic. The fact that a maximum likelihood estimate is $\infty$ when there is perfect separation is only a problem because we continue to use Wald statistics (i.e., we use the information matrix and standard errors) for inference. An infinite $\beta$ gives rise to a predicted probability of 1.0. There is nothing wrong with this, although Bayesian models or shrinkage in a frequentist model is likely to result in a better calibrated model. Just use likelihood ratio $\chi^2$ tests and profile likelihood confidence intervals and you'll get valid inference without changing the model. See for example this R package: https://cran.r-project.org/web/packages/ProfileLikelihood/ProfileLikelihood.pdf. I think we should routinely be using Bayesian models, but let's recognize that $\infty$ is a valid MLE.
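As a minimal illustration of "keep the model, change the inference": the likelihood-ratio test and a profile-likelihood interval can be computed from an ordinary glm fit. The toy data below and the use of confint() (which profiles the likelihood via MASS) are my own assumptions; under complete separation the profiling of one limit can fail or return an effectively unbounded value, which is expected behaviour rather than an error in the analysis.

# Likelihood-ratio test and profile-likelihood CI for a separating predictor.
set.seed(5)
d <- data.frame(x = sort(rnorm(30)),
                y = rep(0:1, each = 15))          # x separates y perfectly

fit0 <- glm(y ~ 1, family = binomial, data = d)
fit1 <- glm(y ~ x, family = binomial, data = d)   # warns: fitted 0 or 1

anova(fit0, fit1, test = "Chisq")   # likelihood-ratio chi-square and p-value

# Profile-likelihood interval; wrapped in try() because with separation
# the upper limit may be unbounded and the profiling may complain.
try(confint(fit1))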
686
How to deal with perfect separation in logistic regression?
For logistic models for inference, it's important to first underscore that there is no error here. The warning in R is correctly informing you that the maximum likelihood estimator lies on the boundary of the parameter space. The odds ratio of $\infty$ is strongly suggestive of an association. The only issue is that two common methods of producing tests (the Wald test and the likelihood ratio test) require an evaluation of the information under the alternative hypothesis. With data generated along the lines of

x <- seq(-3, 3, by=0.1)
y <- x > 0
summary(glm(y ~ x, family=binomial))

the warning is made:

Warning messages:
1: glm.fit: algorithm did not converge
2: glm.fit: fitted probabilities numerically 0 or 1 occurred

which very obviously reflects the dependence that is built into these data. In R the Wald test is found with summary.glm or with waldtest in the lmtest package. The likelihood ratio test is performed with anova or with lrtest in the lmtest package. In both cases, the information matrix is infinitely valued, and no inference is available. Rather, R does produce output, but you cannot trust it. The inference that R typically produces in these cases has p-values very close to one. This is because the loss of precision in the OR is orders of magnitude smaller than the loss of precision in the variance-covariance matrix. Some solutions are outlined below: use a one-step estimator, perform a score test, and use median unbiased estimates for a confidence interval.

Use a one-step estimator. There is a lot of theory supporting the low bias, efficiency, and generalizability of one-step estimators. It is easy to specify a one-step estimator in R and the results are typically very favorable for prediction and inference. And this model will never diverge, because the iterator (Newton-Raphson) simply does not have the chance to do so!

fit.1s <- glm(y ~ x, family=binomial, control=glm.control(maxit=1))
summary(fit.1s)

Gives:

Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.03987    0.29569  -0.135    0.893
x            1.19604    0.16794   7.122 1.07e-12 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

So you can see the predictions reflect the direction of trend. And the inference is highly suggestive of the trends which we believe to be true.

Perform a score test. The Score (or Rao) statistic differs from the likelihood ratio and Wald statistics. It does not require an evaluation of the variance under the alternative hypothesis. We fit the model under the null:

mm <- model.matrix( ~ x)
fit0 <- glm(y ~ 1, family=binomial)
pred0 <- predict(fit0, type='response')
inf.null <- t(mm) %*% diag(binomial()$variance(mu=pred0)) %*% mm
sc.null <- t(mm) %*% c(y - pred0)
score.stat <- t(sc.null) %*% solve(inf.null) %*% sc.null ## compare to chisq
pchisq(score.stat, 1, lower.tail=F)

Gives, as a measure of association, very strong statistical significance. Note by the way that the one-step estimator produces a $\chi^2$ test statistic of 50.7 and the score test here produces a test statistic of 45.75.

> pchisq(score.stat, df=1, lower.tail=F)
             [,1]
[1,] 1.343494e-11

In both cases you have inference for an OR of infinity.

Use median unbiased estimates for a confidence interval. You can produce a median unbiased, non-singular 95% CI for the infinite odds ratio by using median unbiased estimation. The package epitools in R can do this. And I give an example of implementing this estimator here: Confidence interval for Bernoulli sampling
687
How to deal with perfect separation in logistic regression?
Be careful with this warning message from R. Take a look at this blog post by Andrew Gelman, and you will see that it is not always a problem of perfect separation, but sometimes a bug with glm. It seems that if the starting values are too far from the maximum-likelihood estimate, it blows up. So, check first with other software, like Stata. If you really have this problem, you may try to use Bayesian modeling with informative priors. But in practice I just get rid of the predictors causing the trouble, because I don't know how to pick an informative prior. But I guess there is a paper by Gelman about using an informative prior when you have this problem of perfect separation. Just google it. Maybe you should give it a try.
688
How to deal with perfect separation in logistic regression?
I am not sure that I agree with the statements in your question. I think that warning message means that, for some of the observed X levels in your data, the fitted probability is numerically 0 or 1. In other words, at the displayed resolution it shows as 0 or 1. You can run predict(yourmodel, yourdata, type='response') and you will find 0's and/or 1's there as predicted probabilities. As a result, I think it is ok to just use the results.
689
How to deal with perfect separation in logistic regression?
This is a discussion of some points in Scortchi's answer. It is important and needs to be handled carefully. :) I highly recommend re-casting the model if you have this warning. Double-check the correlations between all predictors to see if there are any very highly correlated pairs; if so, remove one variable from each such pair. In my real data, I saw a pair with a correlation close to 0.99, which means they are nearly perfectly correlated. This triggers the failure of the algorithm; sometimes, the algorithm cannot even estimate the corresponding coefficients. (a) I do not agree with @Simon that "You're removing the predictor that best explains the response". Actually, in my case, I have "gross profit" and "gross profit + interest". The latter does not differ much from the former because the interest of the firm does not change much over time. So using either (and only) one of these two is good enough. I strongly oppose doing nothing (no offense). In my research, we did an intensive simulation to show that this warning actually comes with some very off coefficient estimates. A lot of problems arise when you predict, construct other statistics, or conduct inference using those point estimates. It is very dangerous to just leave them alone. I also tried Bayesian analysis, but it does not help in solving this issue (at least in my case); the point estimates are still problematic. All in all, I recommend doing something (re-cast the model with a better understanding of the predictors) to remove serious multicollinearity! I think this warning is mainly due to the algorithm's failure caused by multicollinearity (we all know that, as statisticians, multicollinearity is notorious).
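The pairwise-correlation check recommended above takes only a couple of lines; the 0.95 cut-off and the toy predictors (mimicking the "gross profit" vs "gross profit + interest" example) are illustrative choices of mine.

# Flag predictor pairs whose absolute correlation is suspiciously high,
# so that one variable from each flagged pair can be dropped.
set.seed(6)
X <- data.frame(gross_profit = rnorm(100))
X$gross_profit_plus_interest <- X$gross_profit + rnorm(100, sd = 0.01)
X$other_predictor <- rnorm(100)

r <- cor(X)
r[upper.tri(r, diag = TRUE)] <- NA        # keep each pair only once
which(abs(r) > 0.95, arr.ind = TRUE)      # near-duplicate predictors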
690
How to deal with perfect separation in logistic regression?
I understand this is an old post; however, I will still proceed with answering it, as I struggled with this for days and it may help others. Complete separation happens when the variables you selected to fit the model can very accurately differentiate between 0's and 1's, or yes and no. Our whole approach of data science is based on probability estimation, but it fails in this case. Rectification steps: (1) Use bayesglm() instead of glm() when the variance between the variables is low. (2) At times, using (maxit = "some numerical value") along with bayesglm() can help. (3) Third, and most important: check the variables you selected for the model fitting; there may be a variable whose multicollinearity with the Y (output) variable is very high. Discard that variable from your model. In my case, I had telecom churn data and needed to predict churn for the validation data. I had a variable in my training data that could very well differentiate between the yes and no. After dropping it I could get the correct model. Furthermore, you can use stepwise(fit) to make your model more accurate.
691
What intuitive explanation is there for the central limit theorem?
I apologize in advance for the length of this post: it is with some trepidation that I let it out in public at all, because it takes some time and attention to read through and undoubtedly has typographic errors and expository lapses. But here it is for those who are interested in the fascinating topic, offered in the hope that it will encourage you to identify one or more of the many parts of the CLT for further elaboration in responses of your own. Most attempts at "explaining" the CLT are illustrations or just restatements that assert it is true. A really penetrating, correct explanation would have to explain an awful lot of things. Before looking at this further, let's be clear about what the CLT says. As you all know, there are versions that vary in their generality. The common context is a sequence of random variables, which are certain kinds of functions on a common probability space. For intuitive explanations that hold up rigorously I find it helpful to think of a probability space as a box with distinguishable objects. It doesn't matter what those objects are but I will call them "tickets." We make one "observation" of a box by thoroughly mixing up the tickets and drawing one out; that ticket constitutes the observation. After recording it for later analysis we return the ticket to the box so that its contents remain unchanged. A "random variable" basically is a number written on each ticket. In 1733, Abraham de Moivre considered the case of a single box where the numbers on the tickets are only zeros and ones ("Bernoulli trials"), with some of each number present. He imagined making $n$ physically independent observations, yielding a sequence of values $x_1, x_2, \ldots, x_n$, all of which are zero or one. The sum of those values, $y_n = x_1 + x_2 + \ldots + x_n$, is random because the terms in the sum are. Therefore, if we could repeat this procedure many times, various sums (whole numbers ranging from $0$ through $n$) would appear with various frequencies--proportions of the total. (See the histograms below.) Now one would expect--and it's true--that for very large values of $n$, all the frequencies would be quite small. If we were to be so bold (or foolish) as to attempt to "take a limit" or "let $n$ go to $\infty$", we would conclude correctly that all frequencies reduce to $0$. But if we simply draw a histogram of the frequencies, without paying any attention to how its axes are labeled, we see that the histograms for large $n$ all begin to look the same: in some sense, these histograms approach a limit even though the frequencies themselves all go to zero. These histograms depict the results of repeating the procedure of obtaining $y_n$ many times. $n$ is the "number of trials" in the titles. The insight here is to draw the histogram first and label its axes later. With large $n$ the histogram covers a large range of values centered around $n/2$ (on the horizontal axis) and a vanishingly small interval of values (on the vertical axis), because the individual frequencies grow quite small. Fitting this curve into the plotting region has therefore required both a shifting and rescaling of the histogram. The mathematical description of this is that for each $n$ we can choose some central value $m_n$ (not necessarily unique!) to position the histogram and some scale value $s_n$ (not necessarily unique!) to make it fit within the axes. This can be done mathematically by changing $y_n$ to $z_n = (y_n - m_n) / s_n$. 
Remember that a histogram represents frequencies by areas between it and the horizontal axis. The eventual stability of these histograms for large values of $n$ should therefore be stated in terms of area. So, pick any interval of values you like, say from $a$ to $b \gt a$ and, as $n$ increases, track the area of the part of the histogram of $z_n$ that horizontally spans the interval $(a, b]$. The CLT asserts several things: No matter what $a$ and $b$ are, if we choose the sequences $m_n$ and $s_n$ appropriately (in a way that does not depend on $a$ or $b$ at all), this area indeed approaches a limit as $n$ gets large. The sequences $m_n$ and $s_n$ can be chosen in a way that depends only on $n$, the average of values in the box, and some measure of spread of those values--but on nothing else--so that regardless of what is in the box, the limit is always the same. (This universality property is amazing.) Specifically, that limiting area is the area under the curve $y = \exp(-z^2/2) / \sqrt{2 \pi}$ between $a$ and $b$: this is the formula of that universal limiting histogram. The first generalization of the CLT adds, When the box can contain numbers in addition to zeros and ones, exactly the same conclusions hold (provided that the proportions of extremely large or small numbers in the box are not "too great," a criterion that has a precise and simple quantitative statement). The next generalization, and perhaps the most amazing one, replaces this single box of tickets with an ordered indefinitely long array of boxes with tickets. Each box can have different numbers on its tickets in different proportions. The observation $x_1$ is made by drawing a ticket from the first box, $x_2$ comes from the second box, and so on. Exactly the same conclusions hold provided the contents of the boxes are "not too different" (there are several precise, but different, quantitative characterizations of what "not too different" has to mean; they allow an astonishing amount of latitude). These five assertions, at a minimum, need explaining. There's more. Several intriguing aspects of the setup are implicit in all the statements. For example, What is special about the sum? Why don't we have central limit theorems for other mathematical combinations of numbers such as their product or their maximum? (It turns out we do, but they are not quite so general nor do they always have such a clean, simple conclusion unless they can be reduced to the CLT.) The sequences of $m_n$ and $s_n$ are not unique but they're almost unique in the sense that eventually they have to approximate the expectation of the sum of $n$ tickets and the standard deviation of the sum, respectively (which, in the first two statements of the CLT, equals $\sqrt{n}$ times the standard deviation of the box). The standard deviation is one measure of the spread of values, but it is by no means the only one nor is it the most "natural," either historically or for many applications. (Many people would choose something like a median absolute deviation from the median, for instance.) Why does the SD appear in such an essential way? Consider the formula for the limiting histogram: who would have expected it to take such a form? It says the logarithm of the probability density is a quadratic function. Why? Is there some intuitive or clear, compelling explanation for this? 
I confess I am unable to reach the ultimate goal of supplying answers that are simple enough to meet Srikant's challenging criteria for intuitiveness and simplicity, but I have sketched this background in the hope that others might be inspired to fill in some of the many gaps. I think a good demonstration will ultimately have to rely on an elementary analysis of how values between $\alpha_n = a s_n + m_n$ and $\beta_n = b s_n + m_n$ can arise in forming the sum $x_1 + x_2 + \ldots + x_n$. Going back to the single-box version of the CLT, the case of a symmetric distribution is simpler to handle: its median equals its mean, so there's a 50% chance that $x_i$ will be less than the box's mean and a 50% chance that $x_i$ will be greater than its mean. Moreover, when $n$ is sufficiently large, the positive deviations from the mean ought to compensate for the negative deviations in the mean. (This requires some careful justification, not just hand waving.) Thus we ought primarily to be concerned about counting the numbers of positive and negative deviations and only have a secondary concern about their sizes. (Of all the things I have written here, this might be the most useful at providing some intuition about why the CLT works. Indeed, the technical assumptions needed to make the generalizations of the CLT true essentially are various ways of ruling out the possibility that rare huge deviations will upset the balance enough to prevent the limiting histogram from arising.) This shows, to some degree anyway, why the first generalization of the CLT does not really uncover anything that was not in de Moivre's original Bernoulli trial version. At this point it looks like there is nothing for it but to do a little math: we need to count the number of distinct ways in which the number of positive deviations from the mean can differ from the number of negative deviations by any predetermined value $k$, where evidently $k$ is one of $-n, -n+2, \ldots, n-2, n$. But because vanishingly small errors will disappear in the limit, we don't have to count precisely; we only need to approximate the counts. To this end it suffices to know that $$\text{The number of ways to obtain } k \text{ positive and } n-k \text{ negative values out of } n$$ $$\text{equals } \frac{n-k+1}{k}$$ $$\text{times the number of ways to get } k-1 \text{ positive and } n-k+1 \text { negative values.}$$ (That's a perfectly elementary result so I won't bother to write down the justification.) Now we approximate wholesale. The maximum frequency occurs when $k$ is as close to $n/2$ as possible (also elementary). Let's write $m = n/2$. Then, relative to the maximum frequency, the frequency of $m+j+1$ positive deviations ($j \ge 0$) is estimated by the product $$\frac{m+1}{m+1} \frac{m}{m+2} \cdots \frac{m-j+1}{m+j+1}$$ $$=\frac{1 - 1/(m+1)}{1 + 1/(m+1)} \frac{1-2/(m+1)}{1+2/(m+1)} \cdots \frac{1-j/(m+1)}{1+j/(m+1)}.$$ 135 years before de Moivre was writing, John Napier invented logarithms to simplify multiplication, so let's take advantage of this. Using the approximation $$\log\left(\frac{1-x}{1+x}\right) = -2x - \frac{2x^3}{3} + O(x^5),$$ we find that the log of the relative frequency is approximately $$-\frac{2}{m+1}\left(1 + 2 + \cdots + j\right) - \frac{2}{3(m+1)^3}\left(1^3+2^3+\cdots+j^3\right) = -\frac{j^2}{m} + O\left(\frac{j^4}{m^3}\right).$$ Because the error in approximating this sum by $-j^2/m$ is on the order of $j^4/m^3$, the approximation ought to work well provided $j^4$ is small relative to $m^3$. 
That covers a greater range of values of $j$ than is needed. (It suffices for the approximation to work for $j$ only on the order of $\sqrt{m}$ which asymptotically is much smaller than $m^{3/4}$.) Consequently, writing $$z = \sqrt{2}\,\frac{j}{\sqrt{m}} = \frac{j/n}{1 / \sqrt{4n}}$$ for the standardized deviation, the relative frequency of deviations of size given by $z$ must be proportional to $\exp(-z^2/2)$ for large $m.$ Thus appears the Gaussian law of #3 above. Obviously much more analysis of this sort should be presented to justify the other assertions in the CLT, but I'm running out of time, space, and energy and I've probably lost 90% of the people who started reading this anyway. This simple approximation, though, suggests how de Moivre might originally have suspected that there is a universal limiting distribution, that its logarithm is a quadratic function, and that the proper scale factor $s_n$ must be proportional to $\sqrt{n}$ (as shown by the denominator of the preceding formula). It is difficult to imagine how this important quantitative relationship could be explained without invoking some kind of mathematical information and reasoning; anything less would leave the precise shape of the limiting curve a complete mystery.
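Since the whole argument above is about shifting and rescaling histograms of sums, a short simulation can make it tangible. The sketch below is a hedged illustration, not part of the original exposition: the contents of the box, the number of tickets per sum, and the number of repetitions are arbitrary choices, and the centering and scaling use exactly the $m_n$ and $s_n$ described in the text (the expectation of the sum and $\sqrt{n}$ times the box's standard deviation).

# Draw n tickets with replacement from a box, sum them, standardize the sum,
# and compare the histogram of standardized sums with exp(-z^2/2)/sqrt(2*pi).
set.seed(7)
box  <- c(0, 0, 1, 2, 5)      # any box of numbers, not just 0/1 tickets
n    <- 500                   # tickets drawn per sum
reps <- 10000                 # how many sums we observe

pop_sd <- sqrt(mean((box - mean(box))^2))   # SD of the box itself
sums   <- replicate(reps, sum(sample(box, n, replace = TRUE)))
z      <- (sums - n * mean(box)) / (sqrt(n) * pop_sd)

hist(z, breaks = 60, freq = FALSE,
     main = "Standardized sums vs. the limiting curve")
curve(exp(-x^2 / 2) / sqrt(2 * pi), add = TRUE, lwd = 2)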
What intuitive explanation is there for the central limit theorem?
I apologize in advance for the length of this post: it is with some trepidation that I let it out in public at all, because it takes some time and attention to read through and undoubtedly has typogra
What intuitive explanation is there for the central limit theorem? I apologize in advance for the length of this post: it is with some trepidation that I let it out in public at all, because it takes some time and attention to read through and undoubtedly has typographic errors and expository lapses. But here it is for those who are interested in the fascinating topic, offered in the hope that it will encourage you to identify one or more of the many parts of the CLT for further elaboration in responses of your own. Most attempts at "explaining" the CLT are illustrations or just restatements that assert it is true. A really penetrating, correct explanation would have to explain an awful lot of things. Before looking at this further, let's be clear about what the CLT says. As you all know, there are versions that vary in their generality. The common context is a sequence of random variables, which are certain kinds of functions on a common probability space. For intuitive explanations that hold up rigorously I find it helpful to think of a probability space as a box with distinguishable objects. It doesn't matter what those objects are but I will call them "tickets." We make one "observation" of a box by thoroughly mixing up the tickets and drawing one out; that ticket constitutes the observation. After recording it for later analysis we return the ticket to the box so that its contents remain unchanged. A "random variable" basically is a number written on each ticket. In 1733, Abraham de Moivre considered the case of a single box where the numbers on the tickets are only zeros and ones ("Bernoulli trials"), with some of each number present. He imagined making $n$ physically independent observations, yielding a sequence of values $x_1, x_2, \ldots, x_n$, all of which are zero or one. The sum of those values, $y_n = x_1 + x_2 + \ldots + x_n$, is random because the terms in the sum are. Therefore, if we could repeat this procedure many times, various sums (whole numbers ranging from $0$ through $n$) would appear with various frequencies--proportions of the total. (See the histograms below.) Now one would expect--and it's true--that for very large values of $n$, all the frequencies would be quite small. If we were to be so bold (or foolish) as to attempt to "take a limit" or "let $n$ go to $\infty$", we would conclude correctly that all frequencies reduce to $0$. But if we simply draw a histogram of the frequencies, without paying any attention to how its axes are labeled, we see that the histograms for large $n$ all begin to look the same: in some sense, these histograms approach a limit even though the frequencies themselves all go to zero. These histograms depict the results of repeating the procedure of obtaining $y_n$ many times. $n$ is the "number of trials" in the titles. The insight here is to draw the histogram first and label its axes later. With large $n$ the histogram covers a large range of values centered around $n/2$ (on the horizontal axis) and a vanishingly small interval of values (on the vertical axis), because the individual frequencies grow quite small. Fitting this curve into the plotting region has therefore required both a shifting and rescaling of the histogram. The mathematical description of this is that for each $n$ we can choose some central value $m_n$ (not necessarily unique!) to position the histogram and some scale value $s_n$ (not necessarily unique!) to make it fit within the axes. 
This can be done mathematically by changing $y_n$ to $z_n = (y_n - m_n) / s_n$. Remember that a histogram represents frequencies by areas between it and the horizontal axis. The eventual stability of these histograms for large values of $n$ should therefore be stated in terms of area. So, pick any interval of values you like, say from $a$ to $b \gt a$ and, as $n$ increases, track the area of the part of the histogram of $z_n$ that horizontally spans the interval $(a, b]$. The CLT asserts several things: No matter what $a$ and $b$ are, if we choose the sequences $m_n$ and $s_n$ appropriately (in a way that does not depend on $a$ or $b$ at all), this area indeed approaches a limit as $n$ gets large. The sequences $m_n$ and $s_n$ can be chosen in a way that depends only on $n$, the average of values in the box, and some measure of spread of those values--but on nothing else--so that regardless of what is in the box, the limit is always the same. (This universality property is amazing.) Specifically, that limiting area is the area under the curve $y = \exp(-z^2/2) / \sqrt{2 \pi}$ between $a$ and $b$: this is the formula of that universal limiting histogram. The first generalization of the CLT adds, When the box can contain numbers in addition to zeros and ones, exactly the same conclusions hold (provided that the proportions of extremely large or small numbers in the box are not "too great," a criterion that has a precise and simple quantitative statement). The next generalization, and perhaps the most amazing one, replaces this single box of tickets with an ordered indefinitely long array of boxes with tickets. Each box can have different numbers on its tickets in different proportions. The observation $x_1$ is made by drawing a ticket from the first box, $x_2$ comes from the second box, and so on. Exactly the same conclusions hold provided the contents of the boxes are "not too different" (there are several precise, but different, quantitative characterizations of what "not too different" has to mean; they allow an astonishing amount of latitude). These five assertions, at a minimum, need explaining. There's more. Several intriguing aspects of the setup are implicit in all the statements. For example, What is special about the sum? Why don't we have central limit theorems for other mathematical combinations of numbers such as their product or their maximum? (It turns out we do, but they are not quite so general nor do they always have such a clean, simple conclusion unless they can be reduced to the CLT.) The sequences of $m_n$ and $s_n$ are not unique but they're almost unique in the sense that eventually they have to approximate the expectation of the sum of $n$ tickets and the standard deviation of the sum, respectively (which, in the first two statements of the CLT, equals $\sqrt{n}$ times the standard deviation of the box). The standard deviation is one measure of the spread of values, but it is by no means the only one nor is it the most "natural," either historically or for many applications. (Many people would choose something like a median absolute deviation from the median, for instance.) Why does the SD appear in such an essential way? Consider the formula for the limiting histogram: who would have expected it to take such a form? It says the logarithm of the probability density is a quadratic function. Why? Is there some intuitive or clear, compelling explanation for this? 
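A minimal numerical sketch of assertions 1–3 (an illustration, not a proof, and not part of the original exposition): take an arbitrary "box" of tickets, standardize the sum of $n$ draws with $m_n = n\mu$ and $s_n = \sigma\sqrt{n}$, and compare the fraction of standardized sums landing in an interval $(a, b]$ with the area under $\exp(-z^2/2)/\sqrt{2\pi}$. The box contents, interval, sample sizes, and seed below are arbitrary choices; only NumPy and the standard library are assumed.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# An arbitrary "box" of tickets: any numbers, in any proportions.
box = np.array([0, 0, 0, 1, 1, 7.5, -2.0])
mu, sigma = box.mean(), box.std()

def area_estimate(n, a, b, reps=20_000):
    """Fraction of standardized sums z_n = (y_n - n*mu)/(sigma*sqrt(n)) falling in (a, b]."""
    y = rng.choice(box, size=(reps, n)).sum(axis=1)
    z = (y - n * mu) / (sigma * sqrt(n))
    return float(np.mean((z > a) & (z <= b)))

def gaussian_area(a, b):
    """Area under exp(-z^2/2)/sqrt(2*pi) between a and b."""
    Phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
    return Phi(b) - Phi(a)

a, b = -1.0, 0.5
for n in (5, 50, 500):
    print(n, round(area_estimate(n, a, b), 3), round(gaussian_area(a, b), 3))
```

With this centering and scaling, the empirical areas settle toward the Gaussian area as $n$ grows, regardless of what is in the box.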
I confess I am unable to reach the ultimate goal of supplying answers that are simple enough to meet Srikant's challenging criteria for intuitiveness and simplicity, but I have sketched this background in the hope that others might be inspired to fill in some of the many gaps. I think a good demonstration will ultimately have to rely on an elementary analysis of how values between $\alpha_n = a s_n + m_n$ and $\beta_n = b s_n + m_n$ can arise in forming the sum $x_1 + x_2 + \ldots + x_n$. Going back to the single-box version of the CLT, the case of a symmetric distribution is simpler to handle: its median equals its mean, so there's a 50% chance that $x_i$ will be less than the box's mean and a 50% chance that $x_i$ will be greater than its mean. Moreover, when $n$ is sufficiently large, the positive deviations from the mean ought to compensate for the negative deviations in the mean. (This requires some careful justification, not just hand waving.) Thus we ought primarily to be concerned about counting the numbers of positive and negative deviations and only have a secondary concern about their sizes. (Of all the things I have written here, this might be the most useful at providing some intuition about why the CLT works. Indeed, the technical assumptions needed to make the generalizations of the CLT true essentially are various ways of ruling out the possibility that rare huge deviations will upset the balance enough to prevent the limiting histogram from arising.) This shows, to some degree anyway, why the first generalization of the CLT does not really uncover anything that was not in de Moivre's original Bernoulli trial version. At this point it looks like there is nothing for it but to do a little math: we need to count the number of distinct ways in which the number of positive deviations from the mean can differ from the number of negative deviations by any predetermined value $k$, where evidently $k$ is one of $-n, -n+2, \ldots, n-2, n$. But because vanishingly small errors will disappear in the limit, we don't have to count precisely; we only need to approximate the counts. To this end it suffices to know that $$\text{The number of ways to obtain } k \text{ positive and } n-k \text{ negative values out of } n$$ $$\text{equals } \frac{n-k+1}{k}$$ $$\text{times the number of ways to get } k-1 \text{ positive and } n-k+1 \text { negative values.}$$ (That's a perfectly elementary result so I won't bother to write down the justification.) Now we approximate wholesale. The maximum frequency occurs when $k$ is as close to $n/2$ as possible (also elementary). Let's write $m = n/2$. Then, relative to the maximum frequency, the frequency of $m+j+1$ positive deviations ($j \ge 0$) is estimated by the product $$\frac{m+1}{m+1} \frac{m}{m+2} \cdots \frac{m-j+1}{m+j+1}$$ $$=\frac{1 - 1/(m+1)}{1 + 1/(m+1)} \frac{1-2/(m+1)}{1+2/(m+1)} \cdots \frac{1-j/(m+1)}{1+j/(m+1)}.$$ 135 years before de Moivre was writing, John Napier invented logarithms to simplify multiplication, so let's take advantage of this. Using the approximation $$\log\left(\frac{1-x}{1+x}\right) = -2x - \frac{2x^3}{3} + O(x^5),$$ we find that the log of the relative frequency is approximately $$-\frac{2}{m+1}\left(1 + 2 + \cdots + j\right) - \frac{2}{3(m+1)^3}\left(1^3+2^3+\cdots+j^3\right) = -\frac{j^2}{m} + O\left(\frac{j^4}{m^3}\right).$$ Because the error in approximating this sum by $-j^2/m$ is on the order of $j^4/m^3$, the approximation ought to work well provided $j^4$ is small relative to $m^3$. 
That covers a greater range of values of $j$ than is needed. (It suffices for the approximation to work for $j$ only on the order of $\sqrt{m}$ which asymptotically is much smaller than $m^{3/4}$.) Consequently, writing $$z = \sqrt{2}\,\frac{j}{\sqrt{m}} = \frac{j/n}{1 / \sqrt{4n}}$$ for the standardized deviation, the relative frequency of deviations of size given by $z$ must be proportional to $\exp(-z^2/2)$ for large $m.$ Thus appears the Gaussian law of #3 above. Obviously much more analysis of this sort should be presented to justify the other assertions in the CLT, but I'm running out of time, space, and energy and I've probably lost 90% of the people who started reading this anyway. This simple approximation, though, suggests how de Moivre might originally have suspected that there is a universal limiting distribution, that its logarithm is a quadratic function, and that the proper scale factor $s_n$ must be proportional to $\sqrt{n}$ (as shown by the denominator of the preceding formula). It is difficult to imagine how this important quantitative relationship could be explained without invoking some kind of mathematical information and reasoning; anything less would leave the precise shape of the limiting curve a complete mystery.
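The quadratic-log claim at the heart of this derivation can be checked directly. Here is a small sketch (assuming fair Bernoulli trials and using only the Python standard library; the values of $m$ and $j$ are arbitrary): compare the binomial frequency of $m+j$ successes out of $n = 2m$, relative to the central frequency, with $\exp(-j^2/m)$.

```python
from math import comb, exp

m = 500
n = 2 * m  # fair Bernoulli trials
for j in (0, 5, 10, 20, 30):
    relative = comb(n, m + j) / comb(n, m)  # frequency relative to the central (maximum) one
    approx = exp(-j**2 / m)                 # the quadratic-log approximation derived above
    print(j, round(relative, 6), round(approx, 6))
```

The two columns agree closely while $j^4$ stays small relative to $m^3$ and drift apart beyond that, just as the error term predicts.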
692
What intuitive explanation is there for the central limit theorem?
The nicest animation I know: http://www.ms.uky.edu/~mai/java/stat/GaltonMachine.html The simplest words I have read: http://elonen.iki.fi/articles/centrallimit/index.en.html If you sum the results of ten throws of a die, what you get is likely to be closer to 30-40 than to the maximum, 60 (all sixes), or, on the other hand, the minimum, 10 (all ones). The reason for this is that you can get the middle values in many more different ways than the extremes. Example: when throwing two dice, 1+6 = 2+5 = 3+4 = 7, but only 1+1 = 2 and only 6+6 = 12. That is: even though you get any of the six numbers with equal probability when throwing one die, the extremes are less probable than middle values in sums of several dice.
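A quick simulation of the ten-throw example (a hedged sketch assuming NumPy; the number of repetitions and the seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
sums = rng.integers(1, 7, size=(100_000, 10)).sum(axis=1)  # sum of 10 fair dice, 100,000 times

counts = np.bincount(sums, minlength=61)
for s in (10, 20, 30, 35, 40, 50, 60):
    print(s, int(counts[s]))
# Sums near 35 occur thousands of times; 10 and 60 essentially never.
```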
693
What intuitive explanation is there for the central limit theorem?
An observation concerning the CLT may be the following. When you have a sum $$ S = X_1 + X_2 + \ldots + X_n $$ of a lot of random components, if one is "smaller than usual" then this is mostly compensated for by some of the other components being "larger than usual". In other words, negative deviations and positive deviations from the component means cancel each other out in the summation. Personally, I have no clear-cut intuition why exactly the remaining deviations form a distribution that looks more and more normal the more terms you have. There are many versions of the CLT, some stronger than others, some with relaxed conditions such as a moderate dependence between the terms and/or non-identical distributions for the terms. In the simplest-to-prove versions of the CLT, the proof is usually based on the moment-generating function (or Laplace-Stieltjes transform or some other appropriate transform of the density) of the sum $S$. Writing this as a Taylor expansion and keeping only the most dominant term gives you the moment-generating function of the normal distribution. So for me personally, the normality is something that follows from a bunch of equations, and I cannot provide any further intuition than that. It should be noted, however, that the sum's distribution is never really normally distributed, nor does the CLT claim that it is. If $n$ is finite, there is still some distance to the normal distribution, and if $n=\infty$ both the mean and the variance are infinite as well. In the latter case you could take the mean of the infinite sum, but then you get a deterministic number without any variance at all, which could hardly be labelled "normally distributed". This may pose problems with practical applications of the CLT. Usually, if you are interested in the distribution of $S/n$ close to its center, the CLT works fine. However, convergence to the normal is not uniform everywhere, and the further you get away from the center, the more terms you need for a reasonable approximation. With all the "sanctity" of the Central Limit Theorem in statistics, its limitations are often overlooked all too easily. Below I give two slides from my course making the point that the CLT utterly fails in the tails, in any practical use case. Unfortunately, a lot of people specifically use the CLT to estimate tail probabilities, knowingly or otherwise.
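The tail warning is easy to reproduce with a Monte Carlo sketch (this is not the author's slides; NumPy is assumed, and Exp(1) with $n = 50$ is an arbitrary example): compare the simulated tail probability of the standardized sample mean with the normal approximation.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)

def standardized_means(n, reps=1_000_000, chunk=100_000):
    """Standardized means of n Exp(1) variables (Exp(1) has mean 1 and variance 1)."""
    out = []
    for _ in range(reps // chunk):
        x = rng.exponential(1.0, size=(chunk, n))
        out.append((x.mean(axis=1) - 1.0) * sqrt(n))
    return np.concatenate(out)

z = standardized_means(n=50)
Phi = lambda t: 0.5 * (1 + erf(t / sqrt(2)))
for t in (1.0, 2.0, 3.0, 4.0):
    p_sim, p_clt = float(np.mean(z > t)), 1 - Phi(t)
    print(t, p_sim, round(p_clt, 6), round(p_sim / p_clt, 2))
# Near the center the ratio is close to 1; out in the tail it drifts well away from 1.
```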
694
What intuitive explanation is there for the central limit theorem?
Intuition is a tricky thing. It's even trickier with theory in our hands tied behind our back. The CLT is all about sums of tiny, independent disturbances. "Sums" in the sense of the sample mean, "tiny" in the sense of finite variance (of the population), and "disturbances" in the sense of plus/minus around a central (population) value. For me, the device that appeals most directly to intuition is the quincunx, or 'Galton box'; see the Wikipedia entry for 'bean machine'. The idea is to roll a tiny little ball down the face of a board adorned with a lattice of equally spaced pins. On its way down the ball is deflected right or left (randomly, independently) and collects at the bottom. Over time, we see a nice bell-shaped mound form right before our eyes. The CLT says the same thing. It is a mathematical description of this phenomenon (more precisely, the quincunx is physical evidence for the normal approximation to the binomial distribution). Loosely speaking, the CLT says that as long as our population is not overly misbehaved (that is, if the tails of the PDF are sufficiently thin), then the sample mean (properly scaled) behaves just like that little ball bouncing down the face of the quincunx: sometimes it falls off to the left, sometimes it falls off to the right, but most of the time it lands right around the middle, in a nice bell shape. The majesty of the CLT (to me) is that the shape of the underlying population is irrelevant. Shape only plays a role insofar as it dictates how long we need to wait (in the sense of sample size).
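A quincunx is also trivial to simulate (a sketch assuming NumPy; 12 rows of pins and 50,000 balls are arbitrary choices): each ball takes a fair left/right deflection at every pin, and the bin counts trace out the bell.

```python
import numpy as np

rng = np.random.default_rng(3)
rows, balls = 12, 50_000
bins = rng.integers(0, 2, size=(balls, rows)).sum(axis=1)  # number of rightward deflections per ball

counts = np.bincount(bins, minlength=rows + 1)
for k, c in enumerate(counts):
    bar = '#' * int(60 * c // counts.max())   # crude text histogram
    print(f"{k:2d} {bar}")
# A bell-shaped mound, even though each deflection is just a fair coin flip.
```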
695
What intuitive explanation is there for the central limit theorem?
This answer hopes to give an intuitive meaning of the central limit theorem, using simple calculus techniques (a Taylor expansion of order 3). Here is the outline:

What the CLT says
An intuitive proof of the CLT using simple calculus
Why the normal distribution?

We will mention the normal distribution at the very end, because the fact that the normal distribution eventually comes up does not bear much intuition.

1. What the central limit theorem says: several versions of the CLT

There are several equivalent versions of the CLT. The textbook statement of the CLT says that for any real $x$ and any sequence of independent, identically distributed random variables $X_1,\ldots,X_n$ with zero mean and variance 1, \[P\left(\frac{X_1+\cdots+X_n}{\sqrt n} \le x\right) \to_{n\to+\infty} \int_{-\infty}^x \frac{e^{-t^2/2}}{\sqrt{2\pi}} dt.\] To understand what is universal and intuitive about the CLT, let's forget the limit for a moment. The above statement says that if $X_1,\ldots,X_n$ and $Z_1,\ldots,Z_n$ are two sequences of independent random variables each with zero mean and variance 1, then \[E \left[ f\left(\tfrac{X_1+\cdots+X_n}{\sqrt n}\right) \right] - E \left[ f\left(\tfrac{Z_1+\cdots+Z_n}{\sqrt n}\right) \right] \to_{n\to+\infty} 0 \] for every indicator function $f$ of the form, for some fixed real $x$, \begin{equation} f(t) = \begin{cases} 1 \text{ if } t < x \\ 0 \text{ if } t\ge x.\end{cases} \end{equation} The previous display embodies the fact that the limit is the same no matter the particular distributions of $X_1,\ldots,X_n$ and $Z_1,\ldots,Z_n$, provided that the random variables are independent with mean zero and variance one. Some other versions of the CLT mention the class of Lipschitz functions that are bounded by 1; some other versions mention the class of smooth functions with bounded derivative of order $k$. Consider two sequences $X_1,\ldots,X_n$ and $Z_1,\ldots,Z_n$ as above, and for some function $f$, the convergence result (CONV) \[E \left[ f\left(\tfrac{X_1+\cdots+X_n}{\sqrt n}\right) \right] - E \left[ f\left(\tfrac{Z_1+\cdots+Z_n}{\sqrt n}\right) \right] \to_{n\to+\infty} 0 \tag{CONV}\] It is possible to establish the equivalence ("if and only if") between the following statements:

(CONV) above holds for every indicator function $f$ of the form $f(t)=1$ for $t < x$ and $f(t)=0$ for $t\ge x$, for some fixed real $x$.
(CONV) holds for every bounded Lipschitz function $f:R\to R$.
(CONV) holds for every smooth (i.e., $C^{\infty}$) function with compact support.
(CONV) holds for every function $f$ that is three times continuously differentiable with $\sup_{x\in R} |f'''(x)| \le 1$.

Each of the 4 points above says that the convergence holds for a large class of functions. By a technical approximation argument, one can show that the four points above are equivalent; we refer the reader to Chapter 7, page 77 of David Pollard's book A User's Guide to Measure Theoretic Probability, from which this answer is highly inspired. Our assumption for the remainder of this answer: we will assume that $\sup_{x\in R} |f'''(x)| \le C$ for some constant $C>0$, which corresponds to point 4 above. We will also assume that the random variables have finite, bounded third moments: $E[|X_i|^3]$ and $E[|Z_i|^3]$ are finite.
2. The value of $E\left[ f\left( \tfrac{X_1+\cdots+X_n}{\sqrt n} \right) \right]$ is universal: it does not depend on the distribution of $X_1,\ldots,X_n$

Let us show that this quantity is universal (up to a small error term), in the sense that it does not depend on which collection of independent random variables was provided. Take $X_1,\ldots,X_n$ and $Z_1,\ldots,Z_n$, two sequences of independent random variables, each with mean 0 and variance 1, and finite third moments. The idea is to iteratively replace $X_i$ by $Z_i$ in one of the quantities and control the difference by basic calculus (the idea, I believe, is due to Lindeberg). By a Taylor expansion, if $W = Z_1+\cdots+Z_{n-1}$ and $h(x)=f(x/\sqrt n)$, then \begin{align} h(Z_1+\cdots+Z_{n-1}+X_n) &= h(W) + X_n h'(W) + \frac{X_n^2 h''(W)}{2} + \frac{X_n^3 h'''(M_n)}{6} \\ h(Z_1+\cdots+Z_{n-1}+Z_n) &= h(W) + Z_n h'(W) + \frac{Z_n^2 h''(W)}{2} + \frac{Z_n^3 h'''(M_n')}{6} \\ \end{align} where $M_n$ and $M_n'$ are midpoints given by the mean-value theorem. Taking expectations on both lines, the zeroth order terms are the same, and the first order terms are equal in expectation because, by independence of $X_n$ and $W$, $E[X_n h'(W)]= E[X_n] E[h'(W)] =0$, and similarly for the second line. Again by independence, the second order terms are the same in expectation. The only remaining terms are the third order ones, and in expectation the difference between the two lines is at most \[ \frac{(C/6)E[ |X_n|^3 + |Z_n|^3 ]}{(\sqrt n)^3}. \] Here $C$ is an upper bound on the third derivative $|f'''|$. The denominator $(\sqrt{n})^3$ appears because $h'''(t) = f'''(t/\sqrt n)/(\sqrt n)^3$. By independence, the contribution of $X_n$ to the sum is negligible, because it can be replaced by $Z_n$ without incurring an error larger than the above display! We now reiterate to replace $X_{n-1}$ by $Z_{n-1}$. If $\tilde W= Z_1+Z_2+\cdots+Z_{n-2} + X_n$ then \begin{align} h(Z_1+\cdots+Z_{n-2}+X_{n-1}+X_n) &= h(\tilde W) + X_{n-1} h'(\tilde W) + \frac{X_{n-1}^2 h''(\tilde W)}{2} + \frac{X_{n-1}^3 h'''(\tilde M_n)}{6}\\ h(Z_1+\cdots+Z_{n-2}+Z_{n-1}+X_n) &= h(\tilde W) + Z_{n-1} h'(\tilde W) + \frac{Z_{n-1}^2 h''(\tilde W)}{2} + \frac{Z_{n-1}^3 h'''(\tilde M_n')}{6}. \end{align} By independence of $Z_{n-1}$ and $\tilde W$, and by independence of $X_{n-1}$ and $\tilde W$, again the zeroth, first and second order terms are equal in expectation for both lines. The difference in expectation between the two lines is again at most \[ \frac{(C/6)E[ |X_{n-1}|^3 + |Z_{n-1}|^3 ]}{(\sqrt n)^3}. \] We keep iterating until we have replaced all the $X_i$'s with $Z_i$'s. By adding the errors made at each of the $n$ steps, we obtain \[ \Big| E\left[ f\left( \tfrac{X_1+\cdots+X_n}{\sqrt n} \right) \right]-E\left[ f\left( \tfrac{Z_1+\cdots+Z_n}{\sqrt n} \right) \right] \Big| \le n \frac{(C/6)\max_{i=1,\ldots,n} E[ |X_i|^3 + |Z_i|^3 ]}{(\sqrt n)^3}. \] As $n$ increases, the right hand side converges to 0 provided the third moments of our random variables are finite (let us assume this is the case). This means that the expectations on the left become arbitrarily close to each other, no matter how far the distribution of $X_1,\ldots,X_n$ is from that of $Z_1,\ldots,Z_n$. By independence, the contribution of each $X_i$ to the sum is negligible, because it can be replaced by $Z_i$ without incurring an error larger than $O(1/(\sqrt n)^3)$. And replacing all the $X_i$'s by the $Z_i$'s does not change the quantity by more than $O(1/\sqrt n)$.
The expectation $E\left[ f\left( \frac{X_1+\cdots+X_n}{\sqrt n} \right) \right]$ is thus universal: it does not depend on the distribution of $X_1,\ldots,X_n$. On the other hand, independence and $E[X_i]=E[Z_i]=0,E[Z_i^2]=E[X_i^2]=1$ were of utmost importance for the above bounds.

3. Why the normal distribution?

We have seen that the expectation $E\left[ f\left( \frac{X_1+\cdots+X_n}{\sqrt n} \right) \right]$ will be the same no matter what the distribution of $X_i$ is, up to a small error of order $O(1/\sqrt n)$. But for applications, it would be useful to compute such a quantity. It would also be useful to get a simpler expression for this quantity $E\left[ f\left( \frac{X_1+\cdots+X_n}{\sqrt n} \right) \right]$. Since this quantity is the same for any collection $X_1,\ldots,X_n$, we can simply pick one specific collection such that the distribution of $(X_1+\cdots+X_n)/\sqrt n$ is easy to compute or easy to remember. For the normal distribution $N(0,1)$, it happens that this quantity becomes really simple. Indeed, if $Z_1,\ldots,Z_n$ are iid $N(0,1)$ then $\frac{Z_1+\cdots+Z_n}{\sqrt n}$ also has the $N(0,1)$ distribution, and this does not depend on $n$! Hence if $Z\sim N(0,1)$, then \[ E\left[ f\left( \frac{Z_1+\cdots+Z_n}{\sqrt n} \right) \right] = E[ f(Z)], \] and by the above argument, for any collection of independent random variables $X_1,\ldots,X_n$ with $E[X_i]=0,E[X_i^2]=1$, \[ \left| E\left[ f\left( \frac{X_1+\cdots+X_n}{\sqrt n} \right) \right] -E[f(Z)] \right| \le \frac{\sup_{x\in R} |f'''(x)| \max_{i=1,\ldots,n} E[|X_i|^3 + |Z|^3]}{6\sqrt n}. \]
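A numerical sanity check of this universality (a sketch, with NumPy assumed; the test function $f=\tanh$, which has a bounded third derivative, and the particular distributions, $n$, and seed are arbitrary choices): estimate $E[f((X_1+\cdots+X_n)/\sqrt n)]$ for three very different mean-0, variance-1 distributions.

```python
import numpy as np

rng = np.random.default_rng(4)
f = np.tanh            # smooth, with bounded third derivative
n, reps = 100, 100_000

def estimate(sampler):
    x = sampler(size=(reps, n))                    # reps independent copies of (X_1, ..., X_n)
    return float(f(x.sum(axis=1) / np.sqrt(n)).mean())

rademacher  = lambda size: rng.choice([-1.0, 1.0], size=size)      # mean 0, variance 1
shifted_exp = lambda size: rng.exponential(1.0, size=size) - 1.0   # mean 0, variance 1
gaussian    = lambda size: rng.normal(0.0, 1.0, size=size)         # mean 0, variance 1

print(estimate(rademacher), estimate(shifted_exp), estimate(gaussian))
# The three estimates agree up to roughly O(1/sqrt(n)) plus Monte Carlo noise,
# and the Gaussian one is E[f(Z)] for Z ~ N(0,1) in expectation.
```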
696
What intuitive explanation is there for the central limit theorem?
Why the $\sqrt{n}$ instead of $n$? What's this weird version of an average? If you have a bunch of perpendicular vectors $x_1, \dotsc, x_n$ of length $\ell$, then $ \frac{x_1 + \dotsb + x_n}{\sqrt{n}}$ is again of length $\ell.$ You have to normalize by $\sqrt{n}$ to keep the sum at the same scale. There is a deep connection between independent random variables and orthogonal vectors. When random variables are independent, that basically means that they are orthogonal vectors in a vector space of functions. (The function space I refer to is $L^2$, and the variance of a random variable $X$ is just $\|X - \mu\|_{L^2}^2$. So no wonder the variance is additive over independent random variables. Just like $\|x + y\|^2 = \|x\|^2 + \|y\|^2$ when $x \perp y$.)**

Why the normal distribution? One thing that really confused me for a while, and which I think lies at the heart of the matter, is the following question: Why is it that the sum $\frac{X_1 + \dotsb + X_n} {\sqrt{n}}$ ($n$ large) doesn't care anything about the $X_i$ except their mean and their variance? (Moments 1 and 2.) This is similar to the law of large numbers phenomenon: $\frac{X_1 + \dotsb + X_n} {n}$ ($n$ large) only cares about moment 1 (the mean). (Both of these have their hypotheses that I'm suppressing (see the footnote), but the most important thing, of course, is that the $X_i$ be independent.) A more elucidating way to express this phenomenon is: in the sum $\frac{X_1 + \dotsb + X_n}{\sqrt{n}}$, I can replace any or all of the $X_i$ with some other RV's, mixing and matching between all kinds of various distributions, as long as they have the same first and second moments. And it won't matter as long as $n$ is large, relative to the moments. If we understand why that's true, then we understand the central limit theorem. Because then we may as well take $X_i$ to be normal with the same first and second moment, and in that case we know $\frac{X_1 + \dotsb + X_n}{\sqrt{n}}$ is just normal again for any $n$, including super-large $n$. Because the normal distribution has the special property ("stability") that you can add two independent normals together and get another normal. Voila.

The explanation of the first-and-second-moment phenomenon is ultimately just some arithmetic. There are several lenses through which one can choose to view this arithmetic. The most common one people use is the Fourier transform (AKA characteristic function), which has the feel of "I follow the steps, but how and why would anyone ever think of that?" Another approach is to look at the cumulants of $X_i$. There we find that the normal distribution is the unique distribution whose higher cumulants vanish, and dividing by $\sqrt{n}$ tends to kill all but the first two cumulants as $n$ gets large. I'll show here a more elementary approach. As the sum $Z_n \overset{\text{(def)}}{=} \frac{X_1 + \dotsb + X_n}{\sqrt{n}}$ gets longer and longer, I'll show that all of the moments of $Z_n$ are functions only of the variances $\operatorname{Var}(X_i)$ and the means $\mathbb{E}X_i$, and nothing else. Now the moments of $Z_n$ determine the distribution of $Z_n$ (that's true not just for long independent sums, but for any nice distribution, by the Carleman continuity theorem). To restate, we're claiming that as $n$ gets large, $Z_n$ depends only on the $\mathbb{E}X_i$ and the $\operatorname{Var}X_i$. And to show that, we're going to show that $\mathbb{E}((Z_n - \mathbb{E}Z_n)^k)$ depends only on the $\mathbb{E}X_i$ and the $\operatorname{Var}X_i$.
That suffices, by the Carleman continuity theorem. For convenience, let's require that the $X_i$ have mean zero and variance $\sigma^2$. Assume all their moments exist and are uniformly bounded. (But nevertheless, the $X_i$ can be all different independent distributions.) Claim: Under the stated assumptions, the $k$th moment $$\mathbb{E} \left[ \left(\frac{X_1 + \dotsb + X_n}{\sqrt{n}}\right)^k \right]$$ has a limit as $n \to \infty$, and that limit is a function only of $\sigma^2$. (It disregards all other information.) (Specifically, the values of those limits of moments are just the moments of the normal distribution $\mathcal{N}(0, \sigma^2)$: zero for $k$ odd, and $|\sigma|^k \frac{k!}{(k/2)!2^{k/2}}$ when $k$ is even. This is equation (1) below.) Proof: Consider $\mathbb{E} \left[ \left(\frac{X_1 + \dotsb + X_n}{\sqrt{n}}\right)^k \right]$. When you expand it, you get a factor of $n^{-k/2}$ times a big fat multinomial sum. $$n^{-k/2} \sum_{|\boldsymbol{\alpha}| = k} \binom{k}{\alpha_1, \dotsc, \alpha_n}\prod_{i=1}^n \mathbb{E}(X_i^{\alpha_i})$$ $$\alpha_1 + \dotsb + \alpha_n = k$$ $$(\alpha_i \geq 0)$$ (Remember you can distribute the expectation over independent random variables. $\mathbb{E}(X^a Y^b) = \mathbb{E}(X^a)\mathbb{E}(Y^b)$.) Now if ever I have as one of my factors a plain old $\mathbb{E}(X_i)$, with exponent $\alpha_i =1$, then that whole term is zero, because $\mathbb{E}(X_i) = 0$ by assumption. So I need all the exponents $\alpha_i \neq 1$ in order for that term to survive. That pushes me toward using fewer of the $X_i$ in each term, because each term has $\sum \alpha_i = k$, and I have to have each $\alpha_i >1$ if it is $>0$. In fact, some simple arithmetic shows that at most $k/2$ of the $\alpha_i$ can be nonzero, and that's only when $k$ is even, and when I use only twos and zeros as my $\alpha_i$. This pattern where I use only twos and zeros turns out to be very important...in fact, any term where I don't do that will vanish as the sum grows larger. Lemma: The sum $$n^{-k/2} \sum_{|\boldsymbol{\alpha}| = k}\binom{k}{\alpha_1, \dotsc, \alpha_n}\prod_{i=1}^n \mathbb{E}(X_i^{\alpha_i})$$ breaks up like $$n^{-k/2} \left( \underbrace{\left( \text{terms where some } \alpha_i = 1 \right)}_{\text{These are zero because $\mathbb{E}X_i = 0$}} + \underbrace{\left( \text{terms where }\alpha_i\text{'s are twos and zeros}\right)}_{\text{This part is } O(n^{k/2}) \text{ if $k$ is even, otherwise no such terms}} + \underbrace{\left( \text{rest of terms}\right)}_{o(n^{k/2})} \right)$$ In other words, in the limit, all terms become irrelevant except $$ n^{-k/2}\sum\limits_{\binom{n}{k/2}} \underbrace{\binom{k}{2,\dotsc, 2}}_{k/2 \text{ twos}} \prod\limits_{j=1}^{k/2}\mathbb{E}(X_{i_j}^2) \tag{1}$$ Proof: The main points are to split up the sum by which (strong) composition of $k$ is represented by the multinomial $\boldsymbol{\alpha}$. There are only $2^{k-1}$ possibilities for strong compositions of $k$, so the number of those can't explode as $n \to \infty$. Then there is the choice of which of the $X_1, \dotsc, X_n$ will receive the positive exponents, and the number of such choices is $\binom{n}{\text{# positive terms in }\boldsymbol{\alpha}} = O(n^{\text{# positive terms in }\boldsymbol{\alpha}})$. (Remember the number of positive terms in $\boldsymbol{\alpha}$ can't be bigger than $k/2$ without killing the term.) That's basically it. 
You can find a more thorough description here on my website, or in section 2.2.3 of Tao's Topics in Random Matrix Theory, where I first read this argument. And that concludes the whole proof. We've shown that all moments of $\frac{X_1 + … + X_n}{\sqrt{n}}$ forget everything but $\mathbb{E}X_i$ and $\mathbb{E}(X_i^2)$ as $n \to \infty$. And therefore swapping out the $X_i$ for any variables with the same first and second moments wouldn't have made any difference in the limit. And so we may as well have taken them to be $\sim \mathcal{N}(\mu, \sigma^2)$ to begin with; it wouldn't have made any difference. **(If one wants to pursue more deeply the question of why $n^{1/2}$ is the magic number here for vectors and for functions, and why the variance (squared $L^2$ norm) is the important statistic, one might read about why $L^2$ is the only $L^p$ space that can be an inner product space. Because $2$ is the only number that is its own Hölder conjugate.) Another valid view is that $n^{1/2}$ is not the only denominator that can appear. There are different "basins of attraction" for random variables, and so there are infinitely many central limit theorems. There are random variables for which $\frac{X_1 + \dotsb + X_n}{n} \Rightarrow X$, and for which $\frac{X_1 + \dotsb + X_n}{1} \Rightarrow X$! But these random variables necessarily have infinite variance. These are called "stable laws". It's also enlightening to look at the normal distribution from a calculus of variations standpoint: the normal distribution $\mathcal{N}(\mu, \sigma^2)$ maximizes the Shannon entropy among distributions with a given mean and variance, and which are absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}$ (or $\mathbb{R}^d$, for the multivariate case). This is proven here, for example.
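The moment claim is easy to probe numerically. Here is a sketch (NumPy assumed; uniform variables rescaled to variance 1, and the particular $n$, $k$, and seed are arbitrary choices): compare the empirical moments of $Z_n$ with the normal moments $\sigma^k \frac{k!}{(k/2)!\,2^{k/2}}$.

```python
import numpy as np
from math import factorial, sqrt

rng = np.random.default_rng(5)
n, reps = 200, 50_000
x = rng.uniform(-sqrt(3), sqrt(3), size=(reps, n))   # mean 0, variance 1
z = x.sum(axis=1) / sqrt(n)

def normal_moment(k, sigma=1.0):
    """k-th moment of N(0, sigma^2): zero for odd k, sigma^k * k!/((k/2)! 2^(k/2)) for even k."""
    return 0.0 if k % 2 else sigma**k * factorial(k) / (factorial(k // 2) * 2**(k // 2))

for k in (1, 2, 3, 4, 6):
    print(k, round(float(np.mean(z**k)), 3), round(normal_moment(k), 3))
```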
697
What intuitive explanation is there for the central limit theorem?
I gave up on trying to come up with an intuitive version and came up with some simulations. I have one that presents a simulation of a Quincunx, and some others that do things like show how the distribution of mean reaction times becomes approximately normal even when the raw reaction time distribution is skewed, provided you collect enough RTs per subject. I think they help, but they're new in my class this year and I haven't graded the first test yet. One thing that I thought was good was being able to show the law of large numbers as well. I could show how variable things are with small sample sizes and then show how they stabilize with large ones. I do a bunch of other large-number demos as well. I can show the interaction in the Quincunx between the number of random processes and the number of samples. (It turns out that not being able to use a chalkboard or whiteboard in my class may have been a blessing.)
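For readers who want to build a similar demo, here is one possible sketch (not the author's classroom materials; NumPy is assumed, and the shifted-exponential "reaction times" are an arbitrary stand-in for a skewed RT distribution): the skewness of per-subject mean RTs shrinks as the number of trials per subject grows.

```python
import numpy as np

rng = np.random.default_rng(6)

def mean_rt(n_trials):
    """Mean of n_trials skewed 'reaction times' (300 ms baseline + exponential component)."""
    return (300 + rng.exponential(150, size=n_trials)).mean()

for n_trials in (5, 40, 200):
    means = np.array([mean_rt(n_trials) for _ in range(20_000)])
    skew = float(np.mean((means - means.mean())**3) / means.std()**3)
    print(n_trials, round(means.mean(), 1), round(means.std(), 1), round(skew, 2))
# The spread of the means shrinks (law of large numbers) and the skewness of their
# distribution fades toward 0 (central limit theorem) as trials per subject increase.
```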
698
What intuitive explanation is there for the central limit theorem?
What follows is perhaps the most intuitive explanation I have come across for the CLT.

Consider a standard six-sided die. Every time you roll that die, an integer value between 1 and 6 results, each with equal probability. So, if you were to roll that die many, many times and then plot the frequency with which the different values occur, you would see a flat line; all six values arise with equal frequency.

Now, what happens when you roll a pair of dice and add them together? If you roll the pair, integer values from 2 through 12 will result. If you were to roll the pair many, many times and record their sum, what will the resulting distribution look like? It will not be flat; it will be peaked in the middle. Why? While only one combination of values yields a 2 (1 and 1), and only one combination yields a 12 (6 and 6), six combinations yield a 7 (1 and 6, 6 and 1, 2 and 5, 5 and 2, 3 and 4, or 4 and 3). Note: if you have ever played Settlers of Catan, this may be familiar to you! It is why the 6 and 8 tiles are more desirable than the 2 or 12 tiles; the 6s and 8s come up more often.

This effect only amplifies as you add more dice to the sum. That is, as you increase the number of random variables entering your sum, the distribution of the resulting values across trials grows increasingly peaked in the middle. And this property is not tied to the uniform distribution of a die; the same result occurs if you sum random variables drawn from essentially any underlying distribution (with finite variance).
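Here is a short simulation (my addition, not part of the original answer) of exactly the dice experiment described above: it tallies the sums of $k$ dice over many rolls and prints a crude text histogram, so you can watch the flat single-die distribution turn into a peaked one as more dice are added. The roll count is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
rolls = 100_000

for k in (1, 2, 5):                                  # number of dice summed
    sums = rng.integers(1, 7, size=(rolls, k)).sum(axis=1)
    values, counts = np.unique(sums, return_counts=True)
    print(f"{k} dice:")
    for v, c in zip(values, counts):
        print(f"  {v:2d}: {'#' * (60 * c // counts.max())}")

# k=1 gives a flat bar chart, k=2 a triangle peaked at 7,
# and k=5 already looks close to a bell curve.
```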
699
What does 1x1 convolution mean in a neural network?
Suppose that I have a conv layer which outputs an $(N, F, H, W)$ shaped tensor, where $N$ is the batch size, $F$ is the number of convolutional filters, and $H, W$ are the spatial dimensions. Suppose this output is fed into a conv layer with $F_1$ 1x1 filters, zero padding and stride 1. Then the output of this 1x1 conv layer will have shape $(N, F_1, H, W)$.

So 1x1 conv filters can be used to change the dimensionality in the filter space: if $F_1 > F$ we are increasing dimensionality, and if $F_1 < F$ we are decreasing dimensionality, in the filter dimension.

Indeed, in the Google Inception article Going Deeper with Convolutions, they state (bold is mine, not by the original authors):

One big problem with the above modules, at least in this naive form, is that even a modest number of 5x5 convolutions can be prohibitively expensive on top of a convolutional layer with a large number of filters. This leads to the second idea of the proposed architecture: judiciously applying dimension reductions and projections wherever the computational requirements would increase too much otherwise. This is based on the success of embeddings: even low dimensional embeddings might contain a lot of information about a relatively large image patch... **1x1 convolutions are used to compute reductions before the expensive 3x3 and 5x5 convolutions.** Besides being used as reductions, they also include the use of rectified linear activation which makes them dual-purpose.

So in the Inception architecture, the 1x1 convolutional filters are used to reduce dimensionality in the filter dimension. As explained above, these 1x1 conv layers can be used in general to change the filter-space dimensionality (either increase or decrease it), and in the Inception architecture we see how effective these 1x1 filters are for dimensionality reduction, explicitly in the filter dimension, not the spatial dimensions. Perhaps there are other interpretations of 1x1 conv filters, but I prefer this explanation, especially in the context of the Google Inception architecture.
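To make the "change of dimensionality in the filter space" point concrete, here is a small numpy sketch (my addition, not from the original answer or the Inception paper): it applies $F_1$ 1x1 filters to an $(N, F, H, W)$ tensor both as an explicit per-pixel matrix-vector product and as a single einsum over the channel axis, confirming the output shape $(N, F_1, H, W)$. All sizes are arbitrary, and biases and nonlinearities are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
N, F, H, W = 2, 8, 5, 5         # batch, input filters, spatial dims
F1 = 3                           # number of 1x1 filters

x = rng.standard_normal((N, F, H, W))
w = rng.standard_normal((F1, F))           # each 1x1 filter is just F weights

# A 1x1 convolution looks at one pixel at a time and mixes its F channels.
out_loop = np.empty((N, F1, H, W))
for n in range(N):
    for i in range(H):
        for j in range(W):
            out_loop[n, :, i, j] = w @ x[n, :, i, j]

# Equivalently: one matrix multiply over the channel axis at every pixel.
out_einsum = np.einsum('gf,nfhw->nghw', w, x)

print(out_einsum.shape)                     # (2, 3, 5, 5): F -> F1
print(np.allclose(out_loop, out_einsum))    # True
```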
700
What does 1x1 convolution mean in a neural network?
A 1x1 convolution simply maps an input pixel, with all its channels, to an output pixel, without looking at anything around it. It is often used to reduce the number of depth channels, since it is often very slow to multiply volumes with extremely large depths.

input (256 depth) -> 1x1 convolution (64 depth) -> 4x4 convolution (256 depth)
input (256 depth) -> 4x4 convolution (256 depth)

The bottom one is about 3.7x slower. Theoretically, the neural network can 'choose' which input 'colors' to look at using this, instead of brute-force multiplying everything.
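As a back-of-the-envelope check of that ~3.7x figure (my addition), the snippet below counts only the multiplications per output pixel, assuming both pipelines produce the same spatial output size and ignoring biases and padding details.

```python
# Multiplications per output pixel.
bottleneck = 256 * 64 * 1 * 1 + 64 * 256 * 4 * 4   # 1x1 down to 64 ch, then 4x4 back to 256
direct     = 256 * 256 * 4 * 4                      # single 4x4 conv at 256 channels

print(bottleneck, direct, round(direct / bottleneck, 2))
# 278528 1048576 3.76  -> the direct path costs roughly 3.7-3.8x more multiplies
```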