Dataset columns: idx (int64, 1 to 56k); question (string, 15 to 155 chars); answer (string, 2 to 29.2k chars); question_cut (string, 15 to 100 chars); answer_cut (string, 2 to 200 chars); conversation (string, 47 to 29.3k chars); conversation_cut (string, 47 to 301 chars).
801
Can a probability distribution value exceeding 1 be OK?
When random variable $X$ is continuous and its probability density function is $f(x)$, $f(x)dx$ is a probability, but $f(x)$ is not a probability and can be larger than one. The reported $f(\mbox{height}|\mbox{male})$ is not a probability, but $f(\mbox{height}|\mbox{male})d\mbox{height}$ is. In other words, for a continuous random variable $X$, $P(X\in[x,x+dx))=f(x)dx$, $P(X\in[a,b])=\int_{a}^{b}f(x)dx$, and $P(X = x)=P(X \in [x,x])=0$. The same goes for conditional probabilities.
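As a quick sanity check of the statement above, here is a short numeric illustration (my own addition using scipy, not part of the original answer): a density value can exceed 1, while any probability computed from it stays in $[0, 1]$.

```python
# A quick numerical illustration (not from the original answer): densities can
# exceed 1 even though probabilities never do.
from scipy.stats import norm, uniform

# Uniform on [0, 0.2]: f(x) = 1/0.2 = 5 everywhere on the support.
u = uniform(loc=0, scale=0.2)
print(u.pdf(0.1))             # 5.0  -> a density value, fine to be > 1
print(u.cdf(0.2) - u.cdf(0))  # 1.0  -> the probability of the whole support

# Normal with a small standard deviation: the peak density is about 3.99.
n = norm(loc=0, scale=0.1)
print(n.pdf(0.0))                  # ~3.989, again > 1
print(n.cdf(0.05) - n.cdf(-0.05))  # ~0.383, a genuine probability in [0, 1]
```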
802
Can a probability distribution value exceeding 1 be OK?
The point value at a particular parameter value of a probability density plot would be a likelihood, right? If so, then the statement might be corrected by simply changing P(height|male) to L(height|male).
803
Choice of K in K-fold cross-validation
The choice of $k = 10$ is somewhat arbitrary. Here's how I decide $k$: first of all, in order to lower the variance of the CV result, you can and should repeat/iterate the CV with new random splits. This makes the argument that high $k$ means more computation time largely irrelevant, as you want to calculate many models anyway. I tend to think mainly of the total number of models calculated (in analogy to bootstrapping). So I may decide for 100 x 10-fold CV or 200 x 5-fold CV.

@ogrisel already explained that a large $k$ usually means less (pessimistic) bias. (Some exceptions are known, particularly for $k = n$, i.e. leave-one-out.)

If possible, I use a $k$ that is a divisor of the sample size, or of the size of the groups in the sample that should be stratified.

Too large a $k$ means that only a low number of sample combinations is possible, thus limiting the number of iterations that are different. For leave-one-out, only $\binom{n}{1} = n = k$ different model/test sample combinations are possible, so iterations don't make sense at all. For $n = 20$ and $k = 10$: $\binom{20}{2} = 190 = 19 \cdot k$ different model/test sample combinations exist. You may consider going through all possible combinations here, as 19 iterations of $k$-fold CV, i.e. a total of 190 models, is not very much.

These thoughts have more weight with small sample sizes. With more samples available, $k$ doesn't matter very much: the possible number of combinations soon becomes large enough that the (say) 100 iterations of 10-fold CV do not run a great risk of being duplicates. Also, more training samples usually means that you are at a flatter part of the learning curve, so the difference between the surrogate models and the "real" model trained on all $n$ samples becomes negligible.
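For concreteness, here is a minimal sketch of the repeated/iterated CV idea mentioned at the start, using scikit-learn; the estimator, dataset and the 20 x 5-fold setting are placeholders of my own, not from the original answer.

```python
# A minimal sketch of "repeat/iterate the CV with new random splits"
# (the estimator and data here are placeholders, not from the answer).
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.model_selection import RepeatedKFold, cross_val_score

X, y = load_diabetes(return_X_y=True)

# e.g. 20 repetitions of 5-fold CV -> 100 surrogate models in total
cv = RepeatedKFold(n_splits=5, n_repeats=20, random_state=0)
scores = cross_val_score(Ridge(alpha=1.0), X, y, cv=cv, scoring="r2")

print(scores.mean(), scores.std())  # averaging over repeats lowers the variance
```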
804
Choice of K in K-fold cross-validation
Larger K means less bias towards overestimating the true expected error (as training folds will be closer to the total dataset), but higher variance and higher running time (as you are getting closer to the limit case: leave-one-out CV). If the slope of the learning curve is flat enough at training_size = 90% of the total dataset, then the bias can be ignored and K = 10 is reasonable. A higher K also gives you more samples with which to estimate a more accurate confidence interval on your estimate (using either a parametric standard error, assuming normality of the distribution of the CV test errors, or a non-parametric bootstrap CI, which only makes the i.i.d. assumption, an assumption that is not quite true since CV folds are not independent from one another). Edit: underestimating => overestimating the true expected error. Edit: the part of this reply about higher variance for large K or LOOCV is probably wrong (not always true). More details with simulations in this answer: Bias and variance in leave-one-out vs K-fold cross validation (thanks Xavier Bourret Sicotte for this work).
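As an illustration of the "parametric standard error" route mentioned above, here is a rough sketch of my own, with a placeholder model and dataset; note the caveat from the answer that the folds are not independent, so the interval is optimistic.

```python
# A rough sketch of the parametric confidence interval mentioned above:
# treat the per-fold CV errors as approximately normal and use mean +/- t * SE.
# (As noted in the answer, folds are not truly independent, so this is optimistic.)
import numpy as np
from scipy import stats
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_score

X, y = load_diabetes(return_X_y=True)
K = 10
errors = -cross_val_score(Ridge(alpha=1.0), X, y,
                          cv=KFold(n_splits=K, shuffle=True, random_state=0),
                          scoring="neg_mean_squared_error")

mean, se = errors.mean(), errors.std(ddof=1) / np.sqrt(K)
t = stats.t.ppf(0.975, df=K - 1)
print(f"CV MSE ~ {mean:.1f}, naive 95% CI: [{mean - t*se:.1f}, {mean + t*se:.1f}]")
```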
805
Choice of K in K-fold cross-validation
I don't know how K affects accuracy and generalization, and this may depend on the learning algorithm, but it definitely affects computational complexity: for a training algorithm whose cost is linear in the number of training instances, total training time grows almost linearly in K (asymptotically, linearly), because each of the K folds trains on roughly $(K-1)/K$ of the data, so the total work is about $K-1$ times that of a single full training run. So for small training sets I'd consider the accuracy and generalization aspects, especially given that we need to get the most out of a limited number of training instances. However, for large training sets and learning algorithms whose computational cost grows at least linearly in the number of training instances, I just select K = 2, so that there is essentially no increase in computational time over a single training run.
806
Choice of K in K-fold cross-validation
Solution: K = N / (N × 0.30), i.e. K ≈ 1/0.30 ≈ 3.33, where N = size of the data set and K = number of folds. Comment: we can also choose 20% instead of 30%, depending on the size you want for your test set. Example: if the data set size is N = 1500, then K = 1500 / (1500 × 0.30) = 3.33, so we can choose K = 3 or 4. Note: a large K value (towards leave-one-out cross-validation) would result in over-fitting; a small K value would result in under-fitting. The approach might be naive, but it would still be better than choosing k = 10 for data sets of very different sizes.
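The rule boils down to taking the reciprocal of the desired test fraction; a trivial sketch (my paraphrase, not from the original answer):

```python
# The rule above in one line (my paraphrase): K is roughly the reciprocal of
# the test-set fraction you want per fold.
def choose_k(test_fraction=0.30):
    return round(1 / test_fraction)

print(choose_k(0.30))  # 3  (each fold holds out ~30% of the data)
print(choose_k(0.20))  # 5  (each fold holds out ~20% of the data)
```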
807
A list of cost functions used in neural networks, alongside applications
Here are those I understand so far. Most of these work best when given values between 0 and 1.

Quadratic cost
Also known as mean squared error, this is defined as:
$$C_{MSE}(W, B, S^r, E^r) = 0.5\sum\limits_j (a^L_j - E^r_j)^2$$
The gradient of this cost function with respect to the output of a neural network and some sample $r$ is:
$$\nabla_a C_{MSE} = (a^L - E^r)$$

Cross-entropy cost
Also known as Bernoulli negative log-likelihood and binary cross-entropy:
$$C_{CE}(W, B, S^r, E^r) = -\sum\limits_j [E^r_j \ln a^L_j + (1 - E^r_j) \ln(1-a^L_j)]$$
The gradient of this cost function with respect to the output of a neural network and some sample $r$ is:
$$\nabla_a C_{CE} = \frac{(a^L - E^r)}{(1-a^L)(a^L)}$$

Exponential cost
This requires choosing some parameter $\tau$ that you think will give you the behavior you want. Typically you'll just need to play with this until things work well.
$$C_{EXP}(W, B, S^r, E^r) = \tau\,\exp\!\left(\frac{1}{\tau} \sum\limits_j (a^L_j - E^r_j)^2\right)$$
where $\exp(x)$ is simply shorthand for $e^x$. The gradient of this cost function with respect to the output of a neural network and some sample $r$ is:
$$\nabla_a C = \frac{2}{\tau}(a^L- E^r)C_{EXP}(W, B, S^r, E^r)$$
I could write out $C_{EXP}$ again, but that seems redundant. The point is that the gradient computes a vector and then multiplies it by $C_{EXP}$.

Hellinger distance
$$C_{HD}(W, B, S^r, E^r) = \frac{1}{\sqrt{2}}\sum\limits_j(\sqrt{a^L_j}-\sqrt{E^r_j})^2$$
You can find more about this here. This needs to have positive values, and ideally values between $0$ and $1$. The same is true for the following divergences. The gradient of this cost function with respect to the output of a neural network and some sample $r$ is:
$$\nabla_a C = \frac{\sqrt{a^L}-\sqrt{E^r}}{\sqrt{2}\sqrt{a^L}}$$

Kullback–Leibler divergence
Also known as information divergence, information gain, relative entropy, KLIC, or KL divergence (see here). The Kullback–Leibler divergence is typically denoted
$$D_{\mathrm{KL}}(P\|Q) = \sum_i P(i) \, \ln\frac{P(i)}{Q(i)}$$
where $D_{\mathrm{KL}}(P\|Q)$ is a measure of the information lost when $Q$ is used to approximate $P$. Thus we want to set $P=E^r$ and $Q=a^L$, because we want to measure how much information is lost when we use $a^L_j$ to approximate $E^r_j$. This gives us
$$C_{KL}(W, B, S^r, E^r)=\sum\limits_jE^r_j \log \frac{E^r_j}{a^L_j}$$
The other divergences here use this same idea of setting $P=E^r$ and $Q=a^L$. The gradient of this cost function with respect to the output of a neural network and some sample $r$ is:
$$\nabla_a C = -\frac{E^r}{a^L}$$

Generalized Kullback–Leibler divergence
From here.
$$C_{GKL}(W, B, S^r, E^r)=\sum\limits_j E^r_j \log \frac{E^r_j}{a^L_j} -\sum\limits_j(E^r_j) + \sum\limits_j(a^L_j)$$
The gradient of this cost function with respect to the output of a neural network and some sample $r$ is:
$$\nabla_a C = \frac{a^L-E^r}{a^L}$$

Itakura–Saito distance
Also from here.
$$C_{IS}(W, B, S^r, E^r)= \sum_j \left(\frac {E^r_j}{a^L_j} - \log \frac{E^r_j}{a^L_j} - 1 \right)$$
The gradient of this cost function with respect to the output of a neural network and some sample $r$ is:
$$\nabla_a C = \frac{a^L-E^r}{\left(a^L\right)^2}$$
where $\left(\left(a^L\right)^2\right)_j = a^L_j \cdot a^L_j$. In other words, $\left( a^L\right) ^2$ is simply the result of squaring each element of $a^L$.
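As a cross-check of the first two entries above, here is a small numpy sketch (my own addition, not part of the original answer) that implements the quadratic and cross-entropy costs and verifies their gradients by finite differences.

```python
# A small numpy sketch (not part of the original answer) of two of the costs
# above and their gradients w.r.t. the output a = a^L, with E = E^r.
import numpy as np

def quadratic_cost(a, E):
    return 0.5 * np.sum((a - E) ** 2)

def quadratic_grad(a, E):
    return a - E

def cross_entropy_cost(a, E):
    return -np.sum(E * np.log(a) + (1 - E) * np.log(1 - a))

def cross_entropy_grad(a, E):
    return (a - E) / ((1 - a) * a)

# Finite-difference check of the analytic gradients.
rng = np.random.default_rng(0)
a = rng.uniform(0.05, 0.95, size=5)   # outputs kept strictly in (0, 1)
E = rng.uniform(0.05, 0.95, size=5)

for cost, grad in [(quadratic_cost, quadratic_grad),
                   (cross_entropy_cost, cross_entropy_grad)]:
    eps = 1e-6
    num = np.array([(cost(a + eps * np.eye(5)[j], E) -
                     cost(a - eps * np.eye(5)[j], E)) / (2 * eps)
                    for j in range(5)])
    print(np.allclose(num, grad(a, E), atol=1e-5))  # True, True
```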
808
A list of cost functions used in neural networks, alongside applications
Don't have the reputation to comment, but there are sign errors in those last 3 gradients. In the KL divergence,
$$\begin{aligned} C &= \sum_j E_j\log(E_j/a_j) \\ &= \sum_j E_j\log(E_j) - E_j\log(a_j) \\[6pt] dC &= -\sum_j E_j\,\,d\log(a_j) \\ &= -\sum_j (E_j/a_j)\,da_j \\[6pt] \nabla_a C &= \frac{-E}{a} \end{aligned}$$
This same sign error appears in the generalized KL divergence. In the Itakura–Saito distance,
$$\begin{aligned} C &= \sum_j (E_j/a_j) - \log(E_j/a_j) - 1 \\ &= \sum_j (E_j/a_j) - \log(E_j) + \log(a_j) -1 \\[6pt] dC &= \sum_j (-E_j/a^2_j)\,da_j + d\log(a_j) \\ &= \sum_j (1/a_j)\,da_j - (E_j/a^2_j)\,da_j \\ &= \sum_j (a_j-E_j)/a^2_j\,\,\,da_j \\[6pt] \nabla_a C &= \frac{a-E}{(a)^2} \end{aligned}$$
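The corrected sign can also be confirmed numerically; a tiny finite-difference sketch (my own addition, not from the post) for the KL case:

```python
# A quick finite-difference check (my own sketch, not from the post) that the
# corrected KL-divergence gradient -E/a has the right sign and magnitude.
import numpy as np

def kl_cost(a, E):
    return np.sum(E * np.log(E / a))

rng = np.random.default_rng(1)
a = rng.uniform(0.1, 0.9, size=4)
E = rng.uniform(0.1, 0.9, size=4)

eps = 1e-6
numeric = np.array([(kl_cost(a + eps * np.eye(4)[j], E) -
                     kl_cost(a - eps * np.eye(4)[j], E)) / (2 * eps)
                    for j in range(4)])
print(np.allclose(numeric, -E / a))  # True: the gradient is -E/a, not +E/a
```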
809
How to intuitively explain what a kernel is?
A kernel is a way of computing the dot product of two vectors $\mathbf x$ and $\mathbf y$ in some (possibly very high dimensional) feature space, which is why kernel functions are sometimes called "generalized dot products".

Suppose we have a mapping $\varphi \, : \, \mathbb R^n \to \mathbb R^m$ that brings our vectors in $\mathbb R^n$ to some feature space $\mathbb R^m$. Then the dot product of $\mathbf x$ and $\mathbf y$ in this space is $\varphi(\mathbf x)^T \varphi(\mathbf y)$. A kernel is a function $k$ that corresponds to this dot product, i.e. $k(\mathbf x, \mathbf y) = \varphi(\mathbf x)^T \varphi(\mathbf y)$.

Why is this useful? Kernels give a way to compute dot products in some feature space without even knowing what this space is or what $\varphi$ is.

For example, consider a simple polynomial kernel $k(\mathbf x, \mathbf y) = (1 + \mathbf x^T \mathbf y)^2$ with $\mathbf x, \mathbf y \in \mathbb R^2$. This doesn't seem to correspond to any mapping function $\varphi$; it's just a function that returns a real number. Assuming that $\mathbf x = (x_1, x_2)$ and $\mathbf y = (y_1, y_2)$, let's expand this expression:
$\begin{align} k(\mathbf x, \mathbf y) & = (1 + \mathbf x^T \mathbf y)^2 = (1 + x_1 \, y_1 + x_2 \, y_2)^2 \\ & = 1 + x_1^2 y_1^2 + x_2^2 y_2^2 + 2 x_1 y_1 + 2 x_2 y_2 + 2 x_1 x_2 y_1 y_2 \end{align}$
Note that this is nothing else but a dot product between the two vectors $(1, x_1^2, x_2^2, \sqrt{2} x_1, \sqrt{2} x_2, \sqrt{2} x_1 x_2)$ and $(1, y_1^2, y_2^2, \sqrt{2} y_1, \sqrt{2} y_2, \sqrt{2} y_1 y_2)$, with $\varphi(\mathbf x) = \varphi(x_1, x_2) = (1, x_1^2, x_2^2, \sqrt{2} x_1, \sqrt{2} x_2, \sqrt{2} x_1 x_2)$. So the kernel $k(\mathbf x, \mathbf y) = (1 + \mathbf x^T \mathbf y)^2 = \varphi(\mathbf x)^T \varphi(\mathbf y)$ computes a dot product in a 6-dimensional space without explicitly visiting this space.

Another example is the Gaussian kernel $k(\mathbf x, \mathbf y) = \exp\big(- \gamma \, \|\mathbf x - \mathbf y\|^2 \big)$. If we Taylor-expand this function, we'll see that it corresponds to an infinite-dimensional codomain of $\varphi$.

Finally, I'd recommend the online course "Learning from Data" by Professor Yaser Abu-Mostafa as a good introduction to kernel-based methods. Specifically, the lectures "Support Vector Machines", "Kernel Methods" and "Radial Basis Functions" are about kernels.
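Incidentally, the polynomial-kernel expansion above is easy to verify numerically; a small sketch (my own addition):

```python
# A small numeric check (my addition) that the polynomial kernel above really
# is a dot product in the 6-dimensional feature space defined by phi.
import numpy as np

def phi(v):
    x1, x2 = v
    return np.array([1, x1**2, x2**2,
                     np.sqrt(2)*x1, np.sqrt(2)*x2, np.sqrt(2)*x1*x2])

def k(x, y):
    return (1 + x @ y) ** 2

x = np.array([0.3, -1.2])
y = np.array([2.0, 0.7])
print(k(x, y), phi(x) @ phi(y))  # both print 0.5776: the two agree
```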
810
How to intuitively explain what a kernel is?
A visual example to help intuition. Consider a dataset where the yellow and blue points are clearly not linearly separable in two dimensions (the accompanying plot is not reproduced here). If we could find a higher-dimensional space in which these points were linearly separable, then we could do the following:
Map the original features to the higher, transformed space (feature mapping).
Perform linear SVM in this higher space.
Obtain a set of weights corresponding to the decision boundary hyperplane.
Map this hyperplane back into the original 2D space to obtain a non-linear decision boundary.

There are many higher-dimensional spaces in which these points are linearly separable. Here is one example:
$$ (x_1, x_2) \rightarrow (z_1, z_2, z_3)$$ $$ z_1 = \sqrt{2}x_1x_2, \ \ z_2 = x_1^2, \ \ z_3 = x_2^2$$

This is where the kernel trick comes into play. Quoting the great answer above:

Suppose we have a mapping $\varphi \, : \, \mathbb R^n \to \mathbb R^m$ that brings our vectors in $\mathbb R^n$ to some feature space $\mathbb R^m$. Then the dot product of $\mathbf x$ and $\mathbf y$ in this space is $\varphi(\mathbf x)^T \varphi(\mathbf y)$. A kernel is a function $k$ that corresponds to this dot product, i.e. $k(\mathbf x, \mathbf y) = \varphi(\mathbf x)^T \varphi(\mathbf y)$.

If we could find a kernel function that was equivalent to the above feature map, then we could plug the kernel function into the linear SVM and perform the calculations very efficiently.

Polynomial kernel. It turns out that the above feature map corresponds to the well-known polynomial kernel $K(\mathbf{x},\mathbf{x'}) = (\mathbf{x}^T\mathbf{x'})^d$. Let $d = 2$ and $\mathbf{x} = (x_1, x_2)^T$; we get
\begin{aligned} k\left(\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \begin{pmatrix} x_1' \\ x_2' \end{pmatrix} \right) & = (x_1x_1' + x_2x_2')^2 \\ & = 2x_1x_1'x_2x_2' + (x_1x_1')^2 + (x_2x_2')^2 \\ & = (\sqrt{2}x_1x_2, \ x_1^2, \ x_2^2) \ \begin{pmatrix} \sqrt{2}x_1'x_2' \\ x_1'^2 \\ x_2'^2 \end{pmatrix} \end{aligned}
$$ k\left(\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \begin{pmatrix} x_1' \\ x_2' \end{pmatrix} \right) = \phi(\mathbf{x})^T \phi(\mathbf{x'})$$ $$ \phi\left(\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}\right) =\begin{pmatrix} \sqrt{2}x_1x_2 \\ x_1^2 \\ x_2^2 \end{pmatrix}$$

Visualizing the feature map and the resulting boundary line (plots not reproduced here): the left-hand side plot shows the points plotted in the transformed space together with the SVM linear boundary hyperplane, and the right-hand side plot shows the result in the original 2-D space.

Source (full post and python code): https://disi.unitn.it/~passerini/teaching/2014-2015/MachineLearning/slides/17_kernel_machines/handouts.pdf
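The same recipe can be sketched with scikit-learn (my own toy example, not the code from the linked post): with coef0 = 0, gamma = 1 and degree = 2, SVC's polynomial kernel is exactly the $(\mathbf{x}^T\mathbf{x'})^2$ kernel above, and it separates ring-shaped data that a linear SVM cannot.

```python
# A sketch of the recipe above with scikit-learn (toy data, not the linked code):
# data that a linear SVM cannot separate in 2-D becomes separable once the
# degree-2 polynomial kernel is used.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

linear = SVC(kernel="linear").fit(X, y)
poly2 = SVC(kernel="poly", degree=2, coef0=0, gamma=1).fit(X, y)

print(linear.score(X, y))  # roughly chance level (~0.5)
print(poly2.score(X, y))   # ~1.0 on this toy data
```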
811
How to intuitively explain what a kernel is?
A very simple and intuitive way of thinking about kernels (at least for SVMs) is as a similarity function. Given two objects, the kernel outputs some similarity score. The objects can be anything, from two integers or two real-valued vectors to trees, provided that the kernel function knows how to compare them.

The arguably simplest example is the linear kernel, also called the dot product: given two vectors, the similarity is the length of the projection of one vector onto the other. Another interesting kernel example is the Gaussian kernel: given two vectors, the similarity diminishes with distance at a rate governed by the radius $\sigma$; the distance between two objects is "reweighted" by this radius parameter.

The success of learning with kernels (again, at least for SVMs) depends very strongly on the choice of kernel. You can see a kernel as a compact representation of the knowledge about your classification problem. It is very often problem specific.

I would not call a kernel a decision function, since the kernel is used inside the decision function. Given a data point to classify, the decision function makes use of the kernel by comparing that data point to a number of support vectors weighted by the learned parameters $\alpha$. The support vectors are in the domain of that data point and, along with the learned parameters $\alpha$, are found by the learning algorithm.
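A tiny sketch of the "similarity that diminishes with distance" point (my own addition): the Gaussian kernel value drops from 1 towards 0 as the two vectors move apart, at a rate set by $\sigma$.

```python
# A tiny sketch (my addition) of the "kernel as similarity" view: the Gaussian
# kernel value shrinks as two vectors move apart, at a rate set by sigma.
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma**2))

x = np.zeros(2)
for d in (0.0, 0.5, 1.0, 2.0):
    print(d, round(gaussian_kernel(x, np.array([d, 0.0])), 4))
# 0.0 1.0   0.5 0.8825   1.0 0.6065   2.0 0.1353
```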
812
How to intuitively explain what a kernel is?
Very simply (but accurately), a kernel is a weighting factor between two sequences of data. This weighting factor can assign more weight to one "data point" at one "time point" than the other "data point", or assign equal weight, or assign more weight to the other "data point", and so on. This way the correlation (dot product) can assign more "importance" to some points than others and thus cope with non-linearities (e.g. non-flat spaces), additional information, data smoothing, and so on.

In still another way, a kernel is a way to change the relative dimensions (or dimension units) of two data sequences in order to cope with the things mentioned above.

In a third way (related to the previous two), a kernel is a way to map or project one data sequence onto the other in a 1-to-1 manner, taking into account given information or criteria (e.g. curved space, missing data, data re-ordering, and so on). So, for example, a given kernel may stretch or shrink or crop or bend one data sequence in order to fit or map 1-to-1 onto the other. A kernel can act like a Procrustes in order to "fit best".
813
How to intuitively explain what a kernel is?
In the top answer to this question, there's a link to a lecture by Prof. Yaser Abu-Mostafa from Caltech, and it gives a very nice intuition of it... so I'll try to explain what I understood, without equations:
a kernel is a function (relatively simple to compute) taking two vectors (living in the X space) and returning a scalar;
that scalar happens in fact to be exactly the dot product of our two vectors mapped to a higher-dimensional space Z;
so the kernel tells you how close two vectors are in that Z space, without paying the (possibly enormous) price of computing their coordinates there;
that's all you need to fit an SVM model! In a regular SVM model, you would have used the dot product in the X space... using the kernel instead is as if you were doing the same thing in the Z space.

Here are some words from the lecture:

Think of it this way. I am a guardian of the Z space. I'm closing the door. Nobody has access to the Z space. You come to me with requests... If you give me an x and ask me, what is the transformation, that's a big demand. I have to hand you a big z. And I may not allow that. But let's say that all I'm willing to give you are inner products. You give me x and x dash, I close the door, do my thing, and come back with a number, which is the inner product between z and z dash, without actually telling you what z and z dash were. That would be a simple operation. And if you can get away with it, then that's a pretty good thing.

Selected parts of the lecture: watch "What do we need from the Z space?" and "The kernel in action".
814
How to intuitively explain what a kernel is?
The kernel is a function that quantifies the similarity between a pair of data points. Mathematically speaking, this similarity can be computed using an inner product, which has been explained beautifully in the above answers. The RBF kernel calculates this similarity between a landmark point and all other data points; how this landmark point is chosen is a different optimization problem altogether. In support vector machines we find the decision boundary in the higher-dimensional space without visiting it, with the help of a kernel. This decision boundary is linear in the higher-dimensional space, but when projected back onto the original feature space it is curved.
815
How to intuitively explain what a kernel is?
As was said in the comments by @ttnphns: "Dot product and projection are not quite identical." The kernel trick is utilized in computations involving the dot products (x, y). (The word "kernel" by itself, programmatically, is also the core of a computer's OS that controls and distributes time and power between CPU, RAM, devices, etc.; the projection is the built feature map, as I understand it.)

In ML (SVM), the kernel replaces the Euclidean inner product, which is the measure of similarity/dissimilarity. The SVM maximizes the distance between the closest points while learning the decision boundary, and that distance is the Euclidean distance here in Euclidean space. Since the space can be multidimensional, linear models don't always help (linear regression & PCA), so non-linearity (e.g. with cos in geometrical space) is added to the calculations to add geometrically weighted transformations. E.g. compare the linear kernel vs. the cosine kernel inner product: the latter is called cosine similarity, because Euclidean (L2) normalization projects the vectors onto the unit sphere, and their dot product is then the cosine of the angle between the points denoted by the vectors (or, mathematically and statistically, see here for the cosine kernel defined by the pdf f(x) = (π/4)cos(xπ/2)).

Important: see here for types of SVM kernels. Think of kernels as defined filters, each for their own specific use cases:
1) Polynomial kernels (used for image processing)
2) Gaussian kernel (when there is no prior knowledge of the data)
3) Gaussian radial basis function (same as 2)
4) Laplace RBF kernel (recommended for larger training sets, more than a million)
5) Hyperbolic tangent kernel (neural-network-based kernel)
6) Sigmoid kernel (proxy for a neural network)
7) ANOVA radial basis kernel (for regression problems)
Be careful: the space can be non-Euclidean or pseudo-Euclidean, etc.; your choice depends on the objectives of your research task, I think.

P.S. So, in defining a kernel (more precisely, a certain kernel trick) to use in an algorithm, you simply define a certain type of decomposition of matrices and a type of approximation for the learning. Since this essentially determines the working load for CPU & RAM (the learning process being rather resource consuming), the parameter is simply called "kernel". (I think the name really just indicates the direction of resource use for a certain type of derivation/approximation.) Some answers here are helpful to distinguish the OS kernel from the ML kernel: terms from different fields, but I think meaningful for understanding how to calculate huge matrices rapidly (asynchronously), without waiting in a queue in an iterative process (I don't know how the operating system copes with recursion to improve the memory/speed balance).

P.P.S. You can see here a number of approaches where kernels are useful. P.P.P.S. By the way, when analysing data in time, a weighted geometric mean is often used.
816
What's the difference between variance and standard deviation?
The standard deviation is the square root of the variance. The standard deviation is expressed in the same units as the mean is, whereas the variance is expressed in squared units, but for looking at a distribution, you can use either just so long as you are clear about what you are using. For example, a Normal distribution with mean = 10 and sd = 3 is exactly the same thing as a Normal distribution with mean = 10 and variance = 9.
817
What's the difference between variance and standard deviation?
You don't need both. They each have different purposes. The SD is usually more useful to describe the variability of the data while the variance is usually much more useful mathematically. For example, the sum of uncorrelated distributions (random variables) also has a variance that is the sum of the variances of those distributions. This wouldn't be true of the SD. On the other hand, the SD has the convenience of being expressed in units of the original variable.
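A quick simulation of the additivity property (my own addition; the sd values 3 and 4 are arbitrary):

```python
# A quick simulation (my addition) of the additivity property: for independent
# (hence uncorrelated) X and Y, Var(X + Y) = Var(X) + Var(Y), while the
# standard deviations do not simply add.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0, 3, size=1_000_000)  # sd 3 -> variance 9
y = rng.normal(0, 4, size=1_000_000)  # sd 4 -> variance 16

print(np.var(x + y))  # ~25 = 9 + 16
print(np.std(x + y))  # ~5, NOT 3 + 4 = 7
```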
818
What's the difference between variance and standard deviation?
If John refers to independent random variables when he says "unrelated distributions," then his response is correct. However, to answer your question, there are several points that can be added:
The mean and variance are the two parameters that determine a normal distribution.
The Chebyshev inequality bounds the probability that an observed random variable falls more than $k$ standard deviations from the mean (by at most $1/k^2$, for any distribution with finite variance).
The standard deviation is used to normalize statistics for statistical tests (e.g. the known standard deviation is used to normalize a sample mean for the $z$ test that the mean differs from $0$, or the sample standard deviation is used to normalize the sample mean when the true standard deviation is unknown, resulting in the $t$ test).
For a normal distribution, about $68\%$ of the distribution is within $1$ standard deviation of the mean, $95.4\%$ within $2$ standard deviations, and over $99\%$ within $3$ standard deviations.
The margin of error is expressed as a multiple of the standard deviation of the estimate.
Variance and bias are measures of uncertainty in a random quantity. The mean square error of an estimate equals the variance + the squared bias.
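The normal coverage figures, together with the much weaker distribution-free Chebyshev bound, can be checked in a couple of lines (my own addition):

```python
# A short check (my addition) of the normal coverage figures and the much
# weaker Chebyshev lower bound quoted above.
from scipy.stats import norm

for k in (1, 2, 3):
    coverage = norm.cdf(k) - norm.cdf(-k)
    chebyshev = 1 - 1 / k**2  # lower bound valid for ANY finite-variance distribution
    print(k, round(coverage, 4), round(chebyshev, 4))
# 1 0.6827 0.0
# 2 0.9545 0.75
# 3 0.9973 0.8889
```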
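As a small check of the coverage figures and the Chebyshev bound mentioned above, here is an R sketch (the Chebyshev column is the distribution-free lower bound, the normal column is the exact normal coverage):

# normal coverage within k standard deviations vs the Chebyshev lower bound
k <- 1:3
normal_coverage <- pnorm(k) - pnorm(-k)   # 0.683, 0.954, 0.997
chebyshev_bound <- 1 - 1 / k^2            # 0.000, 0.750, 0.889
round(cbind(k, normal_coverage, chebyshev_bound), 3)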
819
What's the difference between variance and standard deviation?
The variance of a data set measures the mathematical dispersion of the data relative to the mean. However, though this value is theoretically correct, it is hard to interpret in a real-world sense because the values used to calculate it were squared. The standard deviation, as the square root of the variance, gives a value that is in the same units as the original values, which makes it much easier to work with and easier to interpret in conjunction with the concept of the normal curve.
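A tiny R illustration of the units point (the height values are made up):

heights_m <- c(1.62, 1.70, 1.75, 1.80, 1.91)  # hypothetical heights in meters
var(heights_m)   # about 0.012, in square meters
sd(heights_m)    # about 0.11, back in meters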
820
What's the difference between variance and standard deviation?
In terms of the distribution they're equivalent (yet obviously not interchangeable), but beware that in terms of estimators they're not: the square root of an estimate of the variance is NOT an (unbiased) estimator of the standard deviation. Only for a moderately large number of samples (and depending on the estimators) do the two approach each other. For small sample sizes you need to know the parametric form of the distribution to convert between the two, which can become slightly circular.
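A quick simulation sketch of this point (normal data with sigma = 1 and n = 5; the exact small-sample bias factor depends on the distribution):

# sqrt of the unbiased variance estimate is a biased estimate of sigma
set.seed(2)
sims <- replicate(2e4, { x <- rnorm(5); c(var(x), sd(x)) })
mean(sims[1, ])   # close to 1: var() is unbiased for sigma^2
mean(sims[2, ])   # about 0.94: sd() underestimates sigma for n = 5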
821
What's the difference between variance and standard deviation?
When calculating the variance, we square the deviations. This means that if the given data (observations) are in meters, the variance is in square meters, which is not a natural way to represent the spread of the deviations. So we take the square root again, which brings us back to the original units; that square root of the variance is nothing but the SD.
822
What's the difference between variance and standard deviation?
In addition to Hassan's response, you need to be careful when interpreting the standard deviation. Some people define it as the mean distance between every observation and the mean, but that is the definition of the mean absolute deviation (MAD), not of the SD, and thus wrong. For a better understanding of both concepts, variance and SD, I highly recommend Taleb's video series on statistics (the first lesson is about the SD).
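A short R sketch of the difference between the mean absolute deviation and the SD (simulated skewed data; in general the SD is at least as large as the mean absolute deviation):

set.seed(3)
x <- rexp(1e5)            # exponential data with theoretical sd = 1
mean(abs(x - mean(x)))    # mean absolute deviation, about 0.74
sd(x)                     # about 1.00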
823
Cohen's kappa in plain English
Introduction
The Kappa statistic (or value) is a metric that compares an Observed Accuracy with an Expected Accuracy (random chance). The kappa statistic is used not only to evaluate a single classifier, but also to evaluate classifiers amongst themselves. In addition, it takes into account random chance (agreement with a random classifier), which generally means it is less misleading than simply using accuracy as a metric (an Observed Accuracy of 80% is a lot less impressive with an Expected Accuracy of 75% versus an Expected Accuracy of 50%). Computation of Observed Accuracy and Expected Accuracy is integral to comprehension of the kappa statistic, and is most easily illustrated through use of a confusion matrix.

Computation
Let's begin with a simple confusion matrix from a simple binary classification of Cats and Dogs:

         Cats   Dogs
  Cats |   10 |    7 |
  Dogs |    5 |    8 |

Assume that a model was built using supervised machine learning on labeled data. This doesn't always have to be the case; the kappa statistic is often used as a measure of reliability between two human raters. Regardless, columns correspond to one "rater" while rows correspond to another "rater". In supervised machine learning, one "rater" reflects ground truth (the actual values of each instance to be classified), obtained from labeled data, and the other "rater" is the machine learning classifier used to perform the classification. Ultimately it doesn't matter which is which to compute the kappa statistic, but for clarity's sake let's say that the columns reflect ground truth and the rows reflect the machine learning classifier's classifications.

From the confusion matrix we can see there are 30 instances total (10 + 7 + 5 + 8 = 30). According to the first column 15 were labeled as Cats (10 + 5 = 15), and according to the second column 15 were labeled as Dogs (7 + 8 = 15). We can also see that the model classified 17 instances as Cats (10 + 7 = 17) and 13 instances as Dogs (5 + 8 = 13).

Observed Accuracy is simply the number of instances that were classified correctly throughout the entire confusion matrix, i.e. the number of instances that were labeled as Cats via ground truth and then classified as Cats by the machine learning classifier, or labeled as Dogs via ground truth and then classified as Dogs by the machine learning classifier. To calculate Observed Accuracy, we simply add the number of instances that the machine learning classifier agreed with the ground truth label, and divide by the total number of instances. For this confusion matrix, this would be 0.6 ((10 + 8) / 30 = 0.6).

Before we get to the equation for the kappa statistic, one more value is needed: the Expected Accuracy. This value is defined as the accuracy that any random classifier would be expected to achieve based on the confusion matrix. The Expected Accuracy is determined by the number of instances of each class (Cats and Dogs) according to the ground truth labels, along with the number of instances the machine learning classifier assigned to each class. To calculate Expected Accuracy for our confusion matrix, first multiply the marginal frequency of Cats for one "rater" by the marginal frequency of Cats for the second "rater", and divide by the total number of instances. The marginal frequency for a certain class by a certain "rater" is just the sum of all instances the "rater" indicated were that class.
In our case, 15 (10 + 5 = 15) instances were labeled as Cats according to ground truth, and 17 (10 + 7 = 17) instances were classified as Cats by the machine learning classifier. This results in a value of 8.5 (15 * 17 / 30 = 8.5). This is then done for the second class as well (and can be repeated for each additional class if there are more than 2). 15 (7 + 8 = 15) instances were labeled as Dogs according to ground truth, and 13 (8 + 5 = 13) instances were classified as Dogs by the machine learning classifier. This results in a value of 6.5 (15 * 13 / 30 = 6.5). The final step is to add all these values together, and finally divide again by the total number of instances, resulting in an Expected Accuracy of 0.5 ((8.5 + 6.5) / 30 = 0.5). In our example, the Expected Accuracy turned out to be 50%, as will always be the case when either "rater" classifies each class with the same frequency in a binary classification (both Cats and Dogs contained 15 instances according to ground truth labels in our confusion matrix).

The kappa statistic can then be calculated using both the Observed Accuracy (0.60) and the Expected Accuracy (0.50) and the formula:

Kappa = (observed accuracy - expected accuracy) / (1 - expected accuracy)

So, in our case, the kappa statistic equals: (0.60 - 0.50) / (1 - 0.50) = 0.20.

As another example, here is a less balanced confusion matrix and the corresponding calculations:

         Cats   Dogs
  Cats |   22 |    9 |
  Dogs |    7 |   13 |

Ground truth: Cats (29), Dogs (22)
Machine Learning Classifier: Cats (31), Dogs (20)
Total: (51)
Observed Accuracy: ((22 + 13) / 51) = 0.69
Expected Accuracy: ((29 * 31 / 51) + (22 * 20 / 51)) / 51 = 0.51
Kappa: (0.69 - 0.51) / (1 - 0.51) = 0.37

In essence, the kappa statistic is a measure of how closely the instances classified by the machine learning classifier matched the data labeled as ground truth, controlling for the accuracy of a random classifier as measured by the expected accuracy. Not only can this kappa statistic shed light into how the classifier itself performed, the kappa statistic for one model is directly comparable to the kappa statistic for any other model used for the same classification task.

Interpretation
There is not a standardized interpretation of the kappa statistic. According to Wikipedia (citing their paper), Landis and Koch consider 0-0.20 as slight, 0.21-0.40 as fair, 0.41-0.60 as moderate, 0.61-0.80 as substantial, and 0.81-1 as almost perfect. Fleiss considers kappas > 0.75 as excellent, 0.40-0.75 as fair to good, and < 0.40 as poor. It is important to note that both scales are somewhat arbitrary. At least two further considerations should be taken into account when interpreting the kappa statistic. First, the kappa statistic should always be compared with an accompanying confusion matrix if possible to obtain the most accurate interpretation. Consider the following confusion matrix (columns again reflect ground truth, rows the classifier):

         Cats   Dogs
  Cats |   60 |    5 |
  Dogs |  125 | 5000 |

The kappa statistic is 0.47, well above the threshold for moderate according to Landis and Koch and fair-good for Fleiss. However, notice the hit rate for classifying Cats: less than a third of all Cats (60 of the 185 in the first column) were actually classified as Cats; the rest were all classified as Dogs. If we care more about classifying Cats correctly (say, we are allergic to Cats but not to Dogs, and all we care about is not succumbing to allergies as opposed to maximizing the number of animals we take in), then a classifier with a lower kappa but a better rate of classifying Cats might be more ideal.
Second, acceptable kappa statistic values vary with the context. For instance, in many inter-rater reliability studies with easily observable behaviors, kappa statistic values below 0.70 might be considered low. However, in studies using machine learning to explore unobservable phenomena like cognitive states such as day dreaming, kappa statistic values above 0.40 might be considered exceptional.

So, in answer to your question about a 0.40 kappa, it depends. If nothing else, it means that the classifier achieved a rate of classification 2/5 of the way between whatever the expected accuracy was and 100% accuracy. If expected accuracy was 80%, that means that the classifier performed 40% (because kappa is 0.4) of 20% (because this is the distance between 80% and 100%) above 80% (because this is a kappa of 0, or random chance), or 88%. So, in that case, each increase in kappa of 0.10 indicates a 2% increase in classification accuracy. If expected accuracy was instead 50%, a kappa of 0.4 would mean that the classifier performed with an accuracy that is 40% (kappa of 0.4) of 50% (distance between 50% and 100%) greater than 50% (because this is a kappa of 0, or random chance), or 70%. Again, in this case that means that an increase in kappa of 0.1 indicates a 5% increase in classification accuracy.

Classifiers built and evaluated on data sets of different class distributions can be compared more reliably through the kappa statistic (as opposed to merely using accuracy) because of this scaling in relation to expected accuracy. It gives a better indicator of how the classifier performed across all instances, because a simple accuracy can be skewed if the class distribution is similarly skewed. As mentioned earlier, an accuracy of 80% is a lot more impressive with an expected accuracy of 50% versus an expected accuracy of 75%. Expected accuracy as detailed above is susceptible to skewed class distributions, so by controlling for the expected accuracy through the kappa statistic, we allow models of different class distributions to be more easily compared.

That's about all I have. If anyone notices anything left out, anything incorrect, or if anything is still unclear, please let me know so I can improve the answer.

References I found helpful:
Includes a succinct description of kappa: http://standardwisdom.com/softwarejournal/2011/12/confusion-matrix-another-single-value-metric-kappa-statistic/
Includes a description of calculating expected accuracy: http://epiville.ccnmtl.columbia.edu/popup/how_to_calculate_kappa.html
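The computation above is easy to script. Here is a minimal R sketch (the function name is mine); it reproduces the worked examples, with rows as the classifier and columns as ground truth:

kappa_from_cm <- function(cm) {
  n  <- sum(cm)
  po <- sum(diag(cm)) / n                      # observed accuracy
  pe <- sum(rowSums(cm) * colSums(cm)) / n^2   # expected accuracy
  (po - pe) / (1 - pe)
}
kappa_from_cm(matrix(c(10, 7, 5, 8), nrow = 2, byrow = TRUE))   # first example: 0.20
kappa_from_cm(matrix(c(22, 9, 7, 13), nrow = 2, byrow = TRUE))  # second example: 0.35 (the 0.37 above comes from rounding po and pe first)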
824
Cohen's kappa in plain English
rbx has a great answer. However, it is a little bit verbose. Here is my summary and intuition behind the Kappa metric. Kappa is an important measure of classifier performance, especially on imbalanced data sets. For example, in credit card fraud detection, the marginal distribution of the response variable is highly skewed, so using accuracy as a measure will not be useful. In other words, in the fraud detection example, 99.9% of the transactions will be non-fraud transactions. We can have a trivial classifier that always says non-fraud for every transaction, and we will still have 99.9% accuracy. On the other hand, Kappa will "fix" this problem by considering the marginal distribution of the response variable. Using Kappa, the aforementioned trivial classifier will have a very small Kappa. In plain English, it measures how much better the classifier is, compared to guessing with the target distribution.
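A hypothetical fraud-style example of this in R (the counts are made up): a classifier that always predicts non-fraud is 99.9% accurate, yet its kappa is exactly 0.

cm <- matrix(c(999, 1,    # predicted non-fraud: 999 true non-fraud, 1 true fraud
                 0, 0),   # predicted fraud: never happens
             nrow = 2, byrow = TRUE)
po <- sum(diag(cm)) / sum(cm)                     # observed accuracy = 0.999
pe <- sum(rowSums(cm) * colSums(cm)) / sum(cm)^2  # expected accuracy = 0.999
(po - pe) / (1 - pe)                              # kappa = 0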
825
Cohen's kappa in plain English
What value of Cohen's kappa counts as strong depends on several factors; for example, the number of categories or codes that are used affects kappa$^1$, as does the probability that each code will be populated. "For example, given equiprobable codes and observers who are 85% accurate:

value of kappa   number of codes
    0.49               2
    0.60               3
    0.66               5
    0.69              10"

Now, what if we do not have equiprobable codes but have different "base rates"? For two codes, see the kappa plots in Bruckner et al. (figure not reproduced here). "...Nonetheless (continuing the Wikipedia quote), magnitude guidelines have appeared in the literature. Perhaps the first was Landis and Koch, who characterized values <0 as indicating no agreement, 0.00–0.20 as slight, 0.21–0.40 as fair, 0.41–0.60 as moderate, 0.61–0.80 as substantial, and 0.81–1 as almost perfect agreement. This set of guidelines is however by no means universally accepted; Landis and Koch supplied no evidence to support it, basing it instead on personal opinion. It has been noted that these guidelines may be more harmful than helpful. Fleiss's equally arbitrary guidelines characterize kappas over 0.75 as excellent, 0.40 to 0.75 as fair to good, and < 0.40 as poor." (end Wikipedia quote)

For a (hard to find) upgrade of the FalliObs Windows program, which accounts for the number of codes and was originally offered by Bakeman et al.$^1$, follow the link to ComKappa3. The program description$^2$ relates that the standard error of kappa can be estimated, allowing the obtained kappa to be tested for significance against a null distribution (Bakeman & Gottman, 1997; Fleiss, Cohen, & Everitt, 1969). For further reading on other kappa measures see ANALYSIS OF BEHAVIORAL STREAMS. Also see Using Cohen's kappa statistic for evaluating a binary classifier for a similar question.

1 Bakeman, R.; Quera, V.; McArthur, D.; Robinson, B. F. (1997). "Detecting sequential patterns and determining their reliability with fallible observers". Psychological Methods, 2, 357–370. doi:10.1037/1082-989X.2.4.357
2 Robinson, B. F., & Bakeman, R. (1998). ComKappa: A Windows 95 program for calculating kappa and related statistics. Behavior Research Methods, 30, 731–732.
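The quoted table can be reproduced analytically under simple assumptions: equiprobable codes and two observers who are each 85% accurate, err independently, and spread their errors uniformly over the remaining codes. A short R sketch under those assumptions:

kappa_equiprobable <- function(q, acc = 0.85) {
  po <- acc^2 + (1 - acc)^2 / (q - 1)  # agree when both right, or both wrong on the same code
  pe <- 1 / q                          # chance agreement with uniform marginals
  (po - pe) / (1 - pe)
}
round(sapply(c(2, 3, 5, 10), kappa_equiprobable), 2)  # 0.49 0.60 0.66 0.69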
826
Cohen's kappa in plain English
To answer your question (in plain English :-) ): How does Kappa help in evaluating the prediction performance of classifiers? What does it tell us? You should consider the kappa as a measure of agreement between 2 individuals, such that the result can be interpreted as:

Poor agreement = 0.20 or less
Fair agreement = 0.20 to 0.40
Moderate agreement = 0.40 to 0.60
Good agreement = 0.60 to 0.80
Very good agreement = 0.80 to 1.00
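If you want to attach these labels to computed kappa values programmatically, here is one possible R helper (the cut-offs are simply the ones listed above):

kappa_label <- function(k) {
  cut(k, breaks = c(-Inf, 0.20, 0.40, 0.60, 0.80, 1.00),
      labels = c("Poor", "Fair", "Moderate", "Good", "Very good"))
}
kappa_label(c(0.15, 0.45, 0.83))  # Poor, Moderate, Very good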
827
When is it ok to remove the intercept in a linear regression model?
The shortest answer: never, unless you are sure that your linear approximation of the data generating process (linear regression model), for theoretical or any other reasons, is forced to go through the origin. If not, the other regression parameters will be biased even if the intercept is statistically insignificant (strange but it is so; consult Brooks' Introductory Econometrics for instance). Finally, as I often explain to my students, by leaving in the intercept term you ensure that the residual term is zero-mean. For your two-models case we need more context. It may happen that a linear model is not suitable here. For example, you need to log transform first if the model is multiplicative. With exponentially growing processes it may occasionally happen that $R^2$ for the model without the intercept is "much" higher. Screen the data, test the model with the RESET test or any other linear specification test; this may help to see if my guess is true. And, when building models, the highest $R^2$ is one of the last statistical properties I really worry about, but it is nice to present to people who are not so familiar with econometrics (there are many dirty tricks to make the coefficient of determination close to 1 :)).
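A simulated R sketch of the bias this answer warns about (the true model here is mine: y = 10 + 2x + noise, so the true intercept is far from zero):

set.seed(4)
x <- runif(200, 1, 10)
y <- 10 + 2 * x + rnorm(200, sd = 2)
coef(lm(y ~ x))      # intercept near 10, slope near 2
coef(lm(y ~ 0 + x))  # slope well above 2: it absorbs the omitted intercept
summary(lm(y ~ 0 + x))$r.squared  # very high, but not comparable to the intercept model's R^2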
828
When is it ok to remove the intercept in a linear regression model?
Removing the intercept is a different model, but there are plenty of examples where it is legitimate. Answers so far have already discussed in detail the example where the true intercept is 0. I will focus on a few examples where we may be interested in an atypical model parametrization.

Example 1: The ANOVA-style Model. For categorical variables, we typically create binary vectors encoding group membership. The standard regression model is parametrized as intercept + k - 1 dummy vectors. The intercept codes the expected value for the "reference" group, or the omitted vector, and the remaining vectors test the difference between each group and the reference. But in some cases, it may be useful to have each group's expected value.

dat <- mtcars
dat$vs <- factor(dat$vs)
## intercept model: vs coefficient becomes difference
lm(mpg ~ vs + hp, data = dat)

Coefficients:
(Intercept)          vs1           hp
   26.96300      2.57622     -0.05453

## no intercept: two vs coefficients,
## conditional expectations for both groups
lm(mpg ~ 0 + vs + hp, data = dat)

Coefficients:
     vs0       vs1        hp
26.96300  29.53922  -0.05453

Example 2: The case of standardized data. In some cases, one may be working with standardized data. In this case, the intercept is 0 by design. I think a classic example of this was old style structural equation models or factor analysis, which operated just on the covariance matrices of data. In the case below, it is probably a good idea to estimate the intercept anyway, if only to drop the additional degree of freedom (which you really should have lost anyway because the mean was estimated), but there are a handful of situations where by construction, means may be 0 (e.g., certain experiments where participants assign ratings, but are constrained to give out equal positives and negatives).

dat <- as.data.frame(scale(mtcars))
## intercept is 0 by design
lm(mpg ~ hp + wt, data = dat)

Coefficients:
(Intercept)           hp           wt
  3.813e-17   -3.615e-01   -6.296e-01

## leaving the intercept out
lm(mpg ~ 0 + hp + wt, data = dat)

Coefficients:
     hp       wt
-0.3615  -0.6296

Example 3: Multivariate Models and Hidden Intercepts. This example is similar to the first in many ways. In this case, the data has been stacked so that two different variables are now in one long vector. A second variable encodes information about whether the response vector, y, belongs to mpg or disp. In this case, to get the separate intercepts for each outcome, you suppress the overall intercept and include both dummy vectors for measure. This is a sort of multivariate analysis. It is not typically done using lm() because you have repeated measures and should probably allow for the nonindependence. However, there are some interesting cases where this is necessary. For example when trying to do a mediation analysis with random effects, to get the full variance covariance matrix, you need both models estimated simultaneously, which can be done by stacking the data and some clever use of dummy vectors.
## stack data for multivariate analysis
dat <- reshape(mtcars, varying = c(1, 3), v.names = "y",
               timevar = "measure", times = c("mpg", "disp"),
               direction = "long")
dat$measure <- factor(dat$measure)

## two regressions with intercepts only
lm(cbind(mpg, disp) ~ 1, data = mtcars)

Coefficients:
             mpg    disp
(Intercept)  20.09  230.72

## using the stacked data, measure is difference
## between outcome means
lm(y ~ measure, data = dat)

Coefficients:
(Intercept)   measurempg
      230.7       -210.6

## separate 'intercept' for each outcome
lm(y ~ 0 + measure, data = dat)

Coefficients:
measuredisp   measurempg
     230.72        20.09

I am not arguing that intercepts should generally be removed, but it is good to be flexible.
829
When is it ok to remove the intercept in a linear regression model?
There are good answers here. Two small things: Regarding a higher $R^2$ when the intercept is dropped, you should read this excellent answer by @cardinal. (In short, statistical software sometimes uses a different definition for $R^2$ when the intercept is forced to 0. So the reported $R^2$ for models with and without an intercept might simply not be comparable.) Several people make the point that you should be certain the intercept must be 0 (for theoretical reasons) before dropping it, and not just that it isn't 'significant'. I think that's right, but it's not the whole story. You also need to know that the true data generating function is perfectly linear throughout the range of $X$ that you are working with and all the way down to 0. Remember that it is always possible that the function is approximately linear within your data, but actually slightly curving. It may be quite reasonable to treat the function as though it were linear within the range of your observations, even if it isn't perfectly so, but if it isn't and you drop the intercept you will end up with a worse approximation to the underlying function even if the true intercept is 0.
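To make the first point concrete, here is a small R sketch (simulated data, names mine) showing that for a zero-intercept fit R reports R^2 relative to an uncentered total sum of squares, so it is not comparable to the usual definition:

set.seed(5)
x <- runif(100, 1, 10)
y <- 10 + 2 * x + rnorm(100, sd = 2)
fit0 <- lm(y ~ 0 + x)
summary(fit0)$r.squared                        # uncentered definition: 1 - RSS / sum(y^2), looks very high
1 - sum(resid(fit0)^2) / sum((y - mean(y))^2)  # centered definition: much lower here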
830
When is it ok to remove the intercept in a linear regression model?
You shouldn't drop the intercept, regardless of whether you are ever likely to see all the explanatory variables taking values of zero at once. If you remove the intercept then the other estimates all become biased. Even if the true value of the intercept is approximately zero (which is all you can conclude from your data), you are messing around with the slopes if you force it to be exactly zero. UNLESS you are measuring something with a very clear and obvious physical model that demands the intercept be zero (e.g. you have the height, width and length of a rectangular prism as explanatory variables and the response variable is its volume, with some measurement error). If your response variable is the value of the house, you definitely need to leave the intercept in.
831
When is it ok to remove the intercept in a linear regression model?
OK, so you've changed the question a LOT. You can leave out the intercept when you know it's 0. That's it. And no, you can't do it just because it's not significantly different from 0; you have to know it's 0, or your residuals are biased. And, in that case it is 0, so it won't make any difference if you leave it out... therefore, never leave it out. The finding you have with $R^2$ suggests the data are not linear. And, given that you had area as a predictor, that particular one is almost certainly not linear. You could transform the predictor to fix that.
832
When is it ok to remove the intercept in a linear regression model?
Most multiple regression models include a constant term (i.e., the intercept), since this ensures that the model will be unbiased--i.e., the mean of the residuals will be exactly zero. (The coefficients in a regression model are estimated by least squares--i.e., minimizing the mean squared error. Now, the mean squared error is equal to the variance of the errors plus the square of their mean: this is a mathematical identity. Changing the value of the constant in the model changes the mean of the errors but doesn't affect the variance. Hence, if the sum of squared errors is to be minimized, the constant must be chosen such that the mean of the errors is zero.) In a simple regression model, the constant represents the Y-intercept of the regression line, in unstandardized form. In a multiple regression model, the constant represents the value that would be predicted for the dependent variable if all the independent variables were simultaneously equal to zero--a situation which may not physically or economically meaningful. If you are not particularly interested in what would happen if all the independent variables were simultaneously zero, then you normally leave the constant in the model regardless of its statistical significance. In addition to ensuring that the in-sample errors are unbiased, the presence of the constant allows the regression line to "seek its own level" and provide the best fit to data which may only be locally linear. However, in rare cases you may wish to exclude the constant from the model. This is a model-fitting option in the regression procedure in any software package, and it is sometimes referred to as regression through the origin, or RTO for short. Usually, this will be done only if: it is possible to imagine the independent variables all assuming the value zero simultaneously, and you feel that in this case it should logically follow that the dependent variable will also be equal to zero; or else the constant is redundant with the set of independent variables you wish to use. An example of case (1) would be a model in which all variables--dependent and independent--represented first differences of other time series. If you are regressing the first difference of Y on the first difference of X, you are directly predicting changes in Y as a linear function of changes in X, without reference to the current levels of the variables. In this case it might be reasonable (although not required) to assume that Y should be unchanged, on the average, whenever X is unchanged--i.e., that Y should not have an upward or downward trend in the absence of any change in the level of X. An example of case (2) would be a situation in which you wish to use a full set of seasonal indicator variables--e.g., you are using quarterly data, and you wish to include variables Q1, Q2, Q3, and Q4 representing additive seasonal effects. Thus, Q1 might look like 1 0 0 0 1 0 0 0 ..., Q2 would look like 0 1 0 0 0 1 0 0 ..., and so on. You could not use all four of these and a constant in the same model, since Q1+Q2+Q3+Q4 = 1 1 1 1 1 1 1 1 . . . . , which is the same as a constant term. I.e., the five variables Q1, Q2, Q3, Q4, and CONSTANT are not linearly independent: any one of them can be expressed as a linear combination of the other four. A technical prerequisite for fitting a linear regression model is that the independent variables must be linearly independent; otherwise the least-squares coefficients cannot be determined uniquely, and we say the regression "fails." 
A word of warning: R-squared and the F statistic do not have the same meaning in an RTO model as they do in an ordinary regression model, and they are not calculated in the same way by all software. See this article for some caveats. You should not try to compare R-squared between models that do and do not include a constant term, although it is OK to compare the standard error of the regression. Note that the term "independent" is used in (at least) three different ways in regression jargon: any single variable may be called an independent variable if it is being used as a predictor, rather than as the predictee. A group of variables is linearly independent if no one of them can be expressed exactly as a linear combination of the others. A pair of variables is said to be statistically independent if they are not only linearly independent but also utterly uninformative with respect to each other. In a regression model, you want your dependent variable to be statistically dependent on the independent variables, which must be linearly (but not necessarily statistically) independent among themselves.
When is it ok to remove the intercept in a linear regression model?
Most multiple regression models include a constant term (i.e., the intercept), since this ensures that the model will be unbiased--i.e., the mean of the residuals will be exactly zero. (The coefficien
When is it ok to remove the intercept in a linear regression model? Most multiple regression models include a constant term (i.e., the intercept), since this ensures that the model will be unbiased--i.e., the mean of the residuals will be exactly zero. (The coefficients in a regression model are estimated by least squares--i.e., minimizing the mean squared error. Now, the mean squared error is equal to the variance of the errors plus the square of their mean: this is a mathematical identity. Changing the value of the constant in the model changes the mean of the errors but doesn't affect the variance. Hence, if the sum of squared errors is to be minimized, the constant must be chosen such that the mean of the errors is zero.) In a simple regression model, the constant represents the Y-intercept of the regression line, in unstandardized form. In a multiple regression model, the constant represents the value that would be predicted for the dependent variable if all the independent variables were simultaneously equal to zero--a situation which may not physically or economically meaningful. If you are not particularly interested in what would happen if all the independent variables were simultaneously zero, then you normally leave the constant in the model regardless of its statistical significance. In addition to ensuring that the in-sample errors are unbiased, the presence of the constant allows the regression line to "seek its own level" and provide the best fit to data which may only be locally linear. However, in rare cases you may wish to exclude the constant from the model. This is a model-fitting option in the regression procedure in any software package, and it is sometimes referred to as regression through the origin, or RTO for short. Usually, this will be done only if: it is possible to imagine the independent variables all assuming the value zero simultaneously, and you feel that in this case it should logically follow that the dependent variable will also be equal to zero; or else the constant is redundant with the set of independent variables you wish to use. An example of case (1) would be a model in which all variables--dependent and independent--represented first differences of other time series. If you are regressing the first difference of Y on the first difference of X, you are directly predicting changes in Y as a linear function of changes in X, without reference to the current levels of the variables. In this case it might be reasonable (although not required) to assume that Y should be unchanged, on the average, whenever X is unchanged--i.e., that Y should not have an upward or downward trend in the absence of any change in the level of X. An example of case (2) would be a situation in which you wish to use a full set of seasonal indicator variables--e.g., you are using quarterly data, and you wish to include variables Q1, Q2, Q3, and Q4 representing additive seasonal effects. Thus, Q1 might look like 1 0 0 0 1 0 0 0 ..., Q2 would look like 0 1 0 0 0 1 0 0 ..., and so on. You could not use all four of these and a constant in the same model, since Q1+Q2+Q3+Q4 = 1 1 1 1 1 1 1 1 . . . . , which is the same as a constant term. I.e., the five variables Q1, Q2, Q3, Q4, and CONSTANT are not linearly independent: any one of them can be expressed as a linear combination of the other four. 
A technical prerequisite for fitting a linear regression model is that the independent variables must be linearly independent; otherwise the least-squares coefficients cannot be determined uniquely, and we say the regression "fails." A word of warning: R-squared and the F statistic do not have the same meaning in an RTO model as they do in an ordinary regression model, and they are not calculated in the same way by all software. See this article for some caveats. You should not try to compare R-squared between models that do and do not include a constant term, although it is OK to compare the standard error of the regression. Note that the term "independent" is used in (at least) three different ways in regression jargon: any single variable may be called an independent variable if it is being used as a predictor, rather than as the predictee. A group of variables is linearly independent if no one of them can be expressed exactly as a linear combination of the others. A pair of variables is said to be statistically independent if they are not only linearly independent but also utterly uninformative with respect to each other. In a regression model, you want your dependent variable to be statistically dependent on the independent variables, which must be linearly (but not necessarily statistically) independent among themselves.
When is it ok to remove the intercept in a linear regression model? Most multiple regression models include a constant term (i.e., the intercept), since this ensures that the model will be unbiased--i.e., the mean of the residuals will be exactly zero. (The coefficien
833
When is it ok to remove the intercept in a linear regression model?
Short answer: (almost) NEVER. In the linear regression model $$ y = \alpha + \beta x + \epsilon $$, if you set $\alpha=0$, then you say that you KNOW that the expected value of $y$ given $x=0$ is zero. You almost never know that. $R^2$ becomes higher without intercept, not because the model is better, but because the definition of $R^2$ used is another one! $R^2$ is an expression of a comparison of the estimated model with some standard model, expressed as reduction in sum of squares compared to sum of squares with the standard model. In the model with intercept, the comparison sum of squares is around the mean. Without intercept, it is around zero! The last one is usually much higher, so it is easier to get a large reduction in sum of squares. Conclusion: DO NOT LEAVE THE INTERCEPT OUT OF THE MODEL (unless you really, really know what you are doing). Some exceptions: One exception is a regression representing a one-way ANOVA with dummies for ALL the factor levels (usually one is left out), but that is only seemingly an exception, since the constant vector 1 is in the column space of the model matrix $X$. Another is a physical relationship such as $s=vt$, where there is no constant. But even then, if the model is only approximate (speed is not really constant), it might be better to leave in a constant even if it cannot be interpreted. There are also special models which leave out the intercept. One example is paired data, as in twin studies.
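To see the $R^2$ point numerically, here is a hedged R sketch on simulated data (all values are made up): the no-intercept fit reports a much larger $R^2$ simply because summary.lm measures it against the sum of squares around zero rather than around the mean of $y$.
set.seed(1)
x <- runif(50, 10, 20)
y <- 5 + 0.3 * x + rnorm(50)
with_int    <- lm(y ~ x)          # ordinary model
without_int <- lm(y ~ 0 + x)      # forces E(y | x = 0) = 0
summary(with_int)$r.squared       # R^2 relative to variation around mean(y)
summary(without_int)$r.squared    # R^2 relative to variation around zero -- typically much larger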
When is it ok to remove the intercept in a linear regression model?
Short answer: (almost) NEVER. In the linear regression model $$ y = \alpha + \beta x + \epsilon $$, if you set $\alpha=0$, then you say that you KNOW that the expected value of $y$ given $x=0$ is z
When is it ok to remove the intercept in a linear regression model? Short answer: (almost) NEVER. In the linear regression model $$ y = \alpha + \beta x + \epsilon $$, if you set $\alpha=0$, then you say that you KNOW that the expected value of $y$ given $x=0$ is zero. You almost never know that. $R^2$ becomes higher without intercept, not because the model is better, but because the definition of $R^2$ used is another one! $R^2$ is an expression of a comparison of the estimated model with some standard model, expressed as reduction in sum of squares compared to sum of squares with the standard model. In the model with intercept, the comparison sum of squares is around the mean. Without intercept, it is around zero! The last one is usually much higher, so it easier to get a large reduction in sum of squares. Conclusion: DO NOT LEAVE THE INTERCEPT OUT OF THE MODEL (unless you really, really know what you are doing). Some exceptions: One exception is a regression representing a one-way ANOVA with dummies for ALL the factor levels (usually one is left out) (but that is only seemingly an exception, the constant vector 1 is in the column space of the model matrix $X$.) Otherwise, such as physical relationships $s=v t$ where there are no constant. But even then, if the model is only approximate (speed is not really constant), it might be better to leave in a constant even if it cannot be interpreted. There are also special models which leave out the intercept. One example is paired data, twin studies.
When is it ok to remove the intercept in a linear regression model? Short answer: (almost) NEVER. In the linear regression model $$ y = \alpha + \beta x + \epsilon $$, if you set $\alpha=0$, then you say that you KNOW that the expected value of $y$ given $x=0$ is z
834
When is it ok to remove the intercept in a linear regression model?
Full revision of my thoughts. Indeed dropping the intercept will cause a bias problem. Have you considered centering your data so an intercept would have some meaning and avoid explaining how some (unreasonable) values could give negative values? If you adjust all three explanatory variables by subtracting the mean sqrft, mean lotsize and mean bath, then the intercept will now indicate the value (of a house?) with average sqrft, lotsize, and baths. This centering will not change the relative relationship of the independent variables. So, fitting the model on the centered data will still find baths as insignificant. Refit the model without the bath included. You may still get a large p-value for the intercept, but it should be included and you will have a model of the form y=a+b(sqrft)+c(lotsize).
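A minimal R sketch of the centering idea, using simulated house-price-style data (the names sqrft, lotsize and bath mirror the question and the coefficients are invented): after centering the predictors, the slopes are unchanged and the intercept becomes the predicted value at the average predictor values, which is simply the sample mean of the response.
set.seed(2)
n <- 100
sqrft   <- rnorm(n, 2000, 400)
lotsize <- rnorm(n, 9000, 2000)
bath    <- sample(1:4, n, replace = TRUE)
price   <- 50 + 0.12 * sqrft + 0.002 * lotsize + rnorm(n, sd = 20)
fit <- lm(price ~ I(sqrft - mean(sqrft)) + I(lotsize - mean(lotsize)) + I(bath - mean(bath)))
coef(fit)[1]      # intercept = predicted price at average sqrft, lotsize and bath ...
mean(price)       # ... which equals the sample mean of the response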
When is it ok to remove the intercept in a linear regression model?
Full revision of my thoughts. Indeed dropping the intercept will cause a bias problem. Have you considered centering your data so an intercept would have some meaning and avoid explaining how some (u
When is it ok to remove the intercept in a linear regression model? Full revision of my thoughts. Indeed dropping the intercept will cause a bias problem. Have you considered centering your data so an intercept would have some meaning and avoid explaining how some (unreasonable) values could give negative values? If you adjust all three explanatory variables by subtract the mean sqrft, mean lotsize and mean bath, then the intercept will now indicate the value (of a house?) with average sdrft, lotsize, and baths. This centering will not change the relative relationship of the independent variables. So, fitting the model on the centered data will still find baths as insignificant. Refit the model without the bath included. You may still get a large p-value for the intercept, but it should be included and you will have a model of the form y=a+b(sqrft)+c(lotsize).
When is it ok to remove the intercept in a linear regression model? Full revision of my thoughts. Indeed dropping the intercept will cause a bias problem. Have you considered centering your data so an intercept would have some meaning and avoid explaining how some (u
835
When is it ok to remove the intercept in a linear regression model?
I just spent some time answering a similar question posted by someone else, but it was closed. There are some great answers here, but the answer I provide is a bit simpler. It might be more suited to people who have a weak understanding of regression. Q1: How do I interpret the intercept in my model? In regression models, the goal is to minimise the amount of unexplained variance in an outcome variable: y = b0 + b1⋅x + ϵ where y is the predicted value of your outcome measure (e.g., log_blood_hg), b0 is the intercept, b1 is the slope, x is a predictor variable, and ϵ is residual error. The intercept (b0) is the predicted mean value of y when all x = 0. In other words, it's the baseline value of y, before you've used any variables (e.g., species) to further minimise or explain the variance in log_blood_hg. By adding a slope (which estimates how log_blood_hg changes with a one-unit increase in x, e.g., species), we add to what we already know about the outcome variable, which is its baseline value (i.e. intercept), based on change in another variable. Q2: When is it appropriate to include or not include the intercept, especially in regards to the fact that the models give very different results? For simple models like this, it's never really appropriate to drop the intercept. The models give different results when you drop the intercept because rather than grounding the slope in the baseline value of Y, it is forced to go through the origin of y, which is 0. Therefore, the slope gets steeper (i.e. it looks stronger and more significant) because you've forced the line through the origin, not because it does a better job of minimizing the variance in y. In other words, you've artificially created a model which minimizes the variance in y by removing the intercept, or the initial grounding point for your model. There are cases where removing the intercept is appropriate - such as when describing a phenomenon with a 0-intercept. You can read about that here, as well as more reasons why removing an intercept isn't a good idea.
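A quick hedged R sketch of the Q2 point, with simulated data where the true intercept is clearly non-zero (all numbers are illustrative): forcing the line through the origin distorts the slope, because it has to absorb the baseline that the model is no longer allowed to represent.
set.seed(3)
x <- runif(60, 5, 10)
y <- 20 + 1.5 * x + rnorm(60)
coef(lm(y ~ x))        # intercept near 20, slope near the true value 1.5
coef(lm(y ~ 0 + x))    # slope is inflated because it must absorb the ignored baseline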
When is it ok to remove the intercept in a linear regression model?
I just spent some time answering a similar question posted by someone else, but it was closed. There are some great answers here, but the answer I provide is a bit simpler. It might be more suited to
When is it ok to remove the intercept in a linear regression model? I just spent some time answering a similar question posted by someone else, but it was closed. There are some great answers here, but the answer I provide is a bit simpler. It might be more suited to people who have a weak understanding of regression. Q1: How do I interpret the intercept in my model? In regression models, the goal is to minimise the amount of unexplained variance in an outcome variable: y = b0 + b1⋅x + ϵ where y is the predicted value of your outcome measure (e.g., log_blood_hg), b0 is the intercept, b1 is the slope, x is a predictor variable, and ϵ is residual error. The intercept (b0) is the predicted mean value of y when all x = 0. In other words, it's the baseline value of y, before you've used any variables (e.g., species) to further minimise or explain the variance in log_blood_hg. By adding a slope (which estimates how a one-unit increase/decrease in log_blood_hg changes with a one unit increase in x, e.g., species), we add to what we already know about the outcome variable, which is its baseline value (i.e. intercept), based on change in another variable. Q2: When is it appropriate to include or not include the intercept, especially in regards to the fact that the models give very different results? For simple models like this, it's never really appropriate to drop the intercept. The models give different results when you drop the intercept because rather than grounding the slope in the baseline value of Y, it is forced to go through the origin of y, which is 0. Therefore, the slope gets steeper (i.e. more powerful and significant) because you've forced the line through the origin, not because it does a better job of minimizing the variance in y. In other words, you've artificially created a model which minimizes the variance in y by removing the intercept, or the initial grounding point for your model. There are cases where removing the intercept is appropriate - such as when describing a phenomenon with a 0-intercept. You can read about that here, as well as more reasons why removing an intercept isn't a good idea.
When is it ok to remove the intercept in a linear regression model? I just spent some time answering a similar question posted by someone else, but it was closed. There are some great answers here, but the answer I provide is a bit simpler. It might be more suited to
836
When is it ok to remove the intercept in a linear regression model?
This question already has many answers, and basically they relate to the two cases (the first and second point below). But the answers do not make clear why this is the case. While pondering this and thinking about why the intercept would be different from other regressors (why should we never exclude the intercept term when this is not the case for other regressors?), I came to the idea that the intercept is not different. The intercept does have a special role, such as in the computation of the $R^2$ value and in making the mean of the residuals zero, but that does not mean you cannot remove it like you can other regressor terms. It can be ok to remove the intercept when ... When you have categorical variables. In that case the intercept is already in the column space of the regressors. See, for example, this answer to 'Lm adding coefficients for different level of variable'. When you theoretically know that it should be zero. Example: fitting an exponential decay for which the function is $y(t) = y_0 e^{-ct}$. In some way you could see the $y_0$ as an intercept, because it behaves like an intercept when you plot the logarithm of $y$. But this function is different from $y(t) = y_0 e^{-ct} + b$, where this $b$ might be some baseline. Even when it is possible, due to measurement error, that the intercept/baseline $b$ is not zero, one might wish not to include it (see next point). Not only when the intercept is theoretically exactly zero but also when the intercept is small. The variance in the estimate of the intercept might introduce increased variance in the estimates or predictions. A model without the intercept could have better performance. In this way the intercept is like any other regressor variable that we might choose to eliminate based on bad performance in a cross validation. If you do model selection and believe that regressor variables can be eliminated, then the same must be true for the intercept term. Because I am too lazy to prove this rigorously, I made an example in R code that demonstrates it:
set.seed(1)
### this function makes a linear fit, either with or without intercept,
### and returns the squared error of the estimates
sim = function(intercept = TRUE){
  ### the data
  x = -10:10
  u = 0.1 + x
  y = u + rnorm(length(x))
  ### fitting
  if (intercept == TRUE) {
    mod = lm(y ~ 1 + x)
  } else {
    mod = lm(y ~ 0 + x)
  }
  ### return the error in the estimates
  sum((predict(mod) - u)^2)
}
### perform simulations
r1 = replicate(10^4, sim(TRUE))
r0 = replicate(10^4, sim(FALSE))
mean(r1)   ### with intercept:    1.981789
mean(r0)   ### without intercept: 1.206731
Sidenote: Not including the intercept (or any other parameter) just to reduce the variance is a bit crude. Alternatives to not including the intercept could be some form of regularisation. The reason why we say that we should never exclude the intercept is, I guess, not because there is some theoretical reason that gives the intercept an advantage that other regressor terms do not have. Instead, it is because in practice the intercept is non-zero and large enough to be important for the estimates.
When is it ok to remove the intercept in a linear regression model?
This question has already many answers and basically they relate to the two cases (the first and second point below). But the answers make not clear why it is the case. While pondering about this and
When is it ok to remove the intercept in a linear regression model? This question has already many answers and basically they relate to the two cases (the first and second point below). But the answers make not clear why it is the case. While pondering about this and thinking about why the intercept would be different from other regressors (why should we never exclude the intercept term but is this not the case for other regressors?), I came to the idea that the intercept is not different. The intercept does have a special role like computations of the $R^2$ value and making the mean of residuals zero, but that does not make it that you can not remove it like you can with other regressor terms. It can be ok to remove the intercept when ... When you have categorical variables. In that case the intercept is already in the column space of the regressors. Example this answer to 'Lm adding coefficients for different level of variable'. When you theoretically know that it should be zero. Example: fitting an exponential decay for which the function is $y(t) = y_0 e^{-ct}$. In some way you could see the $y_0$ as an intercept, because it behaves like an intercept when you plot the logarithm of $y$. But this function is different from $y(t) = y_0 e^{-ct} + b$ where this $b$ might be some baseline. Even when it is possible, due to measurement error, that the intercept/baseline $b$ is not zero, then one might whish to not include it (see next point). Not only when the intercept is theoretically exactly zero but also when the intercept is small. The variance in the estimate of the intercept might introduce an increased variance in the estimates or predictions. A model without the intercept could have better performance. In this way the intercept is like any other regressor variable that we might choose to eliminate based on some bad performance in a cross validation. If you do model selection and believe that regressor variables can be eliminated, the the same must be true for the intercept term. Because I am too lazy to prove this rigourously, I made an example in R code that demonstrates this set.seed(1) ### this function makes a linear fit ### either with or without intercept ### and it returns the squared error of the estimates sim = function(intercept = TRUE){ ### the data x = -10:10 u = 0.1+x y = u+rnorm(length(x)) ### fitting if (intercept == TRUE) { mod = lm(y ~ 1+x) } else { mod = lm(y ~ 0+x) } ### return the error in the estimates sum((predict(mod)-u)^2) } ### perform simulations r1 = replicate(10^4,sim(TRUE)) r0 = replicate(10^4,sim(FALSE)) ### with intercept 1.981789 mean(r1) ### without intercept 1.206731 mean(r0) Sidenote: Not including the intercept (or any other parameter), just to reduce the variance is a bit rigorous. Alternatives to not including the intercept could be some regularisation. The reason why we say that we should never exclude the intercept is, I guess, not because there is some theoretical reason that should give the intercept some advantage that other regressor terms do not have. But instead, it is because in practice the intercept is non zero and large enough to be important for the estimates.
When is it ok to remove the intercept in a linear regression model? This question has already many answers and basically they relate to the two cases (the first and second point below). But the answers make not clear why it is the case. While pondering about this and
837
Bottom to top explanation of the Mahalanobis distance?
Here is a scatterplot of some multivariate data (in two dimensions): What can we make of it when the axes are left out? Introduce coordinates that are suggested by the data themselves. The origin will be at the centroid of the points (the point of their averages). The first coordinate axis (blue in the next figure) will extend along the "spine" of the points, which (by definition) is any direction in which the variance is the greatest. The second coordinate axis (red in the figure) will extend perpendicularly to the first one. (In more than two dimensions, it will be chosen in that perpendicular direction in which the variance is as large as possible, and so on.) We need a scale. The standard deviation along each axis will do nicely to establish the units along the axes. Remember the 68-95-99.7 rule: about two-thirds (68%) of the points should be within one unit of the origin (along the axis); about 95% should be within two units. That makes it easy to eyeball the correct units. For reference, this figure includes the unit circle in these units: That doesn't really look like a circle, does it? That's because this picture is distorted (as evidenced by the different spacings among the numbers on the two axes). Let's redraw it with the axes in their proper orientations--left to right and bottom to top--and with a unit aspect ratio so that one unit horizontally really does equal one unit vertically: You measure the Mahalanobis distance in this picture rather than in the original. What happened here? We let the data tell us how to construct a coordinate system for making measurements in the scatterplot. That's all it is. Although we had a few choices to make along the way (we could always reverse either or both axes; and in rare situations the directions along the "spines"--the principal directions--are not unique), they do not change the distances in the final plot. Technical comments (Not for grandma, who probably started to lose interest as soon as numbers reappeared on the plots, but to address the remaining questions that were posed.) Unit vectors along the new axes are the eigenvectors (of either the covariance matrix or its inverse). We noted that undistorting the ellipse to make a circle divides the distance along each eigenvector by the standard deviation: the square root of the covariance. Letting $C$ stand for the covariance function, the new (Mahalanobis) distance between two points $x$ and $y$ is the distance from $x$ to $y$ divided by the square root of $C(x-y, x-y)$. The corresponding algebraic operations, thinking now of $C$ in terms of its representation as a matrix and $x$ and $y$ in terms of their representations as vectors, are written $\sqrt{(x-y)'C^{-1}(x-y)}$. This works regardless of what basis is used to represent vectors and matrices. In particular, this is the correct formula for the Mahalanobis distance in the original coordinates. The amounts by which the axes are expanded in the last step are the (square roots of the) eigenvalues of the inverse covariance matrix. Equivalently, the axes are shrunk by the (roots of the) eigenvalues of the covariance matrix. Thus, the more the scatter, the more the shrinking needed to convert that ellipse into a circle. Although this procedure always works with any dataset, it looks this nice (the classical football-shaped cloud) for data that are approximately multivariate Normal. 
In other cases, the point of averages might not be a good representation of the center of the data or the "spines" (general trends in the data) will not be identified accurately using variance as a measure of spread. The shifting of the coordinate origin, rotation, and expansion of the axes collectively form an affine transformation. Apart from that initial shift, this is a change of basis from the original one (using unit vectors pointing in the positive coordinate directions) to the new one (using a choice of unit eigenvectors). There is a strong connection with Principal Components Analysis (PCA). That alone goes a long way towards explaining the "where does it come from" and "why" questions--if you weren't already convinced by the elegance and utility of letting the data determine the coordinates you use to describe them and measure their differences. For multivariate Normal distributions (where we can carry out the same construction using properties of the probability density instead of the analogous properties of the point cloud), the Mahalanobis distance (to the new origin) appears in place of the "$x$" in the expression $\exp(-\frac{1}{2} x^2)$ that characterizes the probability density of the standard Normal distribution. Thus, in the new coordinates, a multivariate Normal distribution looks standard Normal when projected onto any line through the origin. In particular, it is standard Normal in each of the new coordinates. From this point of view, the only substantial sense in which multivariate Normal distributions differ among one another is in terms of how many dimensions they use. (Note that this number of dimensions may be, and sometimes is, less than the nominal number of dimensions.)
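A hedged R sketch of the construction described above, on simulated bivariate data (the covariance matrix is made up): rotate to the principal axes, rescale each axis by its standard deviation, then measure ordinary Euclidean distance. The result matches R's built-in mahalanobis() function, which returns squared distances.
set.seed(4)
library(MASS)
X  <- mvrnorm(500, mu = c(0, 0), Sigma = matrix(c(4, 3, 3, 9), 2))
mu <- colMeans(X)
S  <- cov(X)
e  <- eigen(S)                                    # eigenvectors = new axes, eigenvalues = variances along them
Z  <- sweep(sweep(X, 2, mu) %*% e$vectors, 2, sqrt(e$values), "/")
d_by_hand <- sqrt(rowSums(Z^2))                   # plain Euclidean distance in the new coordinates
d_builtin <- sqrt(mahalanobis(X, center = mu, cov = S))
all.equal(d_by_hand, d_builtin)                   # TRUE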
Bottom to top explanation of the Mahalanobis distance?
Here is a scatterplot of some multivariate data (in two dimensions): What can we make of it when the axes are left out? Introduce coordinates that are suggested by the data themselves. The origin wi
Bottom to top explanation of the Mahalanobis distance? Here is a scatterplot of some multivariate data (in two dimensions): What can we make of it when the axes are left out? Introduce coordinates that are suggested by the data themselves. The origin will be at the centroid of the points (the point of their averages). The first coordinate axis (blue in the next figure) will extend along the "spine" of the points, which (by definition) is any direction in which the variance is the greatest. The second coordinate axis (red in the figure) will extend perpendicularly to the first one. (In more than two dimensions, it will be chosen in that perpendicular direction in which the variance is as large as possible, and so on.) We need a scale. The standard deviation along each axis will do nicely to establish the units along the axes. Remember the 68-95-99.7 rule: about two-thirds (68%) of the points should be within one unit of the origin (along the axis); about 95% should be within two units. That makes it easy to eyeball the correct units. For reference, this figure includes the unit circle in these units: That doesn't really look like a circle, does it? That's because this picture is distorted (as evidenced by the different spacings among the numbers on the two axes). Let's redraw it with the axes in their proper orientations--left to right and bottom to top--and with a unit aspect ratio so that one unit horizontally really does equal one unit vertically: You measure the Mahalanobis distance in this picture rather than in the original. What happened here? We let the data tell us how to construct a coordinate system for making measurements in the scatterplot. That's all it is. Although we had a few choices to make along the way (we could always reverse either or both axes; and in rare situations the directions along the "spines"--the principal directions--are not unique), they do not change the distances in the final plot. Technical comments (Not for grandma, who probably started to lose interest as soon as numbers reappeared on the plots, but to address the remaining questions that were posed.) Unit vectors along the new axes are the eigenvectors (of either the covariance matrix or its inverse). We noted that undistorting the ellipse to make a circle divides the distance along each eigenvector by the standard deviation: the square root of the covariance. Letting $C$ stand for the covariance function, the new (Mahalanobis) distance between two points $x$ and $y$ is the distance from $x$ to $y$ divided by the square root of $C(x-y, x-y)$. The corresponding algebraic operations, thinking now of $C$ in terms of its representation as a matrix and $x$ and $y$ in terms of their representations as vectors, are written $\sqrt{(x-y)'C^{-1}(x-y)}$. This works regardless of what basis is used to represent vectors and matrices. In particular, this is the correct formula for the Mahalanobis distance in the original coordinates. The amounts by which the axes are expanded in the last step are the (square roots of the) eigenvalues of the inverse covariance matrix. Equivalently, the axes are shrunk by the (roots of the) eigenvalues of the covariance matrix. Thus, the more the scatter, the more the shrinking needed to convert that ellipse into a circle. Although this procedure always works with any dataset, it looks this nice (the classical football-shaped cloud) for data that are approximately multivariate Normal. 
In other cases, the point of averages might not be a good representation of the center of the data or the "spines" (general trends in the data) will not be identified accurately using variance as a measure of spread. The shifting of the coordinate origin, rotation, and expansion of the axes collectively form an affine transformation. Apart from that initial shift, this is a change of basis from the original one (using unit vectors pointing in the positive coordinate directions) to the new one (using a choice of unit eigenvectors). There is a strong connection with Principal Components Analysis (PCA). That alone goes a long way towards explaining the "where does it come from" and "why" questions--if you weren't already convinced by the elegance and utility of letting the data determine the coordinates you use to describe them and measure their differences. For multivariate Normal distributions (where we can carry out the same construction using properties of the probability density instead of the analogous properties of the point cloud), the Mahalanobis distance (to the new origin) appears in place of the "$x$" in the expression $\exp(-\frac{1}{2} x^2)$ that characterizes the probability density of the standard Normal distribution. Thus, in the new coordinates, a multivariate Normal distribution looks standard Normal when projected onto any line through the origin. In particular, it is standard Normal in each of the new coordinates. From this point of view, the only substantial sense in which multivariate Normal distributions differ among one another is in terms of how many dimensions they use. (Note that this number of dimensions may be, and sometimes is, less than the nominal number of dimensions.)
Bottom to top explanation of the Mahalanobis distance? Here is a scatterplot of some multivariate data (in two dimensions): What can we make of it when the axes are left out? Introduce coordinates that are suggested by the data themselves. The origin wi
838
Bottom to top explanation of the Mahalanobis distance?
My grandma cooks. Yours might too. Cooking is a delicious way to teach statistics. Pumpkin Habanero cookies are awesome! Think about how wonderful cinnamon and ginger can be in Christmas treats, then realize how hot they are on their own. The ingredients are: 3/4 c Pumpkin, canned 3/4 c Sugar, brown, light 1/2 c Sour cream 1 Eggs; beaten 1 ts Vanilla 1 c Raisins, seedless 1/2 c Walnuts; chopped 1 1/2 c Flour, all-purpose 1 ts Cinnamon 1/2 ts Ginger 1/2 ts Baking soda 1/2 ts Salt 1/2 ts Chiles, habanero, ground 1/4 ts Allspice 1/4 ts Nutmeg Imagine your coordinate axes for your domain being the ingredient volumes. Sugar. Flour. Salt. Baking Soda. Variation along those directions, all else being equal, doesn't have nearly the impact to the flavor quality as variation in count of habanero peppers. A 10% change in flour or butter is going to make it less great, but not killer. Adding just a small amount more habanero will knock you over a flavor cliff from addictive-dessert to testosterone based pain-contest. Mahalanobis isn't as much a distance in "ingredient volumes" as it is distance away from "best taste". The really "potent" ingredients, ones very sensitive to variation, are the ones you must most carefully control. If you think about any Gaussian distribution vs. the Standard Normal distribution, what is the difference? Center and scale based on central tendency (mean) and variation tendency (standard deviation). One is the coordinate transform of the other. Mahalanobis is that transform. It shows you what the world looks like if your distribution of interest was re-cast as a standard normal instead of a Gaussian.
Bottom to top explanation of the Mahalanobis distance?
My grandma cooks. Yours might too. Cooking is a delicious way to teach statistics. Pumpkin Habanero cookies are awesome! Think about how wonderful cinnamon and ginger can be in Christmas treats, th
Bottom to top explanation of the Mahalanobis distance? My grandma cooks. Yours might too. Cooking is a delicious way to teach statistics. Pumpkin Habanero cookies are awesome! Think about how wonderful cinnamon and ginger can be in Christmas treats, then realize how hot they are on their own. The ingredients are: 3/4 c Pumpkin, canned 3/4 c Sugar, brown, light 1/2 c Sour cream 1 Eggs; beaten 1 ts Vanilla 1 c Raisins, seedless 1/2 c Walnuts; chopped 1 1/2 c Flour, all-purpose 1 ts Cinnamon 1/2 ts Ginger 1/2 ts Baking soda 1/2 ts Salt 1/2 ts Chiles, habanero, ground 1/4 ts Allspice 1/4 ts Nutmeg Imagine your coordinate axes for your domain being the ingredient volumes. Sugar. Flour. Salt. Baking Soda. Variation along those directions, all else being equal, doesn't have nearly the impact to the flavor quality as variation in count of habanero peppers. A 10% change in flour or butter is going to make it less great, but not killer. Adding just a small amount more habanero will knock you over a flavor cliff from addictive-dessert to testosterone based pain-contest. Mahalanobis isn't as much a distance in "ingredient volumes" as it is distance away from "best taste". The really "potent" ingredients, ones very sensitive to variation, are the ones you must most carefully control. If you think about any Gaussian distribution vs. the Standard Normal distribution, what is the difference? Center and scale based on central tendency (mean) and variation tendency (standard deviation). One is the coordinate transform of the other. Mahalanobis is that transform. It shows you what the world looks like if your distribution of interest was re-cast as a standard normal instead of a Gaussian.
Bottom to top explanation of the Mahalanobis distance? My grandma cooks. Yours might too. Cooking is a delicious way to teach statistics. Pumpkin Habanero cookies are awesome! Think about how wonderful cinnamon and ginger can be in Christmas treats, th
839
Bottom to top explanation of the Mahalanobis distance?
I'd like to add a little technical information to Whuber's excellent answer. This information might not interest grandma, but perhaps her grandchild would find it helpful. The following is a bottom-to-top explanation of the relevant linear algebra. Mahalanobis distance is defined as $d(x,y)=\sqrt{(x-y)^T\Sigma^{-1}(x-y)}$, where $\Sigma$ is an estimate of the covariance matrix for some data; this implies it is symmetric. If the columns used to estimate $\Sigma$ are not linearly dependent, $\Sigma$ is positive definite. Symmetric matrices are diagonalizable and their eigenvalues and eigenvectors are real. PD matrices have eigenvalues which are all positive. The eigenvectors can be chosen to have unit length, and are orthogonal (i.e. orthonormal) so we can write $\Sigma=QDQ^T$ and $\Sigma^{-1}=QD^{-\frac{1}{2}}D^{-\frac{1}{2}}Q^T$. Plugging that into the distance definition, $$\begin{align} d(x,y) &= \sqrt{\left[(x-y)^TQ\right]D^{-\frac{1}{2}}D^{-\frac{1}{2}}\left[Q^T(x-y)\right]} \\ &=\sqrt{z^Tz} \end{align} $$ The products in the square brackets are transposes of each other, and the effect of multiplication by $Q^T$ is rotating the vector $(x-y)$ into an orthogonal basis. $D^{-\frac{1}{2}}$ is diagonal, and its entries are precisely the inverse standard deviations of each feature in the orthogonal space.${}^*$ That is, $D^{-1}$ is a precision matrix (inverse covariance). The matrix is diagonal because the data are in an orthogonal basis. The effect is to transform what Whuber calls a rotated ellipse into a circle by "flattening" its axes. Clearly $z^Tz$ is measured in squared units, so taking the square root returns the distance to the original units. ${}^*$ We have data $X$ with observations stored in $n$ rows and features in $p$ columns and all column means are 0. Rotating it into an orthogonal basis is done via $\tilde X = XQ$. The estimator of the covariance matrix for the orthogonal data $\tilde X$ is given by $$ \begin{align} \tilde \Sigma &= \frac{1}{n-1} \tilde X^T \tilde X \\ &= \frac{1}{n-1} Q^TX^TXQ \\ &= \frac{1}{n-1} Q^T (n-1)\Sigma Q \\ &= Q^T Q D Q^T Q \\ &= D \end{align}$$ Because $D$ is a covariance matrix, we know that the variances of each feature are on the diagonal. The square root of the variance is the standard deviation.
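Here is a small hedged R sketch of that algebra for a single pair of points (the covariance matrix and points are made up): computing $z$ via the eigendecomposition gives the same distance as plugging $\Sigma^{-1}$ into the definition directly.
S <- matrix(c(4, 3, 3, 9), 2, 2)                      # a made-up covariance estimate Sigma
ed <- eigen(S)                                        # S = Q D Q^T with orthonormal Q
Q <- ed$vectors
x <- c(1, 2); y <- c(3, -1)
z <- diag(1 / sqrt(ed$values)) %*% t(Q) %*% (x - y)   # rotate with Q^T, then rescale by D^(-1/2)
sqrt(sum(z^2))                                        # Mahalanobis distance via the decomposition
sqrt(t(x - y) %*% solve(S) %*% (x - y))               # same value, computed directly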
Bottom to top explanation of the Mahalanobis distance?
I'd like to add a little technical information to Whuber's excellent answer. This information might not interest grandma, but perhaps her grandchild would find it helpful. The following is a bottom-to
Bottom to top explanation of the Mahalanobis distance? I'd like to add a little technical information to Whuber's excellent answer. This information might not interest grandma, but perhaps her grandchild would find it helpful. The following is a bottom-to-top explanation of the relevant linear algebra. Mahalanobis distance is defined as $d(x,y)=\sqrt{(x-y)^T\Sigma^{-1}(x-y)}$, where $\Sigma$ is an estimate of the covariance matrix for some data; this implies it is symmetric. If the columns used to estimate $\Sigma$ are not linearly dependent, $\Sigma$ is positive definite. Symmetric matrices are diagonalizable and their eigenvalues and eigenvectors are real. PD matrices have eigenvalues which are all positive. The eigenvectors can be chosen to have unit length, and are orthogonal (i.e. orthonormal) so we can write $\Sigma=Q^TDQ$ and $\Sigma^{-1}=QD^{-\frac{1}{2}}D^{-\frac{1}{2}}Q^T$. Plugging that into the distance definition, $$\begin{align} d(x,y) &= \sqrt{\left[(x-y)^TQ\right]D^{-\frac{1}{2}}D^{-\frac{1}{2}}\left[Q^T(x-y)\right]} \\ &=\sqrt{z^Tz} \end{align} $$ The products in the square brackets are transposes, and the effect of multiplication by $Q$ is rotating the vector $(x-y)$ into an orthogonal basis. $D^{-\frac{1}{2}}$ is diagonal, precisely the inverse standard deviation of each feature in the orthogonal space.${}^*$ That is, $D^{-1}$ a precision matrix (inverse covariance). The matrix is diagonal because the data are in an orthogonal basis. The effect is to transform what Whuber calls a rotated ellipse into a circle by "flattening" its axes. Clearly $z^Tz$ is measured in the squared units, so taking the square root returns the distance into the original units. ${}^*$ We have data $X$ with observations stored in $n$ rows and features in $p$ columns and all column means are 0. Rotating it into an orthogonal basis is done via $\tilde X = XQ$. The estimator of the covariance matrix for the orthogonal data $\tilde X$ is given by $$ \begin{align} \tilde \Sigma &= \frac{1}{n-1} \tilde X^T \tilde X \\ &= \frac{1}{n-1} Q^TX^TXQ \\ &= \frac{1}{n-1} Q^T (n-1)\Sigma Q \\ &= Q^T Q D Q^T Q \\ &= D \end{align}$$ Because $D$ is a covariance matrix, we know that the variances of each feature are on the diagonal. The square root of the variance is the standard deviation.
Bottom to top explanation of the Mahalanobis distance? I'd like to add a little technical information to Whuber's excellent answer. This information might not interest grandma, but perhaps her grandchild would find it helpful. The following is a bottom-to
840
Bottom to top explanation of the Mahalanobis distance?
As a starting point, I would see the Mahalanobis distance as a suitable deformation of the usual Euclidean distance $d(x,y)=\sqrt{\langle x-y,x-y \rangle}$ between vectors $x$ and $y$ in $\mathbb R^{n}$. The extra piece of information here is that $x$ and $y$ are actually random vectors, i.e., two different realizations of a vector $X$ of random variables, lying in the background of our discussion. The question that the Mahalanobis distance tries to address is the following: "how can I measure the "dissimilarity" between $x$ and $y$, knowing that they are realizations of the same multivariate random variable?" Clearly the dissimilarity of any realization $x$ with itself should be equal to 0; moreover, the dissimilarity should be a symmetric function of the realizations and should reflect the existence of a random process in the background. This last aspect is taken into consideration by introducing the covariance matrix $C$ of the multivariate random variable. Collecting the above ideas we arrive quite naturally at $$D(x,y)=\sqrt{(x-y)^{T}\,C^{-1}(x-y)} $$ If the components $X_i$ of the multivariate random variable $X=(X_1,\dots,X_n)$ are uncorrelated, with, for example, $C_{ij}=\delta_{ij}$ (we "normalized" the $X_i$'s in order to have $Var(X_i)=1$), then the Mahalanobis distance $D(x,y)$ is the Euclidean distance between $x$ and $y$. In the presence of non-trivial correlations, the (estimated) covariance matrix $C$ "deforms" the Euclidean distance.
Bottom to top explanation of the Mahalanobis distance?
As a starting point, I would see the Mahalanobis distance as a suitable deformation of the usual Euclidean distance $d(x,y)=\sqrt{\langle x,y \rangle}$ between vectors $x$ and $y$ in $\mathbb R^{n}$.
Bottom to top explanation of the Mahalanobis distance? As a starting point, I would see the Mahalanobis distance as a suitable deformation of the usual Euclidean distance $d(x,y)=\sqrt{\langle x,y \rangle}$ between vectors $x$ and $y$ in $\mathbb R^{n}$. The extra piece of information here is that $x$ and $y$ are actually random vectors, i.e. 2 different realizations of a vector $X$ of random variables, lying in the background of our discussion. The question that the Mahalanobis tries to address is the following: "how can I measure the "dissimilarity" between $x$ and $y$, knowing that they are realization of the same multivariate random variable?" Clearly the dissimilarity of any realization $x$ with itself should be equal to 0; moreover, the dissimilarity should be a symmetric function of the realizations and should reflect the existence of a random process in the background. This last aspect is taken into consideration by introducing the covariance matrix $C$ of the multivariate random variable. Collecting the above ideas we arrive quite naturally at $$D(x,y)=\sqrt{(x-y)\,C^{-1}(x-y)} $$ If the components $X_i$ of the multivariate random variable $X=(X_1,\dots,X_n)$ are uncorrelated, with, for example $C_{ij}=\delta_{ij}$ (we "normalized" the $X_i$'s in order to have $Var(X_i)=1$), then the Mahalanobis distance $D(x,y)$ is the Euclidean distance between $x$ and $y$. In presence non trivial correlations, the (estimated) correlation matrix $C(x,y)$ "deforms" the Euclidean distance.
Bottom to top explanation of the Mahalanobis distance? As a starting point, I would see the Mahalanobis distance as a suitable deformation of the usual Euclidean distance $d(x,y)=\sqrt{\langle x,y \rangle}$ between vectors $x$ and $y$ in $\mathbb R^{n}$.
841
Bottom to top explanation of the Mahalanobis distance?
Let's consider the two-variable case. Seeing this picture of a bivariate normal (thanks @whuber), you cannot simply claim that AB is larger than AC. There is a positive covariance; the two variables are related to each other. You can apply simple Euclidean measurements (straight lines like AB and AC) only if the variables are independent and have variances equal to 1. Essentially, the Mahalanobis distance measure does the following: it transforms the variables into uncorrelated variables with variances equal to 1, and then calculates simple Euclidean distance.
Bottom to top explanation of the Mahalanobis distance?
Let's consider the two variables case. Seeing this picture of bivariate normal (thanks @whuber), you cannot simply claim that AB is larger than AC. There is a positive covariance; the two variables a
Bottom to top explanation of the Mahalanobis distance? Let's consider the two variables case. Seeing this picture of bivariate normal (thanks @whuber), you cannot simply claim that AB is larger than AC. There is a positive covariance; the two variables are related to each other. You can apply simple Euclidean measurements (straight lines like AB and AC) only if the variables are independent have variances equal to 1. Essentially, Mahalanobis distance measure does the following: it transforms the variables into uncorrelated variables with variances equal to 1, and then calculates simple Euclidean distance.
Bottom to top explanation of the Mahalanobis distance? Let's consider the two variables case. Seeing this picture of bivariate normal (thanks @whuber), you cannot simply claim that AB is larger than AC. There is a positive covariance; the two variables a
842
Bottom to top explanation of the Mahalanobis distance?
I'll try to explain it as simply as possible: the Mahalanobis distance measures the distance of a point x from a data distribution. The data distribution is characterized by a mean and the covariance matrix, and is thus hypothesized to be a multivariate Gaussian. It is used in pattern recognition as a similarity measure between the pattern (the data distribution of the training examples of a class) and a test example. The covariance matrix gives the shape of how the data are distributed in the feature space. The figure indicates three different classes, and the red line indicates the same Mahalanobis distance for each class. All points lying on the red line have the same distance from the class mean, because the covariance matrix is used. The key feature is the use of covariance as a normalization factor.
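A hedged R sketch of this pattern-recognition use, on simulated two-class data (all means, covariances and the test point are invented): the test example is assigned to whichever class has the smaller Mahalanobis distance to its mean.
set.seed(5)
library(MASS)
A <- mvrnorm(200, mu = c(0, 0), Sigma = matrix(c(2, 1, 1, 2), 2))
B <- mvrnorm(200, mu = c(4, 4), Sigma = matrix(c(1, -0.5, -0.5, 1), 2))
x_new <- c(2.5, 2.0)                                       # a test example
dA <- mahalanobis(x_new, center = colMeans(A), cov = cov(A))
dB <- mahalanobis(x_new, center = colMeans(B), cov = cov(B))
c(class_A = dA, class_B = dB)     # assign x_new to the class with the smaller (squared) distance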
Bottom to top explanation of the Mahalanobis distance?
I'll try to explain you as simply as possible: Mahalanobis distance measures the distance of a point x from a data distribution. The data distribution is characterized by a mean and the covariance mat
Bottom to top explanation of the Mahalanobis distance? I'll try to explain you as simply as possible: Mahalanobis distance measures the distance of a point x from a data distribution. The data distribution is characterized by a mean and the covariance matrix, thus is hypothesized as a multivariate gaussian. It is used in pattern recognition as similarity measure between the pattern (data distribution of training example of a class) and the test example. The covariance matrix gives the shape of how data is distributed in the feature space. The figure indicates three different classes and the red line indicates the same Mahalanobis distance for each class. All points lying on the red line have the same distance from the class mean, because it is used the covariance matrix. The key feature is the use of covariance as a normalization factor.
Bottom to top explanation of the Mahalanobis distance? I'll try to explain you as simply as possible: Mahalanobis distance measures the distance of a point x from a data distribution. The data distribution is characterized by a mean and the covariance mat
843
Bottom to top explanation of the Mahalanobis distance?
I might be a bit late for answering this question. The paper "The Mahalanobis distance" by De Maesschalck et al. is a good start for understanding the Mahalanobis distance. They've provided a complete example with numerical values. What I like about it is that the geometric representation of the problem is presented.
Bottom to top explanation of the Mahalanobis distance?
I might be a bit late for answering this question. This paper in The Mahalanobis distance by MaesschalckD and Et. al. is a good start for understanding the Mahalanobis distance. They've provided a com
Bottom to top explanation of the Mahalanobis distance? I might be a bit late for answering this question. This paper in The Mahalanobis distance by MaesschalckD and Et. al. is a good start for understanding the Mahalanobis distance. They've provided a complete example with numerical values. What I like about it is the geometric representation of the problem is presented.
Bottom to top explanation of the Mahalanobis distance? I might be a bit late for answering this question. This paper in The Mahalanobis distance by MaesschalckD and Et. al. is a good start for understanding the Mahalanobis distance. They've provided a com
844
Bottom to top explanation of the Mahalanobis distance?
Just to add to the excellent explanations above, the Mahalanobis distance arises naturally in (multivariate) linear regression. This is a simple consequence of some of the connections between the Mahalanobis distance and the Gaussian distribution discussed in the other answers, but I think it's worth spelling out anyway. Suppose we have some data $(x_1, y_1), \ldots, (x_N, y_N)$, with $x_i \in \mathbb{R}^n$ and $y_i \in \mathbb{R}^m$. Let's assume that there exists a parameter vector $\beta_0 \in \mathbb{R}^m$ and a parameter matrix $\beta_1 \in \mathbb{R}^{m \times n}$ such that $y_i = \beta_0 + \beta_1 x_i + \epsilon_i$, where $\epsilon_1, \ldots, \epsilon_N$ are iid $m$-dimensional Gaussian random vectors with mean $0$ and covariance $C$ (and they are independent of the $x_i$). Then $y_i$ given $x_i$ is Gaussian with mean $\beta_0 + \beta_1 x_i$ and covariance $C$. It follows that the negative log-likelihood of $y_i$ given $x_i$ (as a function of $\beta = (\beta_0, \beta_1)$) is given by \begin{equation} -\log p(y_i \mid x_i; \beta) = \frac{1}{2} \log \left((2\pi)^m \det C\right) + \frac{1}{2} (y_i - (\beta_0 + \beta_1 x_i))^\top C^{-1} (y_i - (\beta_0 + \beta_1 x_i)). \end{equation} We are taking the covariance $C$ to be constant, so \begin{equation} \operatorname{argmin}_\beta [-\log p(y_i \mid x_i; \beta)] = \operatorname{argmin}_\beta D_C(\beta_0 + \beta_1 x_i, y_i), \end{equation} where \begin{equation} D_C(\hat y, y) = \sqrt{(y - \hat y)^\top C^{-1} (y - \hat y)} \end{equation} is the Mahalanobis distance between $\hat y, y \in \mathbb{R}^m$. By independence, the log-likelihood $\log p({\bf y} \mid {\bf x}; \beta)$ of ${\bf y} = (y_1, \ldots, y_N)$ given ${\bf x} = (x_1, \ldots, x_N)$ is given by the sum \begin{equation} \log p({\bf y} \mid {\bf x}; \beta) = \sum_{i=1}^N \log p(y_i \mid x_i; \beta) \end{equation} Therefore, \begin{equation} \operatorname{argmin}_\beta [-\log p({\bf y} \mid {\bf x}; \beta)] = \operatorname{argmin}_\beta \frac{1}{N} \sum_{i=1}^N D_C(\beta_0 + \beta_1 x_i, y_i)^2, \end{equation} where the factor $1/N$ does not affect the argmin. In summary, the coefficients $\beta_0, \beta_1$ that minimize the negative log-likelihood (i.e. maximize the likelihood) of the observed data also minimize the empirical risk of the data with loss function given by the squared Mahalanobis distance.
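A hedged R sketch of the final claim, using simulated data with a scalar predictor ($n=1$) and a bivariate response ($m=2$); the covariance matrix and coefficients are invented. Minimizing the empirical risk under the squared Mahalanobis loss numerically recovers the regression coefficients, as the likelihood argument predicts.
set.seed(6)
library(MASS)
N <- 300
x <- rnorm(N)                                    # scalar predictor (n = 1)
C <- matrix(c(1, 0.6, 0.6, 2), 2)                # known error covariance (m = 2)
E <- mvrnorm(N, mu = c(0, 0), Sigma = C)
b0 <- c(1, -2); b1 <- c(0.5, 3)
Y  <- t(b0 + outer(b1, x)) + E                   # N x 2 matrix of responses
Cinv <- solve(C)
risk <- function(par) {                          # par = c(beta_0, beta_1), stacked
  R <- Y - t(par[1:2] + outer(par[3:4], x))      # residual matrix
  sum(rowSums((R %*% Cinv) * R))                 # sum of squared Mahalanobis distances
}
optim(rep(0, 4), risk, method = "BFGS")$par      # estimates close to c(1, -2, 0.5, 3)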
Bottom to top explanation of the Mahalanobis distance?
Just to add to the excellent explanations above, the Mahalanobis distance arises naturally in (multivariate) linear regression. This is a simple consequence of some of the connections between the Maha
Bottom to top explanation of the Mahalanobis distance? Just to add to the excellent explanations above, the Mahalanobis distance arises naturally in (multivariate) linear regression. This is a simple consequence of some of the connections between the Mahalanobis distance and the Gaussian distribution discussed in the other answers, but I think it's worth spelling out anyway. Suppose we have some data $(x_1, y_1), \ldots, (x_N, y_N)$, with $x_i \in \mathbb{R}^n$ and $y_i \in \mathbb{R}^m$. Let's assume that there exists a parameter vector $\beta_0 \in \mathbb{R}^m$ and a parameter matrix $\beta_1 \in \mathbb{R}^{m \times n}$ such that $y_i = \beta_0 + \beta_1 x_i + \epsilon_i$, where $\epsilon_1, \ldots, \epsilon_N$ are iid $m$-dimensional Gaussian random vectors with mean $0$ and covariance $C$ (and they are independent of the $x_i$). Then $y_i$ given $x_i$ is Gaussian with mean $\beta_0 + \beta_1 x_i$ and covariance $C$. It follows that the negative log-likelihood of $y_i$ given $x_i$ (as a function of $\beta = (\beta_0, \beta_1)$) is given by \begin{equation} -\log p(y_i \mid x_i; \beta) = \frac{m}{2} \log (2\pi\det C) + \frac{1}{2} (y_i - (\beta_0 + \beta_1 x_i))^\top C^{-1} (y_i - (\beta_0 + \beta x_i)). \end{equation} We are taking the covariance $C$ to be constant, so \begin{equation} \operatorname{argmin}_\beta [-\log p(y_i \mid x_i; \beta)] = \operatorname{argmin}_\beta D_C(\beta_0 + \beta_1 x_i, y_i), \end{equation} where \begin{equation} D_C(\hat y, y) = \sqrt{(y - \hat y)^\top C^{-1} (y - \hat y)} \end{equation} is the Mahalanobis distance between $\hat y, y \in \mathbb{R}^m$. By independence, the log-likelihood $\log p({\bf y} \mid {\bf x}; \beta)$ of ${\bf y} = (y_1, \ldots, y_N)$ given ${\bf x} = (x_1, \ldots, x_N)$ is given by the sum \begin{equation} \log p({\bf y} \mid {\bf x}; \beta) = \sum_{i=1}^N \log p(y_i \mid x_i; \beta) \end{equation} Therefore, \begin{equation} \operatorname{argmin}_\beta [-\log p({\bf y} \mid {\bf x}; \beta)] = \operatorname{argmin}_\beta \frac{1}{N} \sum_{i=1}^N D_C(\beta_0 + \beta_1 x_i, y_i), \end{equation} where the factor $1/N$ does not affect the argmin. In summary, the coefficients $\beta_0, \beta_1$ that minimize the negative log-likelihood (i.e. maximize the likelihood) of the observed data also minimize the empirical risk of the data with loss function given by the Mahalanobis distance.
Bottom to top explanation of the Mahalanobis distance? Just to add to the excellent explanations above, the Mahalanobis distance arises naturally in (multivariate) linear regression. This is a simple consequence of some of the connections between the Maha
845
Bottom to top explanation of the Mahalanobis distance?
The Mahalanobis distance is a Euclidean distance (a natural distance) which takes the covariance of the data into account. It gives less weight to directions with large variance (noisy components), and so it is very useful for checking the similarity between two datasets. As you can see in your example, when variables are correlated the distribution is stretched in one direction. You may want to remove this effect. If you take the correlation into account in your distance, you can remove the stretching effect.
846
Bottom to top explanation of the Mahalanobis distance?
Standardise the distance to the mean of a normal distribution
In my understanding, the Z-score and the Mahalanobis distance are standardising methods that measure closeness to the mean of a distribution, where we need to consider the direction and variance of the dispersion.

Question
Which one, A or B, is closer to the mean of the distribution, or conversely, which is the more remote outlier from the distribution?

Direction and variance of the dispersion
To answer the question, I need to consider the direction and variance of the dispersion in the distribution. Although the visible distances to A and B look similar, B belongs to the distribution and A is an outlier. The distance to the mean should be shorter (closer to the mean) along the direction to B, which has the larger dispersion (higher variance).

(Direction, Variance) to (Eigenvector, Eigenvalue)
The eigenvectors $u_i$ give the directions of the dispersion, and the eigenvalues $\lambda_i$ give the variances of the dispersion. A higher eigenvalue means a larger variance. Then, if I scale the space by $\frac {1}{\sqrt{\lambda_A}}$ in the A direction and $\frac {1}{\sqrt{\lambda_B}}$ in the B direction, I can decide which is closer to the mean by comparing $$\frac {\Vert A-\mu \Vert}{\sqrt{\lambda_A}} \text{ and } \frac {\Vert B-\mu \Vert}{\sqrt{\lambda_B}}.$$ The distance in the direction with the higher variance gets shorter.

Standardise the distance to the mean
In a univariate normal distribution, where there is only one direction of dispersion and one variance $\sigma^2$, I can standardise the distance via the Z-score, as in the top half of the snapshot. In a multivariate normal distribution, I can standardise the distance as in the bottom of the snapshot via two steps: project $(X - \mu)$ into the U space whose orthonormal axes are the unit eigenvectors $u_i$ (this changes coordinates in X space into coordinates in U space), and then scale along the $u_i$ directions by $\frac {1}{\sqrt{\lambda_i}}$, where $\lambda_i$ is the eigenvalue for the eigenvector $u_i$, with the eigenvalues sorted in descending order so that $\lambda_1 \ge \lambda_2$. Division by the variance $\sigma^2$ in the univariate case corresponds to the inverse of the covariance matrix $\Sigma^{-1}$ in the multivariate case; scaling by $\frac {1}{\sigma}$ corresponds to scaling by $\frac {1}{\sqrt{\lambda_i}}$ along each eigenvector.

Inverse Cholesky transformation
The steps of projecting into U space and scaling are, in my understanding, the inverse Cholesky transformation as explained in "Use the Cholesky transformation to correlate and uncorrelate variables".

Eigen decomposition
The covariance matrix $\Sigma$ of the distribution of X can be decomposed as $\Sigma = U \Lambda U^T$, where $U$ is the matrix whose columns are the eigenvectors $u_i$ and $\Lambda$ is the diagonal matrix of eigenvalues (sorted in descending order). The first step of projecting into U space applies $U^T$. This is the de-correlation operation, i.e. acquiring the principal components as in PCA (when eigen decomposition is used instead of the SVD). After de-correlation, the multivariate distribution becomes a product of independent univariate distributions. See Pattern Recognition and Machine Learning (Christopher Bishop), section 2.3, The Gaussian Distribution.

Related resources
What is Mahalanobis distance?
Mahalanobis Distance 5 Multivariate Normal Distribution (slides, not the transcript)
Is Mahalanobis distance equivalent to the Euclidean one on the PCA-rotated data?
STAT 505 Applied Multivariate Statistical Analysis - 4.6 - Geometry of the Multivariate Normal Distribution (PP 6.7)
Geometric intuition for the multivariate Gaussian (part 2)
Deriving the formula for multivariate Gaussian distribution
Pattern Recognition and Machine Learning (Christopher Bishop) - 2.3 The Gaussian Distribution
Why does univariate Mahalanobis distance not match z-score?
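To tie the projection and scaling steps above together, here is a small R sketch (my own, with made-up data): after rotating onto the eigenvectors and scaling each axis by $1/\sqrt{\lambda_i}$ ("whitening"), the ordinary squared Euclidean distance equals the squared Mahalanobis distance computed directly from the covariance matrix.

set.seed(1)
X  <- MASS::mvrnorm(500, c(0, 0), matrix(c(2, 1, 1, 1), 2, 2))
S  <- cov(X); mu <- colMeans(X)
ev <- eigen(S)                      # S = U %*% diag(lambda) %*% t(U)
W  <- diag(1 / sqrt(ev$values)) %*% t(ev$vectors)   # whitening transform
x0 <- c(3, 1)
z  <- W %*% (x0 - mu)               # project onto U, then scale each axis
sum(z^2)                            # squared Euclidean distance in whitened space
mahalanobis(x0, mu, S)              # same number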
847
Are large data sets inappropriate for hypothesis testing?
It is not true. If the null hypothesis is true then it will not be rejected more frequently at large sample sizes than at small ones. There is an erroneous rejection rate that's usually set to 0.05 (alpha), but it is independent of sample size. Therefore, taken literally, the statement is false. Nevertheless, it's possible that in some situations (even whole fields) all nulls are false and therefore all will be rejected if N is high enough. But is this a bad thing? What is true is that trivially small effects can be found to be "significant" with very large sample sizes. That does not suggest that you shouldn't have such large sample sizes. What it means is that the way you interpret your finding depends upon the effect size and the sensitivity of the test. If you have a very small effect size and a highly sensitive test, you have to recognize that the statistically significant finding may not be meaningful or useful. Given that some people don't believe that a test of the null hypothesis, when the null is true, always has an error rate equal to the cutoff point selected for any sample size, here's a simple simulation in R proving the point. Make N as large as you like and the rate of Type I errors will remain constant.

# number of subjects in each condition
n <- 100

# number of replications of the study in order to check the Type I error rate
nsamp <- 10000

ps <- replicate(nsamp, {
    # population mean = 0, sd = 1 for both samples, therefore no real effect
    y1 <- rnorm(n, 0, 1)
    y2 <- rnorm(n, 0, 1)
    tt <- t.test(y1, y2, var.equal = TRUE)
    tt$p.value
})

sum(ps < .05) / nsamp   # ~ .05 no matter how big n is

Note particularly that the Type I error rate is not an increasing quantity that always finds effects when n is very large.
848
Are large data sets inappropriate for hypothesis testing?
I agree with the answers that have appeared, but would like to add that perhaps the question could be redirected. Whether to test a hypothesis or not is a research question that ought, at least in general, to be independent of how much data one has. If you really need to test a hypothesis, do so, and don't be afraid of your ability to detect small effects. But first ask whether that's part of your research objectives. Now for some quibbles: Some null hypotheses are absolutely true by construction. When you're testing a pseudorandom number generator for equidistribution, for instance, and that PRG is truly equidistributed (which would be a mathematical theorem), then the null holds. Probably most of you can think of more interesting real-world examples arising from randomization in experiments where the treatment really does have no effect. (I would hold out the entire literature on ESP as an example. ;-) In a situation where a "simple" null is tested against a "compound" alternative, as in classic t-tests or z-tests, it typically takes a sample size proportional to $1/\epsilon^2$ to detect an effect size of $\epsilon$. There's a practical upper bound on the sample size in any study, implying there's a practical lower bound on a detectable effect size. So, as a theoretical matter, van der Laan and Rose are correct, but we should take care in applying their conclusion.
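A quick R sketch of the $1/\epsilon^2$ scaling (illustrative numbers of my own, not from the original answer): power.t.test() solves for the per-group sample size needed to detect a given standardized effect at fixed power.

sapply(c(0.5, 0.25, 0.125), function(eps)
    ceiling(power.t.test(delta = eps, sd = 1, sig.level = 0.05,
                         power = 0.8)$n))
# the required per-group n roughly quadruples each time the effect size is halved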
849
Are large data sets inappropriate for hypothesis testing?
Hypothesis testing that focuses on p values, declaring statistical significance when the p value falls below alpha = 0.05, has a major weakness: with a large enough sample size any experiment can eventually reject the null hypothesis and detect trivially small differences that turn out to be statistically significant. This is the reason why drug companies structure clinical trials to obtain FDA approval with very large samples. The large sample reduces the standard error to close to zero. This in turn artificially boosts the t statistic and commensurately lowers the p value to close to 0%. I gather that, within scientific communities that are not driven by economic incentives and related conflicts of interest, hypothesis testing is moving away from pure p value measurements towards effect size measurements. This is because the unit of statistical distance or differentiation in effect size analysis is the standard deviation instead of the standard error, and the standard deviation is completely independent of the sample size, whereas the standard error depends on it directly. So, anyone who is skeptical of hypothesis testing that reaches statistically significant results based on large samples and p-value-related methodologies is right to be skeptical. They should rerun the analysis using the same data, but using effect size statistics instead, and then observe whether the effect size is deemed material or not. By doing so, you may observe that a bunch of differences that are statistically significant are associated with effect sizes that are immaterial. That's what clinical trial researchers sometimes mean when a result is statistically significant but not "clinically significant": one treatment may be better than placebo, but the difference is so marginal that it would make no difference to the patient within a clinical context.
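As a rough R illustration of this point (made-up data, my own example): as n grows, the p value of a t-test collapses towards zero while a standardized effect size such as Cohen's d stays essentially constant.

set.seed(7)
for (n in c(100, 10000, 1000000)) {
    x <- rnorm(n, 0, 1)
    y <- rnorm(n, 0.02, 1)                                   # tiny true difference of 0.02 SD
    d <- (mean(y) - mean(x)) / sqrt((var(x) + var(y)) / 2)   # Cohen's d
    cat(sprintf("n = %7d   p = %.3g   d = %.3f\n",
                n, t.test(x, y)$p.value, d))
}
# the effect size hovers around 0.02 regardless of n, while the p value
# eventually becomes "significant" purely because of the sample size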
850
Are large data sets inappropriate for hypothesis testing?
A (frequentist) hypothesis test addresses, precisely, the question of how probable the observed data, or something more extreme, would be if the null hypothesis were true. This interpretation is indifferent to sample size: it is valid whether the sample is of size 5 or 1,000,000. An important caveat is that the test is only relevant to sampling error. Any errors of measurement, sampling problems, coverage issues, data entry errors, etc. are outside the scope of sampling error. As sample size increases, non-sampling errors become more influential, as small departures can produce significant departures from the random sampling model. As a result, tests of significance become less useful. This is in no way an indictment of significance testing. However, we need to be careful about our attributions. A result may be statistically significant, but we need to be cautious about how we make attributions when the sample size is large. Is the difference due to our hypothesized generating process vis-à-vis sampling error, or is it the result of any of a number of possible non-sampling errors that could influence the test statistic (and which the statistic does not account for)? Another consideration with large samples is the practical significance of a result. A significant test might suggest (even if we can rule out non-sampling error) a difference that is trivial in a practical sense. Even if that result is unlikely given the sampling model, is it significant in the context of the problem? Given a large enough sample, a difference of a few dollars might be enough to produce a result that is statistically significant when comparing income between two groups. Is this important in any meaningful sense? Statistical significance is no replacement for good judgment and subject matter knowledge. As an aside, the null is neither true nor false. It is a model, an assumption. We assume the null is true and assess our sample in terms of that assumption. If our sample would be unlikely given this assumption, we place more trust in our alternative. To question whether or not a null is ever true in practice is a misunderstanding of the logic of significance testing.
851
Are large data sets inappropriate for hypothesis testing?
One simple point not made directly in another answer is that it's simply not true that "all null hypotheses are false." The simple hypothesis that a physical coin has heads probability exactly equal to 0.5, ok, that is false. But the compound hypothesis that a physical coin has heads probability greater than 0.499 and less than 0.501 may be true. If so, no hypothesis test -- no matter how many coin flips go into it -- is going to be able to reject this hypothesis with a probability greater than $\alpha$ (the test's bound on false positives). The medical industry tests "non-inferiority" hypotheses all the time for this reason -- e.g. a new cancer drug has to show that its patients' probability of progression-free survival isn't more than 3 percentage points lower than an existing drug's, at some confidence level (the $\alpha$, usually 0.05).
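Here is a minimal R sketch of that point (my own illustration, with hypothetical numbers): treat the compound null "0.499 < p < 0.501" as rejected only when a 95% confidence interval for p lies entirely outside that band. When the true p is 0.5, the null sits inside the band and is essentially never rejected, no matter how large n gets.

set.seed(1)
n <- 1e7                                   # ten million flips per experiment
rejected <- replicate(200, {
    x  <- rbinom(1, n, 0.5)                # truth: p = 0.5, inside the band
    ci <- binom.test(x, n)$conf.int        # 95% CI for p
    ci[2] < 0.499 || ci[1] > 0.501         # reject only if the CI clears the band
})
mean(rejected)                             # essentially 0, regardless of n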
852
Are large data sets inappropriate for hypothesis testing?
In a certain sense, [all] many null hypotheses are [always] false (the group of people living in houses with odd numbers never earns exactly the same on average as the group of people living in houses with even numbers). In the frequentist framework, the question that is asked is whether the difference in income between the two groups is larger than $T_{\alpha}n^{-0.5}$ (where $T_{\alpha}$ is the $\alpha$ quantile of the distribution of the test statistic under the null). Obviously, as $n$ grows without bound, this band becomes increasingly easy to break through. This is not a defect of statistical tests. It is simply a consequence of the fact that, without further information (a prior), a large number of small inconsistencies with the null have to be taken as evidence against the null, no matter how trivial these inconsistencies turn out to be. In large studies, it then becomes interesting to re-frame the issue as a Bayesian test, i.e. to ask oneself (for instance) what $\hat{P}(|\mu_1-\mu_2|>\eta \mid X)$ is: the posterior probability that the difference in means exceeds some threshold $\eta$ that is deemed practically relevant.
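A rough R sketch of that re-framing (my own illustration, using a flat-prior normal approximation to the posterior of the mean difference, which is an assumption and not part of the original answer): the usual test may flag a tiny difference as significant, while the posterior probability that the difference exceeds a practically relevant threshold $\eta$ is essentially zero.

set.seed(4)
n   <- 1e6
x1  <- rnorm(n, 0)
x2  <- rnorm(n, 0.005)                     # tiny true difference of 0.005
d   <- mean(x1) - mean(x2)
se  <- sqrt(var(x1) / n + var(x2) / n)
eta <- 0.1                                 # smallest difference anyone would care about
post.prob <- pnorm(-eta, d, se) + (1 - pnorm(eta, d, se))
t.test(x1, x2)$p.value                     # typically "significant"
post.prob                                  # ~ 0: almost surely no relevant difference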
853
Are large data sets inappropriate for hypothesis testing?
Hypothesis testing for large data sets should take the desired level of difference into account, rather than just whether there is a difference or not. You're not interested in the H0 that the estimate is exactly 0. A general approach would be to test whether the difference between the null hypothesis and the observed value is larger than a given cut-off value.

A simple example with the t-test: you can make the following assumptions for big sample sizes, given equal sample sizes and standard deviations in both groups and $\bar{X}_1 > \bar{X}_2$:

$$T=\frac{\bar{X}_1-\bar{X}_2-\delta}{\sqrt{\frac{S^2}{n}}}+\frac{\delta}{\sqrt{\frac{S^2}{n}}} \approx N\left(\frac{\delta}{\sqrt{\frac{S^2}{n}}},1\right)$$

hence

$$T=\frac{\bar{X}_1-\bar{X}_2}{\sqrt{\frac{S^2}{n}}} \approx N\left(\frac{\delta}{\sqrt{\frac{S^2}{n}}},1\right)$$

as your null hypothesis $H_0:\bar{X}_1-\bar{X}_2 = \delta$ implies:

$$\frac{\bar{X}_1-\bar{X}_2-\delta}{\sqrt{\frac{S^2}{n}}}\approx N(0,1)$$

This you can easily use to test for a significant and relevant difference. In R you can make use of the noncentrality parameter of the t distribution to generalize this result to smaller sample sizes as well. Keep in mind that this is a one-sided test; the alternative $H_A$ is $\bar{X}_1-\bar{X}_2 > \delta$.

mod.test <- function(x1, x2, dif, ...){
    avg.x1 <- mean(x1)
    avg.x2 <- mean(x2)
    sd.x1  <- sd(x1)
    sd.x2  <- sd(x2)
    sd.comb <- sqrt((sd.x1^2 + sd.x2^2) / 2)           # combined standard deviation
    n <- length(x1)
    t.val <- abs(avg.x1 - avg.x2) * sqrt(n) / sd.comb  # observed t statistic
    ncp   <- dif * sqrt(n) / sd.comb                   # noncentrality implied by 'dif'
    p.val <- pt(t.val, n - 1, ncp = ncp, lower.tail = FALSE)
    return(p.val)
}

n <- 5000
test1 <- replicate(100, t.test(rnorm(n), rnorm(n, 0.05))$p.value)
table(test1 < 0.05)
test2 <- replicate(100, t.test(rnorm(n), rnorm(n, 0.5))$p.value)
table(test2 < 0.05)
test3 <- replicate(100, mod.test(rnorm(n), rnorm(n, 0.05), dif = 0.3))
table(test3 < 0.05)
test4 <- replicate(100, mod.test(rnorm(n), rnorm(n, 0.5), dif = 0.3))
table(test4 < 0.05)

Which gives:

> table(test1 < 0.05)
FALSE  TRUE
   24    76
> table(test2 < 0.05)
TRUE
 100
> table(test3 < 0.05)
FALSE
  100
> table(test4 < 0.05)
TRUE
 100
854
Are large data sets inappropriate for hypothesis testing?
"Does it mean that hypothesis testing is worthless for large data sets?" No, it doesn't mean that. The general message is that decisions made after conducting a hypothesis test should always take into account the estimated effect size, and not only the p-value. Particularly, in experiments with very large sample sizes, this necessity to consider the effect size becomes dramatic. Of course, in general, users don't like this because the procedure becomes less "automatic". Consider this simulation example. Suppose you have a random sample of 1 million observations from a standard normal distribution,

n <- 10^6
x <- rnorm(n)

and another random sample of 1 million observations from a normal distribution with mean equal to $0.01$ and variance equal to one.

y <- rnorm(n, mean = 0.01)

Comparing the means of the two populations with a t-test at the canonical $95\%$ confidence level, we get a tiny p-value of approximately $2.5\times 10^{-14}$.

t.test(x, y)

        Welch Two Sample t-test

data:  x and y
t = -7.6218, df = 1999984, p-value = 2.503e-14
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -0.013554059 -0.008009031
sample estimates:
   mean of x    mean of y
0.0008947038 0.0116762485

It's correct to say that the t-test "detected" that the means of the two populations are different. But take a look at the very short $95\%$ confidence interval for the difference between the two population means: $[-0.013, -0.008]$. Is a difference between the two population means of this order of magnitude relevant to the particular problem we are studying or not?
855
Are large data sets inappropriate for hypothesis testing?
The short answer is "no". Research on hypothesis testing in the asymptotic regime of infinite observations and multiple hypotheses has been very, very active in the past 15-20 years, because of microarray data and financial data applications. The long answer is in the course page of Stat 329, "Large-Scale Simultaneous Inference", taught in 2010 by Brad Efron. A full chapter (#2) is devoted to large-scale hypothesis testing.
856
Are large data sets inappropriate for hypothesis testing?
I think it's a problem of most significance tests having some general, undefined class of implicit alternatives to the null, which we never know. Often these classes may contain some sort of "sure thing" hypothesis, which the data fit perfectly (i.e. a hypothesis of the form $H_{ST}:d_{1}=1.23,d_{2}=1.11,\dots$ where $d_{i}$ is the $i$th data point). The value of the log likelihood is an example of a significance test statistic which has this property. But one is usually not interested in these sure thing hypotheses. If you think about what you actually want to do with the hypothesis test, you will soon recognise that you should only reject the null hypothesis if you have something better to replace it with. Even if your null does not explain the data, there is no use in throwing it out unless you have a replacement. Now, would you always replace the null with the "sure thing" hypothesis? Probably not, because you can't use these "sure thing" hypotheses to generalise beyond your data set. It's not much more than printing out your data. So, what you should do is specify the hypotheses that you would actually be interested in acting on if they were true. Then do the appropriate test for comparing those alternatives to each other - and not to some irrelevant class of hypotheses which you know to be false or unusable. Take the simple case of testing the normal mean. Now the true difference may be small, but adopting a position similar to that in @keith's answer, we simply test the mean at various discrete values that are of interest to us. So, for example, we could have $H_{0}:\mu=0$ vs $H_{1}:\mu\in\{\pm 1,\pm 2,\pm 3,\pm 4,\pm 5,\pm 6\}$. The problem then transfers to deciding at what level we want to do these tests. This is related to the idea of effect size: what level of granularity would have an influence on your decision making? This may call for steps of size $0.5$ or $100$ or something else, depending on the meaning of the test and of the parameters. For instance, if you were comparing the average wealth of two groups, would anyone care if there was a difference of two dollars, even if it was 10,000 standard errors away from zero? I know I wouldn't. The conclusion is basically that you need to specify your hypothesis space - those hypotheses that you are actually interested in. It seems that with big data this becomes a very important thing to do, simply because your data has so much resolving power. It also seems important to compare like with like - point hypotheses with point hypotheses, compound with compound - to get well-behaved results.
857
Are large data sets inappropriate for hypothesis testing?
No. It is true that all useful point hypothesis tests are consistent and thus will show a significant result if only the sample size is large enough and some irrelevant effect exists. To overcome this drawback of statistical hypothesis testing (already mentioned in the answer by Gaetan Lion above), there are relevance tests. These are similar to equivalence tests but even less common. For a relevance test, the size of a minimum relevant effect is prespecified. A relevance test can be based on a confidence interval for the effect: if the confidence interval and the relevance region are disjoint, you may reject the null. However, van der Laan and Rose assume in their statement that even true null hypotheses are tested in studies. If a null hypothesis is true, the probability of rejecting it is not larger than alpha, regardless of sample size. The only way I can see their claim holding for a true null is if the sampling is misspecified, i.e. the sample distribution is systematically different from the population distribution.
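A minimal R sketch of a confidence-interval-based relevance test (my own illustration with made-up data and a made-up relevance threshold): the ordinary t-test flags a tiny difference as significant, but the relevance test does not reject, because the confidence interval is not disjoint from the prespecified relevance region $|\text{difference}| \le 0.1$.

set.seed(3)
n  <- 1e6
x  <- rnorm(n)
y  <- rnorm(n, 0.02)                        # real but irrelevant difference of 0.02
tt <- t.test(x, y)
tt$p.value                                  # "significant" by the ordinary test
ci <- tt$conf.int                           # CI for mean(x) - mean(y)
(ci[1] > 0.1) || (ci[2] < -0.1)             # relevance test: FALSE, do not reject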
858
Are large data sets inappropriate for hypothesis testing?
The article you mention does have a valid point, as far as standard frequentist tests are concerned. That is why testing for a given effect size is very important. To illustrate, here is an ANOVA between 3 groups, where group B is slightly different from groups A and C. Try this in R:

treat_diff=0.001  # size of treatment difference
ns=c(10, 100, 1000, 10000, 100000, 1000000)  # values for sample size per group considered
reps=10  # number of test repetitions for each sample size considered
p_mat=data.frame(n=factor(), p=double())  # create empty dataframe for outputs

for (n in ns){  # for each sample size
  for (i in c(1:reps)){  # repeat anova test 'reps' times
    treatA=data.frame(treatment="A", val=rnorm(n))
    treatB=data.frame(treatment="B", val=rnorm(n)+treat_diff)  # the group whose mean differs slightly from the other groups
    treatC=data.frame(treatment="C", val=rnorm(n))
    all_treatment=rbind(treatA, treatB, treatC)
    treatment_aov=aov(val~treatment, data=all_treatment)
    aov_summary=summary(treatment_aov)
    p=aov_summary[[1]][["Pr(>F)"]][1]
    temp_df=data.frame(n=n, p=p)
    p_mat=rbind(p_mat, temp_df)
  }
}

library(ggplot2)
p <- ggplot(p_mat, aes(factor(n), p))
p + geom_boxplot()

As expected, the resulting boxplots of p-values show that the statistical significance of the test increases (the p-values shrink towards zero) as the number of samples per group grows, even though the true group difference stays tiny.
859
Are large data sets inappropriate for hypothesis testing?
I think what they mean is that one often makes an assumption about the probability density of the null hypothesis which has a 'simple' form but does not correspond to the true probability density. Now with small data sets, you might not have enough sensitivity to see this effect but with a large enough data set you will reject the null hypothesis and conclude that there is a new effect instead of concluding that your assumption about the null hypothesis is wrong.
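As a rough R illustration of that point (my own example, not from the original answer): draw data from a distribution that is close to, but not exactly, the assumed standard normal null density. At a small sample size the misspecification typically goes unnoticed; at a large one, the test firmly rejects, even though nothing "new" is going on beyond the wrong null density.

set.seed(5)
for (n in c(100, 100000)) {
    x <- rt(n, df = 5) / sqrt(5 / 3)      # standardized t(5): close to N(0,1), not equal
    cat(sprintf("n = %6d   KS test vs N(0,1): p = %.3g\n",
                n, ks.test(x, "pnorm")$p.value))
}
# at small n the misspecification is typically invisible;
# at large n it is decisively rejected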
860
Are large data sets inappropriate for hypothesis testing?
Isn't all this a matter of type I error versus type II error (or power)? If one fixes the type I error probability ($\alpha$) at 0.05, then, obviously (except in the discrete case), it will be 0.05 whether the sample is large or not. But for a given type I error probability, e.g. 0.05, the power, i.e. the probability that you will detect the effect when it is there (so the probability of rejecting $H_0$ (= detecting the effect) when $H_1$ is true (= when the effect is there)), is larger for large sample sizes. Power increases with sample size (all other things equal). But the statement that "We know that for large enough sample sizes, every study—including ones in which the null hypothesis of no effect is true — will declare a statistically significant effect." is incorrect.
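A one-line R sketch of that point (illustrative numbers of my own): with the significance level held at 0.05, the power of a two-sample t-test against a fixed small effect of 0.1 SD climbs towards 1 as the per-group n grows.

sapply(c(50, 500, 5000, 50000), function(n)
    round(power.t.test(n = n, delta = 0.1, sd = 1, sig.level = 0.05)$power, 3))
# power rises towards 1 with n, while the type I error rate stays at 0.05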
861
Are large data sets inappropriate for hypothesis testing?
"We know that for large enough sample sizes, every study—including ones in which the null hypothesis of no effect is true—will declare a statistically significant effect." Well, in a sense all (or at least most) null hypotheses are false. The parameter under consideration would have to equal the hypothesized value down to an infinite number of decimal places, which is an absolute rarity. So it is highly likely that the test will declare a statistically significant effect as the sample size increases.
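As a hedged numerical sketch of this point (my addition, not part of the original answer; it assumes numpy and scipy): the true mean below differs from the hypothesized value of 0 by only 0.01 standard deviations, yet the one-sample t-test p-value collapses once the sample gets large enough.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
for n in (1_000, 100_000, 10_000_000):
    x = rng.normal(loc=0.01, scale=1.0, size=n)   # hypothesized mean is 0, true mean is 0.01
    t, p = stats.ttest_1samp(x, popmean=0.0)
    print(f"n = {n:>10,}: p-value = {p:.3g}")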
862
Are large data sets inappropriate for hypothesis testing?
This is a criticism coming from Bayesian inference, a different way of thinking about statistics (different from the frequentist form that everyone learns in courses outside of statistics; in statistics we learn both). The criticism is that you can "prove" anything with a large enough sample, because it will hand you a significant p-value. That's why we look at A LOT of other metrics: AIC, F statistics, RMSE, ANOVA, and so on. None of my professors gave me a real answer on how to deal with this, other than "take a subsample, so your data set will be small and this will not happen". I'm not happy with that, but it is the approach I use. :/
863
Deriving the conditional distributions of a multivariate normal distribution
You can prove it by explicitly calculating the conditional density by brute force, as in Procrastinator's link (+1) in the comments. But, there's also a theorem that says all conditional distributions of a multivariate normal distribution are normal. Therefore, all that's left is to calculate the mean vector and covariance matrix. I remember we derived this in a time series class in college by cleverly defining a third variable and using its properties to derive the result more simply than the brute force solution in the link (as long as you're comfortable with matrix algebra). I'm going from memory but it was something like this: It is worth pointing out that the proof below only assumes that $\Sigma_{22}$ is nonsingular, $\Sigma_{11}$ and $\Sigma$ may well be singular. Let ${\bf x}_{1}$ be the first partition and ${\bf x}_2$ the second. Now define ${\bf z} = {\bf x}_1 + {\bf A} {\bf x}_2 $ where ${\bf A} = -\Sigma_{12} \Sigma^{-1}_{22}$. Now we can write \begin{align*} {\rm cov}({\bf z}, {\bf x}_2) &= {\rm cov}( {\bf x}_{1}, {\bf x}_2 ) + {\rm cov}({\bf A}{\bf x}_2, {\bf x}_2) \\ &= \Sigma_{12} + {\bf A} {\rm var}({\bf x}_2) \\ &= \Sigma_{12} - \Sigma_{12} \Sigma^{-1}_{22} \Sigma_{22} \\ &= 0 \end{align*} Therefore ${\bf z}$ and ${\bf x}_2$ are uncorrelated and, since they are jointly normal, they are independent. Now, clearly $E({\bf z}) = {\boldsymbol \mu}_1 + {\bf A} {\boldsymbol \mu}_2$, therefore it follows that \begin{align*} E({\bf x}_1 | {\bf x}_2) &= E( {\bf z} - {\bf A} {\bf x}_2 | {\bf x}_2) \\ & = E({\bf z}|{\bf x}_2) - E({\bf A}{\bf x}_2|{\bf x}_2) \\ & = E({\bf z}) - {\bf A}{\bf x}_2 \\ & = {\boldsymbol \mu}_1 + {\bf A} ({\boldsymbol \mu}_2 - {\bf x}_2) \\ & = {\boldsymbol \mu}_1 + \Sigma_{12} \Sigma^{-1}_{22} ({\bf x}_2- {\boldsymbol \mu}_2) \end{align*} which proves the first part. For the covariance matrix, note that \begin{align*} {\rm var}({\bf x}_1|{\bf x}_2) &= {\rm var}({\bf z} - {\bf A} {\bf x}_2 | {\bf x}_2) \\ &= {\rm var}({\bf z}|{\bf x}_2) + {\rm var}({\bf A} {\bf x}_2 | {\bf x}_2) - {\bf A}{\rm cov}({\bf z}, -{\bf x}_2) - {\rm cov}({\bf z}, -{\bf x}_2) {\bf A}' \\ &= {\rm var}({\bf z}|{\bf x}_2) \\ &= {\rm var}({\bf z}) \end{align*} Now we're almost done: \begin{align*} {\rm var}({\bf x}_1|{\bf x}_2) = {\rm var}( {\bf z} ) &= {\rm var}( {\bf x}_1 + {\bf A} {\bf x}_2 ) \\ &= {\rm var}( {\bf x}_1 ) + {\bf A} {\rm var}( {\bf x}_2 ) {\bf A}' + {\bf A} {\rm cov}({\bf x}_1,{\bf x}_2) + {\rm cov}({\bf x}_2,{\bf x}_1) {\bf A}' \\ &= \Sigma_{11} +\Sigma_{12} \Sigma^{-1}_{22} \Sigma_{22}\Sigma^{-1}_{22}\Sigma_{21} - 2 \Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21} \\ &= \Sigma_{11} +\Sigma_{12} \Sigma^{-1}_{22}\Sigma_{21} - 2 \Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21} \\ &= \Sigma_{11} -\Sigma_{12} \Sigma^{-1}_{22}\Sigma_{21} \end{align*} which proves the second part. Note: For those not very familiar with the matrix algebra used here, this is an excellent resource. Edit: One property used here this is not in the matrix cookbook (good catch @FlyingPig) is property 6 on the wikipedia page about covariance matrices: which is that for two random vectors $\bf x, y$, $${\rm var}({\bf x}+{\bf y}) = {\rm var}({\bf x})+{\rm var}({\bf y}) + {\rm cov}({\bf x},{\bf y}) + {\rm cov}({\bf y},{\bf x})$$ For scalars, of course, ${\rm cov}(X,Y)={\rm cov}(Y,X)$ but for vectors they are different insofar as the matrices are arranged differently.
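As a numerical sanity check on these formulas (my addition, not part of the original derivation; it assumes numpy): on a large multivariate normal sample, the least-squares regression of ${\bf x}_1$ on ${\bf x}_2$ should recover the coefficient matrix $\Sigma_{12}\Sigma_{22}^{-1}$, and the residual covariance should approach the conditional covariance $\Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}$. The particular mean vector and covariance matrix below are arbitrary examples.

import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.6, 0.3],
                  [0.6, 1.5, 0.4],
                  [0.3, 0.4, 1.0]])

X = rng.multivariate_normal(mu, Sigma, size=500_000)
x1, x2 = X[:, :1], X[:, 1:]                       # partition: x1 = first coordinate, x2 = the last two

S12, S21, S22 = Sigma[:1, 1:], Sigma[1:, :1], Sigma[1:, 1:]
B = S12 @ np.linalg.inv(S22)                      # theoretical Sigma_12 Sigma_22^{-1}
cond_cov = Sigma[:1, :1] - B @ S21                # theoretical conditional covariance

x1c, x2c = x1 - x1.mean(0), x2 - x2.mean(0)
B_hat = np.linalg.lstsq(x2c, x1c, rcond=None)[0].T   # empirical regression coefficients
resid = x1c - x2c @ B_hat.T

print(B, B_hat)                                   # should agree closely
print(cond_cov.item(), np.cov(resid.T).item())    # should agree closely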
864
Deriving the conditional distributions of a multivariate normal distribution
The answer by Macro is great, but here is an even simpler way that does not require you to use any outside theorem asserting the conditional distribution. It involves writing the Mahalanobis distance in a form that separates the argument variable for the conditioning statement, and then factorising the normal density accordingly. Rewriting the Mahalanobis distance for a conditional vector: This derivation uses a matrix inversion formula that uses the Schur complement $\boldsymbol{\Sigma}_* \equiv \boldsymbol{\Sigma}_{11} - \boldsymbol{\Sigma}_{12} \boldsymbol{\Sigma}_{22}^{-1} \boldsymbol{\Sigma}_{21}$. We first use the blockwise inversion formula to write the inverse-variance matrix as: $$\begin{equation} \begin{aligned} \boldsymbol{\Sigma}^{-1} = \begin{bmatrix} \boldsymbol{\Sigma}_{11} & \boldsymbol{\Sigma}_{12} \\ \boldsymbol{\Sigma}_{21} & \boldsymbol{\Sigma}_{22} \\ \end{bmatrix}^{-1} = \begin{bmatrix} \boldsymbol{\Sigma}_{11}^* & \boldsymbol{\Sigma}_{12}^* \\ \boldsymbol{\Sigma}_{21}^* & \boldsymbol{\Sigma}_{22}^* \\ \end{bmatrix}, \end{aligned} \end{equation}$$ where: $$\begin{equation} \begin{aligned} \begin{matrix} \boldsymbol{\Sigma}_{11}^* = \boldsymbol{\Sigma}_*^{-1} \text{ } \quad \quad \quad \quad & & & & & \boldsymbol{\Sigma}_{12}^* = -\boldsymbol{\Sigma}_*^{-1} \boldsymbol{\Sigma}_{12} \boldsymbol{\Sigma}_{22}^{-1}, \quad \quad \quad \\[6pt] \boldsymbol{\Sigma}_{21}^* = - \boldsymbol{\Sigma}_{22}^{-1} \boldsymbol{\Sigma}_{21} \boldsymbol{\Sigma}_*^{-1} & & & & & \boldsymbol{\Sigma}_{22}^* = \boldsymbol{\Sigma}_{22}^{-1} + \boldsymbol{\Sigma}_{22}^{-1} \boldsymbol{\Sigma}_{21} \boldsymbol{\Sigma}_*^{-1} \boldsymbol{\Sigma}_{12} \boldsymbol{\Sigma}_{22}^{-1}. \text{ } \\[6pt] \end{matrix} \end{aligned} \end{equation}$$ Using this formula we can now write the Mahalanobis distance as: $$\begin{equation} \begin{aligned} (\boldsymbol{y} &- \boldsymbol{\mu})^\text{T} \boldsymbol{\Sigma}^{-1} (\boldsymbol{y} - \boldsymbol{\mu}) \\[6pt] &= \begin{bmatrix} \boldsymbol{y}_1 - \boldsymbol{\mu}_1 \\ \boldsymbol{y}_2 - \boldsymbol{\mu}_2 \end{bmatrix}^\text{T} \begin{bmatrix} \boldsymbol{\Sigma}_{11}^* & \boldsymbol{\Sigma}_{12}^* \\ \boldsymbol{\Sigma}_{21}^* & \boldsymbol{\Sigma}_{22}^* \\ \end{bmatrix} \begin{bmatrix} \boldsymbol{y}_1 - \boldsymbol{\mu}_1 \\ \boldsymbol{y}_2 - \boldsymbol{\mu}_2 \end{bmatrix} \\[6pt] &= \quad (\boldsymbol{y}_1 - \boldsymbol{\mu}_1)^\text{T} \boldsymbol{\Sigma}_{11}^* (\boldsymbol{y}_1 - \boldsymbol{\mu}_1) + (\boldsymbol{y}_1 - \boldsymbol{\mu}_1)^\text{T} \boldsymbol{\Sigma}_{12}^* (\boldsymbol{y}_2 - \boldsymbol{\mu}_2) \\[6pt] &\quad + (\boldsymbol{y}_2 - \boldsymbol{\mu}_2)^\text{T} \boldsymbol{\Sigma}_{21}^* (\boldsymbol{y}_1 - \boldsymbol{\mu}_1) + (\boldsymbol{y}_2 - \boldsymbol{\mu}_2)^\text{T} \boldsymbol{\Sigma}_{22}^* (\boldsymbol{y}_2 - \boldsymbol{\mu}_2) \\[6pt] &= \quad (\boldsymbol{y}_1 - \boldsymbol{\mu}_1)^\text{T} \boldsymbol{\Sigma}_*^{-1} (\boldsymbol{y}_1 - \boldsymbol{\mu}_1) - (\boldsymbol{y}_1 - \boldsymbol{\mu}_1)^\text{T} \boldsymbol{\Sigma}_*^{-1} \boldsymbol{\Sigma}_{12} \boldsymbol{\Sigma}_{22}^{-1} (\boldsymbol{y}_2 - \boldsymbol{\mu}_2) \\[6pt] &\quad - (\boldsymbol{y}_2 - \boldsymbol{\mu}_2)^\text{T} \boldsymbol{\Sigma}_{22}^{-1} \boldsymbol{\Sigma}_{21} \boldsymbol{\Sigma}_*^{-1} (\boldsymbol{y}_1 - \boldsymbol{\mu}_1) + (\boldsymbol{y}_2 - \boldsymbol{\mu}_2)^\text{T} \boldsymbol{\Sigma}_{22}^{-1} (\boldsymbol{y}_2 - \boldsymbol{\mu}_2) \\[6pt] &\quad + (\boldsymbol{y}_2 - \boldsymbol{\mu}_2)^\text{T} 
\boldsymbol{\Sigma}_{22}^{-1} \boldsymbol{\Sigma}_{21} \boldsymbol{\Sigma}_*^{-1} \boldsymbol{\Sigma}_{12} \boldsymbol{\Sigma}_{22}^{-1} (\boldsymbol{y}_2 - \boldsymbol{\mu}_2) \\[6pt] &= (\boldsymbol{y}_1 - (\boldsymbol{\mu}_1 + \boldsymbol{\Sigma}_{12} \boldsymbol{\Sigma}_{22}^{-1} (\boldsymbol{y}_2 - \boldsymbol{\mu}_2)))^\text{T} \boldsymbol{\Sigma}_*^{-1} (\boldsymbol{y}_1 - (\boldsymbol{\mu}_1 + \boldsymbol{\Sigma}_{12} \boldsymbol{\Sigma}_{22}^{-1} (\boldsymbol{y}_2 - \boldsymbol{\mu}_2))) \\[6pt] &\quad + (\boldsymbol{y}_2 - \boldsymbol{\mu}_2)^\text{T} \boldsymbol{\Sigma}_{22}^{-1} (\boldsymbol{y}_2 - \boldsymbol{\mu}_2) \\[6pt] &= (\boldsymbol{y}_1 - \boldsymbol{\mu}_*)^\text{T} \boldsymbol{\Sigma}_*^{-1} (\boldsymbol{y}_1 - \boldsymbol{\mu}_*) + (\boldsymbol{y}_2 - \boldsymbol{\mu}_2)^\text{T} \boldsymbol{\Sigma}_{22}^{-1} (\boldsymbol{y}_2 - \boldsymbol{\mu}_2) , \\[6pt] \end{aligned} \end{equation}$$ where $\boldsymbol{\mu}_* \equiv \boldsymbol{\mu}_1 + \boldsymbol{\Sigma}_{12} \boldsymbol{\Sigma}_{22}^{-1} (\boldsymbol{y}_2 - \boldsymbol{\mu}_2)$ is the conditional mean vector. Note that this result is a general result that does not assume normality of the random vectors involved in the decomposition. It gives a useful way of decomposing the Mahalanobis distance so that it consists of a sum of quadratic forms on the marginal and conditional parts. In the conditional part the conditioning vector $\boldsymbol{y}_2$ is absorbed into the mean vector and variance matrix. To clarify the form, we repeat the equation with labelling of terms: $$(\boldsymbol{y} - \boldsymbol{\mu})^\text{T} \boldsymbol{\Sigma}^{-1} (\boldsymbol{y} - \boldsymbol{\mu}) = \underbrace{(\boldsymbol{y}_1 - \boldsymbol{\mu}_*)^\text{T} \boldsymbol{\Sigma}_*^{-1} (\boldsymbol{y}_1 - \boldsymbol{\mu}_*)}_\text{Conditional Part} + \underbrace{(\boldsymbol{y}_2 - \boldsymbol{\mu}_2)^\text{T} \boldsymbol{\Sigma}_{22}^{-1} (\boldsymbol{y}_2 - \boldsymbol{\mu}_2)}_\text{Marginal Part}.$$ Deriving the conditional distribution: Now that we have the above form for the Mahalanobis distance, the rest is easy. We have: $$\begin{equation} \begin{aligned} p(\boldsymbol{y}_1 | \boldsymbol{y}_2, \boldsymbol{\mu}, \boldsymbol{\Sigma}) &\overset{\boldsymbol{y}_1}{\propto} p(\boldsymbol{y}_1 , \boldsymbol{y}_2 | \boldsymbol{\mu}, \boldsymbol{\Sigma}) \\[12pt] &= \text{N}(\boldsymbol{y} | \boldsymbol{\mu}, \boldsymbol{\Sigma}) \\[10pt] &\overset{\boldsymbol{y}_1}{\propto} \exp \Big( - \frac{1}{2} (\boldsymbol{y} - \boldsymbol{\mu})^\text{T} \boldsymbol{\Sigma}^{-1} (\boldsymbol{y} - \boldsymbol{\mu}) \Big) \\[6pt] &\overset{\boldsymbol{y}_1}{\propto} \exp \Big( - \frac{1}{2} (\boldsymbol{y}_1 - \boldsymbol{\mu}_*)^\text{T} \boldsymbol{\Sigma}_*^{-1} (\boldsymbol{y}_1 - \boldsymbol{\mu}_*) \Big) \\[6pt] &\overset{\boldsymbol{y}_1}{\propto}\text{N}(\boldsymbol{y}_1 | \boldsymbol{\mu}_*, \boldsymbol{\Sigma}_*). \\[6pt] \end{aligned} \end{equation}$$ This establishes that the conditional distribution is also multivariate normal, with the specified conditional mean vector and conditional variance matrix.
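As a quick numerical check of this decomposition (my addition, not part of the original answer; it assumes numpy), the sketch below computes $\boldsymbol{\mu}_*$ and $\boldsymbol{\Sigma}_*$ via the Schur complement and verifies that the conditional and marginal parts add up to the full Mahalanobis distance at an arbitrary point.

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
Sigma = A @ A.T + 4 * np.eye(4)      # an arbitrary positive-definite covariance matrix
mu = rng.normal(size=4)
y = rng.normal(size=4)               # an arbitrary evaluation point

k = 2                                # y1 = first k coordinates, y2 = the rest
S11, S12 = Sigma[:k, :k], Sigma[:k, k:]
S21, S22 = Sigma[k:, :k], Sigma[k:, k:]

mu_star = mu[:k] + S12 @ np.linalg.solve(S22, y[k:] - mu[k:])   # conditional mean
Sigma_star = S11 - S12 @ np.linalg.solve(S22, S21)              # Schur complement

full = (y - mu) @ np.linalg.solve(Sigma, y - mu)
cond = (y[:k] - mu_star) @ np.linalg.solve(Sigma_star, y[:k] - mu_star)
marg = (y[k:] - mu[k:]) @ np.linalg.solve(S22, y[k:] - mu[k:])
print(np.isclose(full, cond + marg))                            # True: conditional + marginal parts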
865
What is the difference between a neural network and a deep neural network, and why do the deep ones work better?
Let's start with a triviality: a deep neural network is simply a feedforward network with many hidden layers. This is more or less all there is to say about the definition. Neural networks can be recurrent or feedforward; feedforward ones do not have any loops in their graph and can be organized in layers. If there are "many" layers, then we say that the network is deep.

How many layers does a network have to have in order to qualify as deep? There is no definite answer to this (it's a bit like asking how many grains make a heap), but usually having two or more hidden layers counts as deep. In contrast, a network with only a single hidden layer is conventionally called "shallow". I suspect that there will be some inflation going on here, and in ten years people might think that anything with fewer than, say, ten layers is shallow and suitable only for kindergarten exercises. Informally, "deep" suggests that the network is tough to handle. Here is an illustration, adapted from here:

But the real question you are asking is, of course, why would having many layers be beneficial? I think that the somewhat astonishing answer is that nobody really knows. There are some common explanations that I will briefly review below, but none of them has been convincingly demonstrated to be true, and one cannot even be sure that having many layers is really beneficial. I say that this is astonishing, because deep learning is massively popular, is breaking all the records (from image recognition, to playing Go, to automatic translation, etc.) every year, is getting used by the industry, etc. etc. And we are still not quite sure why it works so well.

I base my discussion on the Deep Learning book by Goodfellow, Bengio, and Courville, which came out in 2017 and is widely considered to be the book on deep learning. (It's freely available online.) The relevant section is 6.4.1 Universal Approximation Properties and Depth. You wrote that

10 years ago in class I learned that having several layers or one layer (not counting the input and output layers) was equivalent in terms of the functions a neural network is able to represent [...]

You must be referring to the so-called universal approximation theorem, proved by Cybenko in 1989 and generalized by various people in the 1990s. It basically says that a shallow neural network (with 1 hidden layer) can approximate any function, i.e. can in principle learn anything. This is true for various nonlinear activation functions, including the rectified linear units that most neural networks are using today (the textbook references Leshno et al. 1993 for this result).

If so, then why is everybody using deep nets? Well, a naive answer is that they work better. Here is a figure from the Deep Learning book showing that it helps to have more layers in one particular task, but the same phenomenon is often observed across various tasks and domains:

We know that a shallow network could in principle perform as well as deeper ones. But in practice it usually does not. The question is --- why? Possible answers: Maybe a shallow network would need more neurons than the deep one? Maybe a shallow network is more difficult to train with our current algorithms (e.g. it has more nasty local minima, or the convergence rate is slower, or whatever)? Maybe a shallow architecture does not fit the kind of problems we are usually trying to solve (e.g. object recognition is a quintessential "deep", hierarchical process)? Something else?

The Deep Learning book argues for bullet points #1 and #3. First, it argues that the number of units in a shallow network grows exponentially with task complexity. So in order to be useful a shallow network might need to be very big; possibly much bigger than a deep network. This is based on a number of papers proving that shallow networks would in some cases need exponentially many neurons; but whether e.g. MNIST classification or Go playing are such cases is not really clear. Second, the book says this:

Choosing a deep model encodes a very general belief that the function we want to learn should involve composition of several simpler functions. This can be interpreted from a representation learning point of view as saying that we believe the learning problem consists of discovering a set of underlying factors of variation that can in turn be described in terms of other, simpler underlying factors of variation.

I think the current "consensus" is that it's a combination of bullet points #1 and #3: for real-world tasks deep architectures are often beneficial, and shallow architectures would be inefficient and require a lot more neurons for the same performance. But it's far from proven. Consider e.g. Zagoruyko and Komodakis, 2016, Wide Residual Networks. Residual networks with 150+ layers appeared in 2015 and won various image recognition contests. This was a big success and looked like a compelling argument in favour of deepness; here is one figure from a presentation by the first author on the residual network paper (note that the time confusingly goes to the left here):

But the paper linked above shows that a "wide" residual network with "only" 16 layers can outperform "deep" ones with 150+ layers. If this is true, then the whole point of the above figure breaks down. Or consider Ba and Caruana, 2014, Do Deep Nets Really Need to be Deep?:

In this paper we provide empirical evidence that shallow nets are capable of learning the same function as deep nets, and in some cases with the same number of parameters as the deep nets. We do this by first training a state-of-the-art deep model, and then training a shallow model to mimic the deep model. The mimic model is trained using the model compression scheme described in the next section. Remarkably, with model compression we are able to train shallow nets to be as accurate as some deep models, even though we are not able to train these shallow nets to be as accurate as the deep nets when the shallow nets are trained directly on the original labeled training data. If a shallow net with the same number of parameters as a deep net can learn to mimic a deep net with high fidelity, then it is clear that the function learned by that deep net does not really have to be deep.

If true, this would mean that the correct explanation is rather my bullet #2, and not #1 or #3. As I said --- nobody really knows for sure yet.

Concluding remarks: The amount of progress achieved in deep learning over the last ~10 years is truly amazing, but most of this progress was achieved by trial and error, and we still lack very basic understanding about what exactly makes deep nets work so well. Even the list of things that people consider to be crucial for setting up an effective deep network seems to change every couple of years.

The deep learning renaissance started in 2006 when Geoffrey Hinton (who had been working on neural networks for 20+ years without much interest from anybody) published a couple of breakthrough papers offering an effective way to train deep networks (Science paper, Neural computation paper). The trick was to use unsupervised pre-training before starting the gradient descent. These papers revolutionized the field, and for a couple of years people thought that unsupervised pre-training was the key. Then in 2010 Martens showed that deep neural networks can be trained with second-order methods (so-called Hessian-free methods) and can outperform networks trained with pre-training: Deep learning via Hessian-free optimization. Then in 2013 Sutskever et al. showed that stochastic gradient descent with some very clever tricks can outperform Hessian-free methods: On the importance of initialization and momentum in deep learning. Also, around 2010 people realized that using rectified linear units instead of sigmoid units makes a huge difference for gradient descent. Dropout appeared in 2014. Residual networks appeared in 2015. People keep coming up with more and more effective ways to train deep networks, and what seemed like a key insight 10 years ago is often considered a nuisance today.

All of that is largely driven by trial and error, and there is little understanding of what makes some things work so well and some other things not. Training deep networks is like a big bag of tricks. Successful tricks are usually rationalized post factum. We don't even know why deep networks reach a performance plateau; just 10 years ago people used to blame local minima, but the current thinking is that this is not the point (when the performance plateaus, the gradients tend to stay large). This is such a basic question about deep networks, and we don't even know this.

Update: This is more or less the subject of Ali Rahimi's NIPS 2017 talk on machine learning as alchemy: https://www.youtube.com/watch?v=Qi1Yry33TQE.
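As a rough back-of-the-envelope illustration of the fixed-parameter-budget comparisons mentioned above (my own sketch; the layer widths are arbitrary and not taken from Ba and Caruana or the Deep Learning book), one can count the weights of fully connected networks and match a single wide hidden layer against several narrower ones:

def n_params(widths):
    # weights + biases of a fully connected network with the given layer widths
    return sum(a * b + b for a, b in zip(widths[:-1], widths[1:]))

d_in, d_out = 784, 10                        # e.g. an MNIST-sized problem
deep = [d_in, 256, 256, 256, 256, d_out]     # four hidden layers of width 256
shallow = [d_in, 505, d_out]                 # one hidden layer, width chosen to match the budget

print("deep   :", n_params(deep))            # roughly 401k parameters
print("shallow:", n_params(shallow))         # roughly 401k parameters as well

The empirical question is then which of the two learns a given task better for the same budget.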
866
What is the difference between a neural network and a deep neural network, and why do the deep ones work better?
Good answers so far, though there are a couple of things nobody around here mentioned; here's my $0.02. I'll just answer in the form of a story, which should make things more fun and clear. No tl;dr here. In the process you should be able to understand what the difference is.

There are multiple reasons why DNNs sparked when they did (stars had to align, like with all similar things; it's just a matter of right place, right time, etc). One reason is the availability of data, lots of data (labeled data). If you want to be able to generalize and learn something like 'generic priors' or 'universal priors' (aka the basic building blocks that can be re-used between tasks / applications) then you need lots of data. And wild data, might I add, not sterile data sets carefully recorded in the lab with controlled lighting and all. Mechanical Turk made that (labeling) possible.

Second, the possibility to train larger networks faster using GPUs made experimentation faster. ReLU units made things computationally faster as well and provided their own regularization, since you needed to use more units in one layer to be able to compress the same information (layers were now more sparse), so they also went nicely with dropout. Also, they helped with an important problem that happens when you stack multiple layers. More about that later. There were also various other tricks that improved performance, like using mini-batches (which is in fact detrimental for final error) or convolutions (which actually don't capture as much variance as local receptive fields) but are computationally faster. In the meantime people were debating if they liked them more skinny or more chubby, smaller or taller, with or without freckles, etc. Optimization was like "does it fizz or does it bang", so research was moving towards more complex methods of training like conjugate gradient and Newton's method; finally they all realized there's no free lunch. Networks were burping.

What slowed things down was the vanishing gradient problem. People went like: whoa, that's far out, man! In a nutshell it means that it was hard to adjust the error on layers closer to the inputs. As you add more layers to the cake, it gets too wobbly. You couldn't back-propagate meaningful error back to the first layers. The more layers, the worse it got. Bummer. Some people figured out that using the cross-entropy as a loss function (well, again, for classification and image recognition) provides some sort of regularization and helps against the network getting saturated, so the gradient wasn't able to hide as well. What also made things possible was per-layer pre-training using unsupervised methods. Basically, you take a bunch of auto-encoders and learn increasingly less abstract representations as you increase the compression ratio. The weights from these networks were used to initialize the supervised version. This solved the vanishing gradient problem in another way: you're already starting supervised training from a much better starting position. So all the other networks got up and started to revolt. But the networks needed supervision anyway, otherwise it was impossible to keep the big data still.

Now, for the last part that finally sort of leads to your answer, which is too complex to put in a nutshell: why more layers and not just one? Because we can! And because of context and invariant feature descriptors. And pools. Here's an example: you have a data set of images; how are you going to train a plain NN using that data? Well, naively, you take, let's say, each row and concatenate them into one long vector, and that's your input. What do you learn? Well, some fuzzy nonsense functions that might not look like anything, because of the many, many types of variance that the objects in the image contain, and you are not able to distinguish between relevant and irrelevant things. And at some point the network needs to forget in order to be able to re-learn new stuff. So there's the capacity issue. This is more non-linear dynamics, but the intuition is that you need to increase the number of neurons to be able to include more information in your network. So the point is that if you just input the image as one piece, adding extra layers does not do too much for you, since you're not able to learn abstractions, which is very important. Doing things holistically thus does not work that well, unless you're doing simpler things with the network, like focusing on a specific type of object, so you limit yourself to one class and you pick out some global properties as a classification goal.

So what's there to do? Look at the edge of your screen and try to read this text. Problem? As stupid as it sounds, you need to look at what you're reading. Otherwise it's too fuzzy / there's not enough resolution / granularity. Let's call the focus area the receptive field. Networks need to be able to focus too. Basically, instead of using the whole image as input, you move a sliding window along the image and then you use that as input to the network (a bit less stochastic than what humans do). Now you also have a chance to capture correlations between pixels, and hence objects, and you can also distinguish between a sleepy cat sitting on a sofa and an upside-down cat bungee jumping. Neat, faith in humanity restored. The network can learn local abstractions in an image on multiple levels. The network learns filters, initially simple ones, and then builds on those to learn more complex filters.

So, to sum things up: receptive fields / convolutions, unsupervised initialization, rectified linear units, dropout or other regularization methods. If you're very serious about this I recommend you take a look at Schmidhuber's Deep Learning in Neural Networks: An Overview; here's the URL of the preprint: http://arxiv.org/abs/1404.7828 And remember: big learning, deep data. Word.
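To make the sliding-window / receptive-field idea above concrete, here is a minimal numpy sketch (my addition, not from the original answer; the function name and parameter values are just illustrative): it cuts an image into local patches so that each patch, rather than the whole flattened image, can be fed to a network.

import numpy as np

def extract_patches(img, k=5, stride=2):
    # slide a k x k window over a 2-D image and stack the flattened patches as rows
    H, W = img.shape
    patches = [img[i:i + k, j:j + k].ravel()
               for i in range(0, H - k + 1, stride)
               for j in range(0, W - k + 1, stride)]
    return np.stack(patches)

img = np.random.default_rng(0).random((28, 28))    # a toy 28 x 28 "image"
X = extract_patches(img)
print(X.shape)                                     # (144, 25): each row is one local receptive field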
867
What is the difference between a neural network and a deep neural network, and why do the deep ones work better?
In layman's terms, the main difference from classic neural networks is that deep networks have many more hidden layers. The idea is that the successive layers form several levels of abstraction. For example, in a deep neural network for object recognition:
Layer 1: single pixels
Layer 2: edges
Layer 3: forms (circles, squares)
Layer n: the whole object
You can find a good explanation at this question on Quora. And, if you are interested in this subject, I would recommend taking a look at this book.
868
What is the difference between a neural network and a deep neural network, and why do the deep ones work better?
NN:
- one hidden layer is enough, though it can have multiple layers nevertheless; left-to-right ordering (model: feed-forward NN)
- trained only in a supervised way (backpropagation)
- when multiple layers are used, all layers are trained at the same time (same algorithm: backpropagation); using many more layers is difficult because the errors become too small
- hard to understand what is learned at each layer

DNN:
- multiple layers are required; undirected edges (model: restricted Boltzmann machine)
- first trained in an unsupervised way, where the network learns relevant features by learning to reproduce its input, then trained in a supervised way that fine-tunes the features in order to classify
- the layers are trained one by one, from the input layer to the output layer (algorithm: contrastive divergence)
- each layer clearly contains features of increasing abstraction

The move to DNNs is due to three independent breakthroughs which happened in 2006. Regarding theorems on NNs, the one the question alludes to is the universal approximation theorem (Cybenko's theorem): a feed-forward neural network with a single hidden layer can approximate any continuous function. However, in practice it may require many more neurons if a single hidden layer is used.
869
What is the difference between a neural network and a deep neural network, and why do the deep ones work better?
To expand on David Gasquez's answer, one of the main differences between deep neural networks and traditional neural networks is that we don't just use backpropagation for deep neural nets. Why? Because backpropagation trains later layers more efficiently than it trains earlier layers: as you go earlier and earlier in the network, the errors get smaller and more diffuse. So a ten-layer network trained this way will basically be seven layers of random weights followed by three layers of fitted weights, and will do only about as well as a three-layer network. See here for more. So the conceptual breakthrough is treating the separate problems (the labeled layers) as separate problems: if we first try to solve the problem of building a generically good first layer, and then try to solve the problem of building a generically good second layer, eventually we'll have a deep feature space that we can feed into our actual problem.
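Here is a toy NumPy sketch (added for illustration; the weights and activations are random stand-ins, not from any real network) of why the error signal shrinks as it is propagated back through many sigmoid layers: each layer multiplies the gradient by a sigmoid derivative (at most 0.25) and a weight, so after a handful of layers very little signal is left for the early layers.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
grad = 1.0                                    # error signal at the output layer
for layer in range(10, 0, -1):                # walk back through 10 layers
    pre_activation = rng.normal()
    s = sigmoid(pre_activation)
    local_grad = s * (1.0 - s)                # sigmoid derivative, <= 0.25
    weight = rng.normal(scale=0.5)            # a typical smallish weight
    grad *= local_grad * weight               # chain rule, one layer back
    print(f"layer {layer:2d}: |gradient| ~ {abs(grad):.2e}")
```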
870
What is the difference between a neural network and a deep neural network, and why do the deep ones work better?
I was also a bit confused in the beginning by the difference between neural networks (NN) and deep neural networks (DNN); however, the 'depth' refers only to the number of parameters and layers, unfortunately. You can take it as some sort of re-branding under the so-called 'Canadian Mafia'. Several years ago, I also had Neural Networks as part of a class and we did digit recognition, wave approximation and similar applications using NNs, which had multiple hidden layers and outputs and all that jazz that DNNs have. However, what we didn't have then was computing power. What made the move to DNNs possible and desirable are the advances in hardware development. Simply put, now we can compute more, faster and in a more parallelized way (DNNs on GPUs), whereas before, time was the bottleneck for NNs. As referenced on Wikipedia's page for Deep Learning, the 'deep' part refers mostly to having features interact in a non-linear fashion over multiple layers, thereby performing feature extraction and transformation. This was also done in standard NNs, but at a smaller scale. On the same page you have the definition: 'A deep neural network (DNN) is an artificial neural network (ANN) with multiple hidden layers of units between the input and output layers.'
871
What is the difference between a neural network and a deep neural network, and why do the deep ones work better?
As far as I know, what is called a Deep Neural Network (DNN) today is not fundamentally or philosophically different from the old standard Neural Network (NN). Although, in theory, one can approximate an arbitrary NN using a shallow NN with only one hidden layer, this does not mean that the two networks will perform similarly when trained using the same algorithm and training data. In fact there is a growing interest in training shallow networks that perform similarly to deep networks. The way this is done, however, is by training a deep network first, and then training the shallow network to imitate the final output (i.e. the output of the penultimate layer) of the deep network. See, what makes deep architectures favorable is that today's training techniques (back-propagation) happen to work better when the neurons are laid out in a hierarchical structure. Another question that may be asked is: why did Neural Networks (DNNs in particular) become so popular so suddenly? To my understanding, the magic ingredients that made DNNs so popular recently are the following:

A. Improved datasets and data-processing capabilities
  1. Large-scale datasets with millions of diverse images became available
  2. Fast GPU implementations were made available to the public

B. Improved training algorithms and network architectures
  1. Rectified Linear Units (ReLU) instead of sigmoid or tanh
  2. Deep network architectures evolved over the years

A-1) Until very recently, at least in Computer Vision, we couldn't train models on millions of labeled images, simply because labeled datasets of that size did not exist. It turns out that, besides the number of images, the granularity of the label set is also a very crucial factor in the success of DNNs (see Figure 8 in this paper, by Azizpour et al.).

A-2) A lot of engineering effort has gone into making it possible to train DNNs that work well in practice, most notably the advent of GPU implementations. One of the first successful GPU implementations of DNNs runs on two parallel GPUs; yet it takes about a week to train a DNN on 1.2 million images of 1000 categories using high-end GPUs (see this paper, by Krizhevsky et al.).

B-1) The use of simple Rectified Linear Units (ReLU) instead of sigmoid and tanh functions is probably the biggest building block in making training of DNNs possible. Note that both sigmoid and tanh functions have almost zero gradient almost everywhere, depending on how fast they transition from the low activation level to the high one; in the extreme case, when the transition is sudden, we get a step function that has slope zero everywhere except at the one point where the transition happens.

B-2) The story of how neural network architectures developed over the years reminds me of how evolution changes an organism's structure in nature. Parameter sharing (e.g. in convolutional layers), dropout regularization, initialization, learning-rate schedules, spatial pooling, sub-sampling in the deeper layers, and many other tricks that are now considered standard in training DNNs were developed, evolved, and tailored over the years to make the training of deep networks possible the way it is today.
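As a small numerical companion to point B-1 (this snippet is mine, added purely for illustration), the derivatives of the saturating activations shrink toward zero away from the origin, while the ReLU derivative stays at 1 for any positive input:

```python
import numpy as np

x = np.linspace(-6.0, 6.0, 7)

sig = 1.0 / (1.0 + np.exp(-x))
sigmoid_grad = sig * (1.0 - sig)          # at most 0.25, ~0 for |x| large
tanh_grad = 1.0 - np.tanh(x) ** 2         # at most 1, ~0 for |x| large
relu_grad = (x > 0).astype(float)         # exactly 1 for x > 0, else 0

for xi, sg, tg, rg in zip(x, sigmoid_grad, tanh_grad, relu_grad):
    print(f"x = {xi:+.1f}   d(sigmoid) = {sg:.4f}   d(tanh) = {tg:.4f}   d(ReLU) = {rg:.0f}")
```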
872
What is the difference between a neural network and a deep neural network, and why do the deep ones work better?
The difference between a "Deep" NN and a standard NN is purely qualitative: there is no definition of what that "Deep" means. "Deep" can mean anything from the extremely sophisticated architectures that are used by Google, Facebook and co which have 50-80 or even more layers, to 2 hidden layers (4 layers total) architectures. I wouldn't be surprised if you could even find articles claiming to do deep learning with a single hidden layer, because "deep" doesn't mean much. "Neural network" is also a word that doesn't have a very precise meaning. It covers an extremely large ensemble of models, from random boltzman machines (which are undirected graphs) to feedforward architectures with various activation functions. Most NNs will be trained using backprop, but it doesn't have to be the case so even the training algorithms aren't very homogenous. Overall, deep learning, deep NNs and NNs have all become catch-all words which capture a multitude of approaches. For good introductory references into "what changed": Deep Learning of Representations: Looking Forward, Bengio, 2013 is a good review + perspective for the future. Also see Do Deep Nets Really Need to be Deep? Ba & Caruana, 2013 which illustrate that being deep might not be useful for representation but for learning.
873
What is the difference between a neural network and a deep neural network, and why do the deep ones work better?
I wouldn't say there is any big philosophical difference between NNs and DNNs (in fact I would say DNN is just a marketing term to distinguish them from the 'failed' NN). What has changed is the size of the data sets. Essentially, neural networks are currently the best $O(n)$ statistical estimators, working well for high-dimensional large datasets (e.g. ImageNet). I think you should step back and see that this has created a resurgence in shallow AI -- e.g. bag of words for sentiment analysis and other language applications, and visual bag of words was the leading approach to image recognition before DNNs. No one is saying bag of words is a true model of language, but it is an effective engineering solution. So I would say DNNs are a better 'visual bag of words' -- see e.g. Szegedy et al. 2013, Intriguing properties of neural networks, and Nguyen et al., Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images, where it is clear that there are no higher-order structures etc. being learned (or whatever is claimed for DNNs).
874
What is the difference between a neural network and a deep neural network, and why do the deep ones work better?
To answer the latter question, look at this paper by Telgarsky, which says that for a certain classification problem "all shallow networks with fewer than exponentially (in k) many nodes exhibit error at least 1/6, whereas a deep network with 2 nodes in each of 2k layers achieves zero error." The classification problem in question is the n-alternating-point problem, in which we consider the interval $[0,1-2^{-k}]$, so that the inputs $x_i$ are the $2^k$ uniformly distributed points in that interval, and the corresponding $y_i$ are given by $y_i=1$ if $i$ is odd, and $y_i=0$ if $i$ is even. We then ask: how well can shallow networks without exponential widths capture this relationship in comparison to deep networks with just two nodes in each layer? Essentially, we can approximate the data better (exactly, even) with a linear (in $k$) number of layers with just two nodes in each layer, whereas we would need exponentially many (in $k$) nodes to get the same result in a shallow network. The proof of the quotation involves noticing that the composition of non-linear activations applied to affine transformations (i.e. with a greater number of layers) manages to capture more variability in the data than summing those same functions (as when we add nodes to layers).
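For concreteness, here is a tiny NumPy sketch (mine, not from the answer) that just constructs the alternating-point data set described above for a small $k$; it only builds the inputs and labels, it does not train any network:

```python
import numpy as np

k = 3
n = 2 ** k
x = np.arange(n) * 2.0 ** (-k)          # 2^k evenly spaced points in [0, 1 - 2^-k]
y = np.arange(n) % 2                    # y_i = 1 if i is odd, 0 if i is even

for xi, yi in zip(x, y):
    print(f"x = {xi:.3f}  ->  y = {yi}")
```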
875
What is the difference between a neural network and a deep neural network, and why do the deep ones work better?
Deep Learning is a set of algorithms in machine learning that attempt to model high-level abstractions in data by using architectures composed of multiple non-linear transformations. Source: Arno Candel
876
What is the difference between off-policy and on-policy learning?
First of all, there's no reason that an agent has to take the greedy action; agents can explore or they can follow options. This is not what separates on-policy from off-policy learning. The reason that Q-learning is off-policy is that it updates its Q-values using the Q-value of the next state $s'$ and the greedy action $a'$. In other words, it estimates the return (total discounted future reward) for state-action pairs assuming a greedy policy were followed, despite the fact that it's not following a greedy policy. The reason that SARSA is on-policy is that it updates its Q-values using the Q-value of the next state $s'$ and the current policy's action $a''$. It estimates the return for state-action pairs assuming the current policy continues to be followed. The distinction disappears if the current policy is a greedy policy. However, such an agent would not be good since it never explores. Have you looked at the book available for free online? Richard S. Sutton and Andrew G. Barto. Reinforcement learning: An introduction. Second edition, MIT Press, Cambridge, MA, 2018.
877
What is the difference between off-policy and on-policy learning?
First of all, what does a policy (denoted by $\pi$) actually mean? A policy specifies the action $a$ that is taken in a state $s$ (or, more precisely, $\pi(a|s)$ is the probability that action $a$ is taken in state $s$). Second, what types of learning do we have? 1. Evaluate the $Q(s,a)$ function: predict the sum of future discounted rewards, where $a$ is an action and $s$ is a state. 2. Find $\pi$ (actually, $\pi(a|s)$) that yields the maximum reward. Back to the original question. On-policy and off-policy learning relate only to the first task: evaluating $Q(s,a)$. The difference is this: In on-policy learning, the $Q(s,a)$ function is learned from actions that we took using our current policy $\pi(a|s)$. In off-policy learning, the $Q(s,a)$ function is learned from taking different actions (for example, random actions). We don't even need a policy at all! This is the update function for the on-policy SARSA algorithm: $Q(s,a) \leftarrow Q(s,a)+\alpha(r+\gamma Q(s',a')-Q(s,a))$, where $a'$ is the action that was actually taken according to policy $\pi$. Compare it with the update function for the off-policy Q-learning algorithm: $Q(s,a) \leftarrow Q(s,a)+\alpha(r+\gamma \max_{a'}Q(s',a')-Q(s,a))$, where the maximum is taken over all actions $a'$ available in state $s'$.
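To make the two update rules above concrete, here is a minimal tabular sketch in Python (added for illustration; the table sizes, $\alpha$ and $\gamma$ are placeholder values, and action selection is left out):

```python
import numpy as np

n_states, n_actions = 5, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.99

def sarsa_update(Q, s, a, r, s_next, a_next):
    # On-policy: uses the action a_next that the current policy actually chose in s_next.
    Q[s, a] += alpha * (r + gamma * Q[s_next, a_next] - Q[s, a])

def q_learning_update(Q, s, a, r, s_next):
    # Off-policy: uses the greedy (max) value in s_next, regardless of what the agent does next.
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])

# Example single transition (s=0, a=1, reward=1.0, next state=2, next action=0):
sarsa_update(Q, 0, 1, 1.0, 2, 0)
q_learning_update(Q, 0, 1, 1.0, 2)
```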
878
What is the difference between off-policy and on-policy learning?
On-policy methods estimate the value of a policy while using it for control. In off-policy methods, the policy used to generate behaviour, called the behaviour policy, may be unrelated to the policy that is evaluated and improved, called the estimation policy. An advantage of this separation is that the estimation policy may be deterministic (e.g. greedy), while the behaviour policy can continue to sample all possible actions. For further details, see sections 5.4 and 5.6 of the book Reinforcement Learning: An Introduction by Barto and Sutton, first edition.
879
What is the difference between off-policy and on-policy learning?
The difference between off-policy and on-policy methods is that with the first you do not need to follow any specific policy; your agent could even behave randomly, and despite this, off-policy methods can still find the optimal policy. On-policy methods, on the other hand, are dependent on the policy used. In the case of Q-learning, which is off-policy, it will find the optimal policy independently of the policy used during exploration; however, this is true only when you visit the different states enough times. You can find in the original paper by Watkins the actual proof that shows this very nice property of Q-learning. There is, however, a trade-off: off-policy methods tend to be slower than on-policy methods. Here is a link with another interesting summary of the properties of both types of methods.
880
What is the difference between off-policy and on-policy learning?
On-policy learning: the same (ϵ-greedy) policy that is evaluated and improved is also used to select actions. For example, the SARSA TD learning algorithm. Off-policy learning: the (greedy) policy that is evaluated and improved is different from the (ϵ-greedy) policy that is used to select actions. For example, the Q-learning algorithm.
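As a small illustration of that split between the two policies (a sketch I added; the ϵ value and table shapes are arbitrary), the greedy target policy and the ϵ-greedy behaviour policy can be written as two separate functions over the same Q-table:

```python
import numpy as np

rng = np.random.default_rng(0)

def greedy_action(Q, s):
    # The (greedy) policy that is evaluated and improved, e.g. in Q-learning.
    return int(np.argmax(Q[s]))

def epsilon_greedy_action(Q, s, epsilon=0.1):
    # The exploratory policy actually used to select actions.
    if rng.random() < epsilon:
        return int(rng.integers(Q.shape[1]))  # random exploratory action
    return greedy_action(Q, s)

Q = np.zeros((5, 3))                          # toy table: 5 states, 3 actions
print(epsilon_greedy_action(Q, 0))
```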
881
What is the difference between off-policy and on-policy learning?
From the Sutton book: "The on-policy approach in the preceding section is actually a compromise—it learns action values not for the optimal policy, but for a near-optimal policy that still explores. A more straightforward approach is to use two policies, one that is learned about and that becomes the optimal policy, and one that is more exploratory and is used to generate behavior. The policy being learned about is called the target policy, and the policy used to generate behavior is called the behavior policy. In this case we say that learning is from data “off” the target policy, and the overall process is termed off-policy learning."
882
What is the difference between off-policy and on-policy learning?
This is the recursive version of the Q-function (according to the Bellman equation): $$Q_\pi(s_t,a_t)=\mathbb{E}_{\,r_t,\,s_{t+1}\,\sim\,E}\left[r(s_t,a_t)+\gamma\,\mathbb{E}_{\,a_{t+1}\,\sim\,\pi}\left[Q_\pi(s_{t+1}, a_{t+1})\right]\right]$$ Notice that the outer expectation exists because the current reward and the next state are sampled ($\sim$) from the environment ($E$). The inner expectation exists because the Q-value for the next state depends on the next action. If your policy is deterministic, there is no inner expectation: $a_{t+1}$ is a known value that depends only on the next state; let's call it $A(s_{t+1})$: $$Q_{det}(s_t,a_t)=\mathbb{E}_{\,r_t,\,s_{t+1}\,\sim\,E}\left[r(s_t,a_t)+\gamma\,Q_{det}(s_{t+1}, A(s_{t+1}))\right]$$ This means the Q-value depends only on the environment for deterministic policies. The optimal policy is always deterministic (it always takes the action that leads to the highest expected reward), and Q-learning directly approximates the optimal policy. Therefore the Q-values of this greedy agent depend only on the environment. Well, if the Q-values depend only on the environment, it doesn't matter how I explore the environment; that is, I can use an exploratory behaviour policy.
883
What is the difference between off-policy and on-policy learning?
On-policy methods attempt to evaluate or improve the policy that is used to make decisions, whereas off-policy methods evaluate or improve a policy different from that used to generate the data. [1] [1] Reinforcement Learning: An Introduction. Second edition, in progress. Richard S. Sutton and Andrew G. Barto, © 2014, 2015. A Bradford Book. The MIT Press.
884
What is the difference between off-policy and on-policy learning?
Here is the way I understood it; I hope it helps. On-policy learning updates the policy currently in use, while off-policy learning updates a different policy using the data collected from another policy. On-policy learning is a type of RL that updates the policy being used to take actions as the agent interacts with the environment. Specifically, the agent learns by following the current policy and then updates the policy based on the rewards received from those actions. (It is often used in situations where the agent's exploration of the environment is limited and the learning must be done with the current policy.) Off-policy learning updates a different policy than the one being used to take actions. This approach involves learning from the behaviour of an "older" policy (or another one), while simultaneously interacting with the environment using a newer, improved policy (the one currently being learned). (The main advantage of off-policy learning is that it allows for greater exploration of the environment, which can lead to better policies.) Technically, the stored experience that plays the role of the "book" in the example below is called a buffer. Example: an agent playing a game of chess. With on-policy learning, the agent would learn by playing the game using its current policy and then update its policy based on the rewards it receives (experience is sampled from the updated policy). But with off-policy learning, the agent might study a chess book to learn new strategies, and then incorporate those strategies into its policy while still playing games using its original policy (experience is sampled from the "book" policy here). I recommend reading the following article: https://medium.com/@sergey.levine/decisions-from-data-how-offline-reinforcement-learning-will-change-how-we-use-ml-24d98cb069b0
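To give the "buffer" remark a concrete shape (this sketch is mine; the class name and capacity are illustrative, not from the answer), off-policy methods commonly store past transitions in a structure like this and learn from random samples of it, even though those transitions were generated by older behaviour policies:

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores past transitions -- the 'book' an off-policy learner can study."""

    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)   # old experience is dropped when full

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # The sampled transitions may come from earlier (behaviour) policies,
        # which is exactly what makes learning from them off-policy.
        return random.sample(list(self.buffer), batch_size)

buf = ReplayBuffer(capacity=100)
buf.add(0, 1, 1.0, 2, False)
buf.add(2, 0, 0.0, 3, True)
print(buf.sample(2))
```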
885
What is the difference between off-policy and on-policy learning?
I find this helpful: Michael Herrmann: On-Policy and Off-Policy Algorithm
886
Crossed vs nested random effects: how do they differ and how are they specified correctly in lme4?
(This is a fairly long answer, there is a summary at the end) You are not wrong in your understanding of what nested and crossed random effects are in the scenario that you describe. However, your definition of crossed random effects is a little narrow. A more general definition of crossed random effects is simply: not nested. We will look at this at the end of this answer, but the bulk of the answer will focus on the scenario you presented, of classrooms within schools. First note that: Nesting is a property of the data, or rather the experimental design, not the model. Also, Nested data can be encoded in at least 2 different ways, and this is at the heart of the issue you found. The dataset in your example is rather large, so I will use another schools example from the internet to explain the issues. But first, consider the following over-simplified example: Here we have classes nested in schools, which is a familiar scenario. The important point here is that, between each school, the classes have the same identifier, even though they are distinct if they are nested. Class1 appears in School1, School2 and School3. However if the data are nested then Class1 in School1 is not the same unit of measurement as Class1 in School2 and School3. If they were the same, then we would have this situation: which means that every class belongs to every school. The former is a nested design, and the latter is a crossed design (some might also call it multiple membership. Edit: For a discussion of the differences between multiple membership and crossed random effects, see here ), and we would formulate these in lme4 using: (1|School/Class) or equivalently (1|School) + (1|Class:School) and (1|School) + (1|Class) respectively. Due to the ambiguity of whether there is nesting or crossing of random effects, it is very important to specify the model correctly as these models will produce different results, as we shall show below. Moreover, it is not possible to know, just by inspecting the data, whether we have nested or crossed random effects. This can only be determined with knowledge of the data and the experimental design. But first let us consider a case where the Class variable is coded uniquely across schools: There is no longer any ambiguity concerning nesting or crossing. The nesting is explicit. Let us now see this with an example in R, where we have 6 schools (labelled I-VI) and 4 classes within each school (labelled a to d): > dt <- read.table("http://bayes.acs.unt.edu:8083/BayesContent/class/Jon/R_SC/Module9/lmm.data.txt", header=TRUE, sep=",", na.strings="NA", dec=".", strip.white=TRUE) > # data was previously publicly available from > # http://researchsupport.unt.edu/class/Jon/R_SC/Module9/lmm.data.txt > # but the link is now broken > xtabs(~ school + class, dt) class school a b c d I 50 50 50 50 II 50 50 50 50 III 50 50 50 50 IV 50 50 50 50 V 50 50 50 50 VI 50 50 50 50 We can see from this cross tabulation that every class ID appears in every school, which satisfies your definition of crossed random effects (in this case we have fully, as opposed to partially, crossed random effects, because every class occurs in every school). So this is the same situation that we had in the first figure above. However, if the data are really nested and not crossed, then we need to explicitly tell lme4: > m0 <- lmer(extro ~ open + agree + social + (1 | school/class), data = dt) > summary(m0) Random effects: Groups Name Variance Std.Dev. 
class:school (Intercept) 8.2043 2.8643 school (Intercept) 93.8421 9.6872 Residual 0.9684 0.9841 Number of obs: 1200, groups: class:school, 24; school, 6 Fixed effects: Estimate Std. Error t value (Intercept) 60.2378227 4.0117909 15.015 open 0.0061065 0.0049636 1.230 agree -0.0076659 0.0056986 -1.345 social 0.0005404 0.0018524 0.292 > m1 <- lmer(extro ~ open + agree + social + (1 | school) + (1 |class), data = dt) summary(m1) Random effects: Groups Name Variance Std.Dev. school (Intercept) 95.887 9.792 class (Intercept) 5.790 2.406 Residual 2.787 1.669 Number of obs: 1200, groups: school, 6; class, 4 Fixed effects: Estimate Std. Error t value (Intercept) 60.198841 4.212974 14.289 open 0.010834 0.008349 1.298 agree -0.005420 0.009605 -0.564 social -0.001762 0.003107 -0.567 As expected, the results differ because m0 is a nested model while m1 is a crossed model. Now, if we introduce a new variable for the class identifier: > dt$classID <- paste(dt$school, dt$class, sep=".") > xtabs(~ school + classID, dt) classID school I.a I.b I.c I.d II.a II.b II.c II.d III.a III.b III.c III.d IV.a IV.b I 50 50 50 50 0 0 0 0 0 0 0 0 0 0 II 0 0 0 0 50 50 50 50 0 0 0 0 0 0 III 0 0 0 0 0 0 0 0 50 50 50 50 0 0 IV 0 0 0 0 0 0 0 0 0 0 0 0 50 50 V 0 0 0 0 0 0 0 0 0 0 0 0 0 0 VI 0 0 0 0 0 0 0 0 0 0 0 0 0 0 classID school IV.c IV.d V.a V.b V.c V.d VI.a VI.b VI.c VI.d I 0 0 0 0 0 0 0 0 0 0 II 0 0 0 0 0 0 0 0 0 0 III 0 0 0 0 0 0 0 0 0 0 IV 50 50 0 0 0 0 0 0 0 0 V 0 0 50 50 50 50 0 0 0 0 VI 0 0 0 0 0 0 50 50 50 50 The cross tabulation shows that each level of class occurs only in one level of school, as per your definition of nesting. This is also the case with your data, however it is difficult to show that with your data because it is very sparse. Both model formulations will now produce the same output (that of the nested model m0 above): > m2 <- lmer(extro ~ open + agree + social + (1 | school/classID), data = dt) > summary(m2) Random effects: Groups Name Variance Std.Dev. classID:school (Intercept) 8.2043 2.8643 school (Intercept) 93.8419 9.6872 Residual 0.9684 0.9841 Number of obs: 1200, groups: classID:school, 24; school, 6 Fixed effects: Estimate Std. Error t value (Intercept) 60.2378227 4.0117882 15.015 open 0.0061065 0.0049636 1.230 agree -0.0076659 0.0056986 -1.345 social 0.0005404 0.0018524 0.292 > m3 <- lmer(extro ~ open + agree + social + (1 | school) + (1 |classID), data = dt) > summary(m3) Random effects: Groups Name Variance Std.Dev. classID (Intercept) 8.2043 2.8643 school (Intercept) 93.8419 9.6872 Residual 0.9684 0.9841 Number of obs: 1200, groups: classID, 24; school, 6 Fixed effects: Estimate Std. Error t value (Intercept) 60.2378227 4.0117882 15.015 open 0.0061065 0.0049636 1.230 agree -0.0076659 0.0056986 -1.345 social 0.0005404 0.0018524 0.292 It is worth noting that crossed random effects do not have to occur within the same factor - in the above the crossing was completely within school. However, this does not have to be the case, and very often it is not. For example, sticking with a school scenario, if instead of classes within schools we have pupils within schools, and we were also interested in the doctors that the pupils were registered with, then we would also have nesting of pupils within doctors. There is no nesting of schools within doctors, or vice versa, so this is also an example of crossed random effects, and we say that schools and doctors are crossed. 
A similar scenario where crossed random effects occur is when individual observations are nested within two factors simultaneously, which commonly occurs with so-called repeated measures subject-item data. Typically each subject is measured/tested multiple times with/on different items and these same items are measured/tested by different subjects. Thus, observations are clustered within subjects and within items, but items are not nested within subjects or vice-versa. Again, we say that subjects and items are crossed. Summary: TL;DR The difference between crossed and nested random effects is that nested random effects occur when one factor (grouping variable) appears only within a particular level of another factor (grouping variable). This is specified in lme4 with: (1|group1/group2) where group2 is nested within group1. Crossed random effects are simply: not nested. This can occur with three or more grouping variables (factors) where one factor is separately nested in both of the others, or with two or more factors where individual observations are nested separately within the two factors. These are specified in lme4 with: (1|group1) + (1|group2)
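To make the summary concrete, here is a minimal lme4 sketch. The data frame d and the variables score, condition, subject, item, school and classID are hypothetical placeholders, not taken from the example above.
library(lme4)
# Crossed random effects: every subject responds to every item, so subjects and
# items are crossed (neither grouping factor is nested within the other)
m_crossed <- lmer(score ~ condition + (1 | subject) + (1 | item), data = d)
# Nested random effects: classes coded uniquely within schools
m_nested <- lmer(score ~ condition + (1 | school/classID), data = d)
# the nested formula expands to (1 | school) + (1 | classID:school)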
Crossed vs nested random effects: how do they differ and how are they specified correctly in lme4?
(This is a fairly long answer, there is a summary at the end) You are not wrong in your understanding of what nested and crossed random effects are in the scenario that you describe. However, your def
Crossed vs nested random effects: how do they differ and how are they specified correctly in lme4? (This is a fairly long answer, there is a summary at the end) You are not wrong in your understanding of what nested and crossed random effects are in the scenario that you describe. However, your definition of crossed random effects is a little narrow. A more general definition of crossed random effects is simply: not nested. We will look at this at the end of this answer, but the bulk of the answer will focus on the scenario you presented, of classrooms within schools. First note that: Nesting is a property of the data, or rather the experimental design, not the model. Also, Nested data can be encoded in at least 2 different ways, and this is at the heart of the issue you found. The dataset in your example is rather large, so I will use another schools example from the internet to explain the issues. But first, consider the following over-simplified example: Here we have classes nested in schools, which is a familiar scenario. The important point here is that, between each school, the classes have the same identifier, even though they are distinct if they are nested. Class1 appears in School1, School2 and School3. However if the data are nested then Class1 in School1 is not the same unit of measurement as Class1 in School2 and School3. If they were the same, then we would have this situation: which means that every class belongs to every school. The former is a nested design, and the latter is a crossed design (some might also call it multiple membership. Edit: For a discussion of the differences between multiple membership and crossed random effects, see here ), and we would formulate these in lme4 using: (1|School/Class) or equivalently (1|School) + (1|Class:School) and (1|School) + (1|Class) respectively. Due to the ambiguity of whether there is nesting or crossing of random effects, it is very important to specify the model correctly as these models will produce different results, as we shall show below. Moreover, it is not possible to know, just by inspecting the data, whether we have nested or crossed random effects. This can only be determined with knowledge of the data and the experimental design. But first let us consider a case where the Class variable is coded uniquely across schools: There is no longer any ambiguity concerning nesting or crossing. The nesting is explicit. Let us now see this with an example in R, where we have 6 schools (labelled I-VI) and 4 classes within each school (labelled a to d): > dt <- read.table("http://bayes.acs.unt.edu:8083/BayesContent/class/Jon/R_SC/Module9/lmm.data.txt", header=TRUE, sep=",", na.strings="NA", dec=".", strip.white=TRUE) > # data was previously publicly available from > # http://researchsupport.unt.edu/class/Jon/R_SC/Module9/lmm.data.txt > # but the link is now broken > xtabs(~ school + class, dt) class school a b c d I 50 50 50 50 II 50 50 50 50 III 50 50 50 50 IV 50 50 50 50 V 50 50 50 50 VI 50 50 50 50 We can see from this cross tabulation that every class ID appears in every school, which satisfies your definition of crossed random effects (in this case we have fully, as opposed to partially, crossed random effects, because every class occurs in every school). So this is the same situation that we had in the first figure above. 
However, if the data are really nested and not crossed, then we need to explicitly tell lme4: > m0 <- lmer(extro ~ open + agree + social + (1 | school/class), data = dt) > summary(m0) Random effects: Groups Name Variance Std.Dev. class:school (Intercept) 8.2043 2.8643 school (Intercept) 93.8421 9.6872 Residual 0.9684 0.9841 Number of obs: 1200, groups: class:school, 24; school, 6 Fixed effects: Estimate Std. Error t value (Intercept) 60.2378227 4.0117909 15.015 open 0.0061065 0.0049636 1.230 agree -0.0076659 0.0056986 -1.345 social 0.0005404 0.0018524 0.292 > m1 <- lmer(extro ~ open + agree + social + (1 | school) + (1 |class), data = dt) summary(m1) Random effects: Groups Name Variance Std.Dev. school (Intercept) 95.887 9.792 class (Intercept) 5.790 2.406 Residual 2.787 1.669 Number of obs: 1200, groups: school, 6; class, 4 Fixed effects: Estimate Std. Error t value (Intercept) 60.198841 4.212974 14.289 open 0.010834 0.008349 1.298 agree -0.005420 0.009605 -0.564 social -0.001762 0.003107 -0.567 As expected, the results differ because m0 is a nested model while m1 is a crossed model. Now, if we introduce a new variable for the class identifier: > dt$classID <- paste(dt$school, dt$class, sep=".") > xtabs(~ school + classID, dt) classID school I.a I.b I.c I.d II.a II.b II.c II.d III.a III.b III.c III.d IV.a IV.b I 50 50 50 50 0 0 0 0 0 0 0 0 0 0 II 0 0 0 0 50 50 50 50 0 0 0 0 0 0 III 0 0 0 0 0 0 0 0 50 50 50 50 0 0 IV 0 0 0 0 0 0 0 0 0 0 0 0 50 50 V 0 0 0 0 0 0 0 0 0 0 0 0 0 0 VI 0 0 0 0 0 0 0 0 0 0 0 0 0 0 classID school IV.c IV.d V.a V.b V.c V.d VI.a VI.b VI.c VI.d I 0 0 0 0 0 0 0 0 0 0 II 0 0 0 0 0 0 0 0 0 0 III 0 0 0 0 0 0 0 0 0 0 IV 50 50 0 0 0 0 0 0 0 0 V 0 0 50 50 50 50 0 0 0 0 VI 0 0 0 0 0 0 50 50 50 50 The cross tabulation shows that each level of class occurs only in one level of school, as per your definition of nesting. This is also the case with your data, however it is difficult to show that with your data because it is very sparse. Both model formulations will now produce the same output (that of the nested model m0 above): > m2 <- lmer(extro ~ open + agree + social + (1 | school/classID), data = dt) > summary(m2) Random effects: Groups Name Variance Std.Dev. classID:school (Intercept) 8.2043 2.8643 school (Intercept) 93.8419 9.6872 Residual 0.9684 0.9841 Number of obs: 1200, groups: classID:school, 24; school, 6 Fixed effects: Estimate Std. Error t value (Intercept) 60.2378227 4.0117882 15.015 open 0.0061065 0.0049636 1.230 agree -0.0076659 0.0056986 -1.345 social 0.0005404 0.0018524 0.292 > m3 <- lmer(extro ~ open + agree + social + (1 | school) + (1 |classID), data = dt) > summary(m3) Random effects: Groups Name Variance Std.Dev. classID (Intercept) 8.2043 2.8643 school (Intercept) 93.8419 9.6872 Residual 0.9684 0.9841 Number of obs: 1200, groups: classID, 24; school, 6 Fixed effects: Estimate Std. Error t value (Intercept) 60.2378227 4.0117882 15.015 open 0.0061065 0.0049636 1.230 agree -0.0076659 0.0056986 -1.345 social 0.0005404 0.0018524 0.292 It is worth noting that crossed random effects do not have to occur within the same factor - in the above the crossing was completely within school. However, this does not have to be the case, and very often it is not. For example, sticking with a school scenario, if instead of classes within schools we have pupils within schools, and we were also interested in the doctors that the pupils were registered with, then we would also have nesting of pupils within doctors. 
There is no nesting of schools within doctors, or vice versa, so this is also an example of crossed random effects, and we say that schools and doctors are crossed. A similar scenario where crossed random effects occur is when individual observations are nested within two factors simultaneously, which commonly occurs with so-called repeated measures subject-item data. Typically each subject is measured/tested multiple times with/on different items and these same items are measured/tested by different subjects. Thus, observations are clustered within subjects and within items, but items are not nested within subjects or vice-versa. Again, we say that subjects and items are crossed. Summary: TL;DR The difference between crossed and nested random effects is that nested random effects occur when one factor (grouping variable) appears only within a particular level of another factor (grouping variable). This is specified in lme4 with: (1|group1/group2) where group2 is nested within group1. Crossed random effects are simply: not nested. This can occur with three or more grouping variables (factors) where one factor is separately nested in both of the others, or with two or more factors where individual observations are nested separately within the two factors. These are specified in lme4 with: (1|group1) + (1|group2)
Crossed vs nested random effects: how do they differ and how are they specified correctly in lme4? (This is a fairly long answer, there is a summary at the end) You are not wrong in your understanding of what nested and crossed random effects are in the scenario that you describe. However, your def
887
What's the difference between Normalization and Standardization?
Normalization rescales the values into a range of [0,1]. This might be useful in some cases where all parameters need to have the same positive scale. However, this scaling is very sensitive to outliers: a single extreme value determines the range and squeezes the remaining data into a small part of [0,1]. $$ X_{changed} = \frac{X - X_{min}}{X_{max}-X_{min}} $$ Standardization rescales data to have a mean ($\mu$) of 0 and a standard deviation ($\sigma$) of 1 (unit variance). $$ X_{changed} = \frac{X - \mu}{\sigma} $$ For most applications standardization is recommended.
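A minimal R sketch of the two formulas on made-up numbers (the value 1000 is included only to show how an outlier affects min-max scaling):
x <- c(10, 20, 30, 40, 1000)               # made-up data with one outlier
x_norm <- (x - min(x)) / (max(x) - min(x)) # normalization to [0, 1]
x_std <- (x - mean(x)) / sd(x)             # standardization: mean 0, sd 1 (same as scale(x))
round(x_norm, 3)                           # the non-outlying values are crowded near 0
round(x_std, 3)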
What's the difference between Normalization and Standardization?
Normalization rescales the values into a range of [0,1]. This might be useful in some cases where all parameters need to have the same positive scale. However, the outliers from the data set are lost
What's the difference between Normalization and Standardization? Normalization rescales the values into a range of [0,1]. This might be useful in some cases where all parameters need to have the same positive scale. However, this scaling is very sensitive to outliers: a single extreme value determines the range and squeezes the remaining data into a small part of [0,1]. $$ X_{changed} = \frac{X - X_{min}}{X_{max}-X_{min}} $$ Standardization rescales data to have a mean ($\mu$) of 0 and a standard deviation ($\sigma$) of 1 (unit variance). $$ X_{changed} = \frac{X - \mu}{\sigma} $$ For most applications standardization is recommended.
What's the difference between Normalization and Standardization? Normalization rescales the values into a range of [0,1]. This might be useful in some cases where all parameters need to have the same positive scale. However, the outliers from the data set are lost
888
What's the difference between Normalization and Standardization?
In the business world, "normalization" typically means that the range of values are "normalized to be from 0.0 to 1.0". "Standardization" typically means that the range of values are "standardized" to measure how many standard deviations the value is from its mean. However, not everyone would agree with that. It's best to explain your definitions before you use them. In any case, your transformation needs to provide something useful. In your train/car example, do you gain anything out of knowing how many standard deviations from their mean, each value lies? If you plot those "standardized" measures against each other as an x-y plot, you might see a correlation (see the first graph on the right): http://en.wikipedia.org/wiki/Correlation_and_dependence If so, does that mean anything to you? As far as your second example goes, if you want to "equate" a GPA from one scale to another scale, what do these scales have in common? In other words, how could you transform those minimums to be equivalent, and the maximums to be equivalent? Here's an example of "normalization": Normalization Link Once you get your GPA and ACT scores in an interchangeable form, does it make sense to weigh the ACT and GPA scores differently? If so, what weighting means something to you? Edit 1 (05/03/2011) ========================================== First, I would check out the links suggested by whuber above. The bottom line is, in both of your two-variable problems, you are going to have to come up with an "equivalence" of one variable versus the other. And, a way to differentiate one variable from the other. In other words, even if you can simplify this to a simple linear relationship, you'll need "weights" to differentiate one variable from the other. Here's an example of a two variable problem: Multi-Attribute Utilities From the last page, if you can say that standardized train traffic U1(x) versus standardized car traffic U2(y) is "additively independent", then you might be able to get away with a simple equation such as: U(x, y) = k1*U1(x) + (1 - k1)*U2(y) Where k1=0.5 means you're indifferent to standardized car/train traffic. A higher k1 would mean train traffic U1(x) is more important. However, if these two variables are not "additively independent", then you'll have to use a more complicated equation. One possibility is shown on page 1: U(x, y) = k1*U1(x) + k2*U2(y) + (1-k1-k2)*U1(x)*U2(y) In either case, you'll have to come up with a utility U(x, y) that makes sense. The same general weighting/comparison concepts hold for your GPA/ACT problem. Even if they are "normalized" rather than "standardized". One last issue. I know you're not going to like this, but the definition of the term "additively independent" is on page 4 of the following link. I looked for a less geeky definition, but I couldn't find one. You might look around to find something better. Additively Independent Quoting the link: Intuitively, the agent prefers being both healthy and wealthy more than might be suggested by considering the two attributes separately. It thus displays a preference for probability distributions in which health and wealth are positively correlated. As suggested at the top of this response, if you plot standardized train traffic versus standardized car traffic on an x-y plot, you might see a correlation. If so, then you're stuck with the above non-linear utility equation or something similar.
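A small R sketch of the additive form above, with made-up standardized train/car traffic values and an assumed weight k1 (all numbers here are illustrative, not from the question):
train_z <- c(-1.2, 0.3, 1.5)            # made-up standardized train traffic
car_z <- c(-0.8, 0.1, 1.9)              # made-up standardized car traffic
k1 <- 0.7                               # assumed weight: train traffic counts for more
U <- k1 * train_z + (1 - k1) * car_z    # valid only under additive independence
U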
What's the difference between Normalization and Standardization?
In the business world, "normalization" typically means that the range of values are "normalized to be from 0.0 to 1.0". "Standardization" typically means that the range of values are "standardized"
What's the difference between Normalization and Standardization? In the business world, "normalization" typically means that the range of values are "normalized to be from 0.0 to 1.0". "Standardization" typically means that the range of values are "standardized" to measure how many standard deviations the value is from its mean. However, not everyone would agree with that. It's best to explain your definitions before you use them. In any case, your transformation needs to provide something useful. In your train/car example, do you gain anything out of knowing how many standard deviations from their mean, each value lies? If you plot those "standardized" measures against each other as an x-y plot, you might see a correlation (see the first graph on the right): http://en.wikipedia.org/wiki/Correlation_and_dependence If so, does that mean anything to you? As far as your second example goes, if you want to "equate" a GPA from one scale to another scale, what do these scales have in common? In other words, how could you transform those minimums to be equivalent, and the maximums to be equivalent? Here's an example of "normalization": Normalization Link Once you get your GPA and ACT scores in an interchangeable form, does it make sense to weigh the ACT and GPA scores differently? If so, what weighting means something to you? Edit 1 (05/03/2011) ========================================== First, I would check out the links suggested by whuber above. The bottom line is, in both of your two-variable problems, you are going to have to come up with an "equivalence" of one variable versus the other. And, a way to differentiate one variable from the other. In other words, even if you can simplify this to a simple linear relationship, you'll need "weights" to differentiate one variable from the other. Here's an example of a two variable problem: Multi-Attribute Utilities From the last page, if you can say that standardized train traffic U1(x) versus standardized car traffic U2(y) is "additively independent", then you might be able to get away with a simple equation such as: U(x, y) = k1*U1(x) + (1 - k1)*U2(y) Where k1=0.5 means you're indifferent to standardized car/train traffic. A higher k1 would mean train traffic U1(x) is more important. However, if these two variables are not "additively independent", then you'll have to use a more complicated equation. One possibility is shown on page 1: U(x, y) = k1*U1(x) + k2*U2(y) + (1-k1-k2)*U1(x)*U2(y) In either case, you'll have to come up with a utility U(x, y) that makes sense. The same general weighting/comparison concepts hold for your GPA/ACT problem. Even if they are "normalized" rather than "standardized". One last issue. I know you're not going to like this, but the definition of the term "additively independent" is on page 4 of the following link. I looked for a less geeky definition, but I couldn't find one. You might look around to find something better. Additively Independent Quoting the link: Intuitively, the agent prefers being both healthy and wealthy more than might be suggested by considering the two attributes separately. It thus displays a preference for probability distributions in which health and wealth are positively correlated. As suggested at the top of this response, if you plot standardized train traffic versus standardized car traffic on an x-y plot, you might see a correlation. If so, then you're stuck with the above non-linear utility equation or something similar.
What's the difference between Normalization and Standardization? In the business world, "normalization" typically means that the range of values are "normalized to be from 0.0 to 1.0". "Standardization" typically means that the range of values are "standardized"
889
What's the difference between Normalization and Standardization?
The answer is simple, but you're not going to like it: it depends. If you value 1 standard deviation from both scores equally, then standardization is the way to go (note: in fact, you're studentizing, because you're dividing by an estimate of the SD of the population). If not, standardization is still likely to be a good first step, after which you can give more weight to one of the scores by multiplying it by a well-chosen factor.
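For instance, a minimal R sketch of "standardize, then weight" with made-up GPA and ACT scores and an assumed weighting factor:
gpa <- c(3.1, 3.8, 2.9, 3.5)             # made-up GPA scores (0-4 scale)
act <- c(24, 33, 21, 28)                 # made-up ACT scores (1-36 scale)
gpa_z <- (gpa - mean(gpa)) / sd(gpa)     # studentized GPA
act_z <- (act - mean(act)) / sd(act)     # studentized ACT
w <- 1.5                                 # assumed: GPA counts 1.5 times as much as ACT
combined <- w * gpa_z + act_z
combined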
What's the difference between Normalization and Standardization?
The answer is simple, but you're not going to like it: it depends. If you value 1 standard deviation from both scores equally, then standardization is the way to go (note: in fact, you're studentizing
What's the difference between Normalization and Standardization? The answer is simple, but you're not going to like it: it depends. If you value 1 standard deviation from both scores equally, then standardization is the way to go (note: in fact, you're studentizing, because you're dividing by an estimate of the SD of the population). If not, standardization is still likely to be a good first step, after which you can give more weight to one of the scores by multiplying it by a well-chosen factor.
What's the difference between Normalization and Standardization? The answer is simple, but you're not going to like it: it depends. If you value 1 standard deviation from both scores equally, then standardization is the way to go (note: in fact, you're studentizing
890
What's the difference between Normalization and Standardization?
To solve the GPA/ACT or train/car problem, why not use the Geometric Mean? n√(a1 × a2 × ... × an) where the ai are the values and n is the number of values. The geometric mean ensures that each value, despite its scale, contributes equally to the mean. See more at Geometric Mean
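In R the geometric mean is a one-liner; the log-scale form is numerically safer when there are many or very large values (the numbers below are made up):
a <- c(3.2, 28, 0.85)            # made-up values on different scales
prod(a)^(1 / length(a))          # nth root of the product
exp(mean(log(a)))                # equivalent, but avoids overflow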
What's the difference between Normalization and Standardization?
To solve the GPA/ACT or train/car problem, why not use the Geometric Mean? n√(a1 × a2 × ... × an) Where a* is the value from the distribution and n is the index of the distribution. This geometric me
What's the difference between Normalization and Standardization? To solve the GPA/ACT or train/car problem, why not use the Geometric Mean? n√(a1 × a2 × ... × an) where the ai are the values and n is the number of values. The geometric mean ensures that each value, despite its scale, contributes equally to the mean. See more at Geometric Mean
What's the difference between Normalization and Standardization? To solve the GPA/ACT or train/car problem, why not use the Geometric Mean? n√(a1 × a2 × ... × an) Where a* is the value from the distribution and n is the index of the distribution. This geometric me
891
What's the difference between Normalization and Standardization?
In my field, data science, normalization is a transformation of the data that allows easy comparison of the data downstream. There are many types of normalization, scaling being one of them. You can also log-transform the data, or do anything else you want. The type of normalization you use depends on the outcome you want, since every normalization transforms the data into something else. Here are some of what I consider normalization examples: scaling normalizations and quantile normalization.
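As a rough illustration of those two examples, here is a minimal R sketch on a made-up matrix (columns = samples, rows = features); the quantile normalization shown here ignores ties, which real implementations handle more carefully:
set.seed(1)
m <- matrix(rexp(20), nrow = 5)          # made-up data: 5 features x 4 samples
# scaling normalization: rescale each column to [0, 1]
m_scaled <- apply(m, 2, function(col) (col - min(col)) / (max(col) - min(col)))
# quantile normalization: give the k-th smallest value in every column
# the mean of the k-th smallest values across columns
rank_means <- rowMeans(apply(m, 2, sort))
m_qnorm <- apply(m, 2, function(col) rank_means[rank(col)])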
What's the difference between Normalization and Standardization?
In my field, data science, normalization is a transformation of data which allows easy comparison of the data downstream. There are many types of normalizations. Scaling being one of them. You can als
What's the difference between Normalization and Standardization? In my field, data science, normalization is a transformation of the data that allows easy comparison of the data downstream. There are many types of normalization, scaling being one of them. You can also log-transform the data, or do anything else you want. The type of normalization you use depends on the outcome you want, since every normalization transforms the data into something else. Here are some of what I consider normalization examples: scaling normalizations and quantile normalization.
What's the difference between Normalization and Standardization? In my field, data science, normalization is a transformation of data which allows easy comparison of the data downstream. There are many types of normalizations. Scaling being one of them. You can als
892
Does Julia have any hope of sticking in the statistical community?
I think the key will be whether or not libraries start being developed for Julia. It's all well and good to see toy examples (even if they are complicated toys) showing that Julia blows R out of the water at tasks R is bad at. But poorly written loops and hand-coded algorithms are not why many of the people I know who use R use R. They use it because for nearly any statistical task under the sun, someone has written R code for it. R is both a programming language and a statistics package - at present Julia is only the former. I think it's possible to get there, but there are much more established languages (Python) that still struggle to be usable statistical toolkits.
Does Julia have any hope of sticking in the statistical community?
I think the key will be whether or not libraries start being developed for Julia. It's all well and good to see toy examples (even if they are complicated toys) showing that Julia blows R out of the w
Does Julia have any hope of sticking in the statistical community? I think the key will be whether or not libraries start being developed for Julia. It's all well and good to see toy examples (even if they are complicated toys) showing that Julia blows R out of the water at tasks R is bad at. But poorly written loops and hand-coded algorithms are not why many of the people I know who use R use R. They use it because for nearly any statistical task under the sun, someone has written R code for it. R is both a programming language and a statistics package - at present Julia is only the former. I think it's possible to get there, but there are much more established languages (Python) that still struggle to be usable statistical toolkits.
Does Julia have any hope of sticking in the statistical community? I think the key will be whether or not libraries start being developed for Julia. It's all well and good to see toy examples (even if they are complicated toys) showing that Julia blows R out of the w
893
Does Julia have any hope of sticking in the statistical community?
I agree with a lot of the other comments. "Hope"? Sure. I think Julia has learned a lot from what R and Python/NumPy/Pandas and other systems have done right and wrong over the years. If I were smarter than I am, and wanted to write a new programming language that would be the substrate for a statistical development environment in the future, it would look very much like Julia. That said, it'll be 5 years before this question could possibly be answered in hindsight. As of right now, Julia lacks the following critical aspects of a statistical programming system that could compete with R for day-to-day users (list updated over time...): optionally-ordered factor types; most statistical tests and statistical models; literate programming/reproducible analysis support; and R-class, or even Matlab-class, plotting. To compete with R, Julia and its add-on stats packages will need to be clean enough and complete enough that smart non-programmers, say grad students in the social sciences, could reasonably use them. There's a heck of a lot of work to get there. Maybe it'll happen, maybe it'll fizzle, maybe something else (R 3.0?) will supersede it. Update: Julia now supports DataFrames with missing data/NAs, modules/namespaces, formula types and model.matrix infrastructure, plotting (sorta), database support (but not to DataFrames yet), and passing arguments by keywords. There is also now an IDE (Julia Studio), Windows support, some statistical tests, and some date/time support.
Does Julia have any hope of sticking in the statistical community?
I agree with a lot of the other comments. "Hope"? Sure. I think Julia has learned a lot from what R and Python/NumPy/Pandas and other systems have done right and wrong over the years. If I were smarte
Does Julia have any hope of sticking in the statistical community? I agree with a lot of the other comments. "Hope"? Sure. I think Julia has learned a lot from what R and Python/NumPy/Pandas and other systems have done right and wrong over the years. If I were smarter than I am, and wanted to write a new programming language that would be the substrate for a statistical development environment in the future, it would look very much like Julia. That said, it'll be 5 years before this question could possibly be answered in hindsight. As of right now, Julia lacks the following critical aspects of a statistical programming system that could compete with R for day-to-day users (list updated over time...): optionally-ordered factor types; most statistical tests and statistical models; literate programming/reproducible analysis support; and R-class, or even Matlab-class, plotting. To compete with R, Julia and its add-on stats packages will need to be clean enough and complete enough that smart non-programmers, say grad students in the social sciences, could reasonably use them. There's a heck of a lot of work to get there. Maybe it'll happen, maybe it'll fizzle, maybe something else (R 3.0?) will supersede it. Update: Julia now supports DataFrames with missing data/NAs, modules/namespaces, formula types and model.matrix infrastructure, plotting (sorta), database support (but not to DataFrames yet), and passing arguments by keywords. There is also now an IDE (Julia Studio), Windows support, some statistical tests, and some date/time support.
Does Julia have any hope of sticking in the statistical community? I agree with a lot of the other comments. "Hope"? Sure. I think Julia has learned a lot from what R and Python/NumPy/Pandas and other systems have done right and wrong over the years. If I were smarte
894
Does Julia have any hope of sticking in the statistical community?
For me, one very important thing for a data analysis language is to have query/relational-algebra functionality with reasonable defaults and an interactively-oriented design, and ideally this should be built into the language. IMO, no FOSS language that I've used does this effectively, not even R. data.frame is very clunky to work with interactively - for example, it prints the whole data structure on invocation, the $ syntax is hard to work with programmatically, querying requires redundant self-reference (i.e., DF[DF$x < 10, ]), and joins and aggregation are awkward. data.table solves most of these annoyances, but as it is not part of the core implementation, most R code does not make use of its facilities. Pandas in Python suffers from the same faults. These gripes may seem nitpicky, but they accumulate and in aggregate end up costing a lot of time. I believe that if Julia is to succeed as a data analysis environment, effort must be devoted to implementing SQL-type operators (without the baggage of SQL syntax) on a user-friendly table data type.
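To make the complaint concrete, here is a small sketch with a made-up data frame (the data.table part assumes that package is installed):
DF <- data.frame(x = c(5, 12, 8, 20), y = c("a", "b", "c", "d"))   # made-up data
DF[DF$x < 10, ]        # base data.frame: the object must be named twice to filter rows
library(data.table)
DT <- as.data.table(DF)
DT[x < 10]             # data.table: column names are visible inside the brackets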
Does Julia have any hope of sticking in the statistical community?
For me, one very important thing for a data analysis language is to have query/relational algebra functionality with reasonable defaults and interactively-oriented design, and ideally this should be a
Does Julia have any hope of sticking in the statistical community? For me, one very important thing for a data analysis language is to have query/relational-algebra functionality with reasonable defaults and an interactively-oriented design, and ideally this should be built into the language. IMO, no FOSS language that I've used does this effectively, not even R. data.frame is very clunky to work with interactively - for example, it prints the whole data structure on invocation, the $ syntax is hard to work with programmatically, querying requires redundant self-reference (i.e., DF[DF$x < 10, ]), and joins and aggregation are awkward. data.table solves most of these annoyances, but as it is not part of the core implementation, most R code does not make use of its facilities. Pandas in Python suffers from the same faults. These gripes may seem nitpicky, but they accumulate and in aggregate end up costing a lot of time. I believe that if Julia is to succeed as a data analysis environment, effort must be devoted to implementing SQL-type operators (without the baggage of SQL syntax) on a user-friendly table data type.
Does Julia have any hope of sticking in the statistical community? For me, one very important thing for a data analysis language is to have query/relational algebra functionality with reasonable defaults and interactively-oriented design, and ideally this should be a
895
Does Julia have any hope of sticking in the statistical community?
I can sign under what Dirk and EpiGrad said; yet there is one more thing that makes R a unique language in its niche -- its data-oriented type system. R's type system was designed especially for handling data; that's why it is vector-centered and has things like data.frames, factors, NAs and attributes. Julia's types, on the other hand, are oriented toward numerical performance, so we have scalars, well-defined storage modes, unions and structs. This may look benign, but everyone who has ever tried to do stats with MATLAB knows that it really hurts. So, at least for me, Julia doesn't offer anything that I cannot get by fixing R's speed with a few-line C chunk, and it kills a lot of really useful expressiveness.
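A tiny R sketch of the data-oriented features mentioned above (all values made up):
x <- factor(c("low", "high", NA, "low"), levels = c("low", "high"))
x                      # a factor that carries a missing value as NA
attributes(x)          # the levels and class travel with the object as attributes
d <- data.frame(g = x, y = c(1.2, 3.4, NA, 0.7))
summary(d)             # NA handling is built into the basic summaries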
Does Julia have any hope of sticking in the statistical community?
I can sign under what Dirk and EpiGrad said; yet there is one more thing that makes R a unique language in its niche -- its data-oriented type system. R's type system was designed especially for handling data; that's w
Does Julia have any hope of sticking in the statistical community? I can sign under what Dirk and EpiGrad said; yet there is one more thing that makes R a unique language in its niche -- its data-oriented type system. R's type system was designed especially for handling data; that's why it is vector-centered and has things like data.frames, factors, NAs and attributes. Julia's types, on the other hand, are oriented toward numerical performance, so we have scalars, well-defined storage modes, unions and structs. This may look benign, but everyone who has ever tried to do stats with MATLAB knows that it really hurts. So, at least for me, Julia doesn't offer anything that I cannot get by fixing R's speed with a few-line C chunk, and it kills a lot of really useful expressiveness.
Does Julia have any hope of sticking in the statistical community? I can sign under what Dirk and EpiGrad said; yet there is one more thing that makes R a unique language in its niche -- its data-oriented type system. R's type system was designed especially for handling data; that's w
896
Does Julia have any hope of sticking in the statistical community?
I can see Julia replacing Matlab, which would be a huge service for humanity. To replace R, you'd need to consider all of the things that Neil G, Harlan, and others have mentioned, plus one big factor that I don't believe has been addressed: easy installation of the application and its libraries. Right now, you can download a binary of R for Mac, Windows, or Linux. It works out of the box with a large selection of statistical methods. If you want to download a package, it's a simple command or mouse click. It just works. I went to download Julia and it's not simple. Even if you download the binary, you have to have gfortran installed in order to get the proper libraries. I downloaded the source and tried to make and it failed with no really useful message. I have an undergraduate and a graduate degree in computer science, so I could poke around and get it to work if I was so inclined. (I'm not.) Will Joe Statistician do that? R not only has a huge selection of packages, it has a fairly sophisticated system that makes binaries of the application and almost all packages, automatically. If, for some reason, you need to compile a package from source, that's not really any more difficult (as long as you have an appropriate compiler, etc, installed on your system). You can't ignore this infrastructure, do everything via github, and expect wide adoption. EDIT: I wanted to fool around with Julia -- it looks exciting. Two problems: 1) When I tried installing additional packages (forget what they're called in Julia), it failed with obscure errors. Evidently my Mac doesn't have a make-like tool that they expected. Not only does it fail, but it leaves stuff lying around that I have to manually delete or other installs will fail. 2) They force certain spacing in a line of code. I don't have the details in front of me, but it has to do with macros and not having a space between the macro and the parenthesis opening its arguments. That kind of restriction really bugs me, since I've developed my code formatting over many years and languages and I do actually put a space between a function/macro name and the opening parenthesis. Some code formatting restrictions I understand, but whitespace within a line?
Does Julia have any hope of sticking in the statistical community?
I can see Julia replacing Matlab, which would be a huge service for humanity. To replace R, you'd need to consider all of the things that Neil G, Harlan, and others have mentioned, plus one big factor
Does Julia have any hope of sticking in the statistical community? I can see Julia replacing Matlab, which would be a huge service for humanity. To replace R, you'd need to consider all of the things that Neil G, Harlan, and others have mentioned, plus one big factor that I don't believe has been addressed: easy installation of the application and its libraries. Right now, you can download a binary of R for Mac, Windows, or Linux. It works out of the box with a large selection of statistical methods. If you want to download a package, it's a simple command or mouse click. It just works. I went to download Julia and it's not simple. Even if you download the binary, you have to have gfortran installed in order to get the proper libraries. I downloaded the source and tried to make and it failed with no really useful message. I have an undergraduate and a graduate degree in computer science, so I could poke around and get it to work if I was so inclined. (I'm not.) Will Joe Statistician do that? R not only has a huge selection of packages, it has a fairly sophisticated system that makes binaries of the application and almost all packages, automatically. If, for some reason, you need to compile a package from source, that's not really any more difficult (as long as you have an appropriate compiler, etc, installed on your system). You can't ignore this infrastructure, do everything via github, and expect wide adoption. EDIT: I wanted to fool around with Julia -- it looks exciting. Two problems: 1) When I tried installing additional packages (forget what they're called in Julia), it failed with obscure errors. Evidently my Mac doesn't have a make-like tool that they expected. Not only does it fail, but it leaves stuff lying around that I have to manually delete or other installs will fail. 2) They force certain spacing in a line of code. I don't have the details in front of me, but it has to do with macros and not having a space between the macro and the parenthesis opening its arguments. That kind of restriction really bugs me, since I've developed my code formatting over many years and languages and I do actually put a space between a function/macro name and the opening parenthesis. Some code formatting restrictions I understand, but whitespace within a line?
Does Julia have any hope of sticking in the statistical community? I can see Julia replacing Matlab, which would be a huge service for humanity. To replace R, you'd need to consider all of the things that Neil G, Harlan, and others have mentioned, plus one big factor
897
Does Julia have any hope of sticking in the statistical community?
The Julia language is pretty new; its time in the spotlight can be measured in weeks (even though its development time can of course be measured in years). Now those weeks in the spotlight were very exciting weeks---see for example the recent talk at Stanford where "it had just started"---but what you ask for in terms of broader infrastructure and package support will take much longer to materialize. So I'd keep using R, and be mindful of the developing alternatives. Last year a lot of people went gaga over Clojure; this year Julia is the reigning new flavour. We'll see if it sticks.
Does Julia have any hope of sticking in the statistical community?
The Julia language is pretty new; its time in the spotlight can be measured in weeks (even though its development time can of course be measured in years). Now those weeks in the spotlight were ver
Does Julia have any hope of sticking in the statistical community? The Julia language is pretty new; its time in the spotlight can be measured in weeks (even though its development time can of course be measured in years). Now those weeks in the spotlight were very exciting weeks---see for example the recent talk at Stanford where "it had just started"---but what you ask for in terms of broader infrastructure and package support will take much longer to materialize. So I'd keep using R, and be mindful of the developing alternatives. Last year a lot of people went gaga over Clojure; this year Julia is the reigning new flavour. We'll see if it sticks.
Does Julia have any hope of sticking in the statistical community? The Julia language is pretty new; its time in the spotlight can be measured in weeks (even though its development time can of course be measured in years). Now those weeks in the spotlight were ver
898
Does Julia have any hope of sticking in the statistical community?
Bruce Tate here, author of Seven Languages in Seven Weeks. Here are a few thoughts. I am working on Julia for the followup book. The following is just my opinion after a few weeks of play. There are two fundamental forces at play. First, all languages have a lifespan. R will be replaced some day. We don't know when. New languages have an extremely difficult time evolving. When a new language does evolve, it usually solves some overwhelming pain point. These two things are related. To me, we're starting to see a theme taking shape around languages like R. It's not fast enough, and it's harder than it needs to be. Those who can live within a certain performance envelope and stay within established libraries are fine. Those who can't need more, and they're starting to look for more. The thing is, computer architectures are changing, and to take advantage of them, the language and its constructs need to be constructed in a certain way. Julia's take on concurrency is interesting. It optimizes the right thing for such a language: transparent distribution and the efficient movement of data between processes. When I use Julia for typical tasks, maps and transforms and the like, I am just calling functions. I don't have to worry about the plumbing. To me, the fact that Julia is faster on one processor is interesting, but not overly damning for R. The thing that is interesting to me is that as processors depend more and more on multicore for performance, technical computing problems are just about ideally positioned to take the best possible advantage, given the right language. The other feature that will help that happen is indeed macros. The pace of the language is just intense right now. Macros let you build with bigger, cleaner building blocks. Looking at libraries is interesting but doesn't tell the whole picture. You need to look at the growth of libraries. Julia's trajectory is pretty much spot on here. Clojure is interesting to some because there's no technical language that does what R can, so some look to a general purpose language to fill that void. I am actually a huge fan. But Clojure is a pretty serious brain warp. Clojure will be there for programmers who need to do technical computing. It won't be for engineers and scientists. There's just too much to learn. So to me, Julia or something like it will absolutely replace R some day. It's a matter of time.
Does Julia have any hope of sticking in the statistical community?
Bruce Tate here, author of Seven Languages in Seven Weeks. Here are a few thoughts. I am working on Julia for the followup book. The following is just my opinion after a few weeks of play. There are
Does Julia have any hope of sticking in the statistical community? Bruce Tate here, author of Seven Languages in Seven Weeks. Here are a few thoughts. I am working on Julia for the followup book. The following is just my opinion after a few weeks of play. There are two fundamental forces at play. First, all languages have a lifespan. R will be replaced some day. We don't know when. New languages have an extremely difficult time evolving. When a new language does evolve, it usually solves some overwhelming pain point. These two things are related. To me, we're starting to see a theme taking shape around languages like R. It's not fast enough, and it's harder than it needs to be. Those who can live within a certain performance envelope and stay within established libraries are fine. Those who can't need more, and they're starting to look for more. The thing is, computer architectures are changing, and to take advantage of them, the language and its constructs need to be constructed in a certain way. Julia's take on concurrency is interesting. It optimizes the right thing for such a language: transparent distribution and the efficient movement of data between processes. When I use Julia for typical tasks, maps and transforms and the like, I am just calling functions. I don't have to worry about the plumbing. To me, the fact that Julia is faster on one processor is interesting, but not overly damning for R. The thing that is interesting to me is that as processors depend more and more on multicore for performance, technical computing problems are just about ideally positioned to take the best possible advantage, given the right language. The other feature that will help that happen is indeed macros. The pace of the language is just intense right now. Macros let you build with bigger, cleaner building blocks. Looking at libraries is interesting but doesn't tell the whole picture. You need to look at the growth of libraries. Julia's trajectory is pretty much spot on here. Clojure is interesting to some because there's no technical language that does what R can, so some look to a general purpose language to fill that void. I am actually a huge fan. But Clojure is a pretty serious brain warp. Clojure will be there for programmers who need to do technical computing. It won't be for engineers and scientists. There's just too much to learn. So to me, Julia or something like it will absolutely replace R some day. It's a matter of time.
Does Julia have any hope of sticking in the statistical community? Bruce Tate here, author of Seven Languages in Seven Weeks. Here are a few thoughts. I am working on Julia for the followup book. The following is just my opinion after a few weeks of play. There are
899
Does Julia have any hope of sticking in the statistical community?
Every time I see a new language, I ask myself why an existing language can't be improved instead. Python's big advantages are: a rich set of modules (not just statistics, but plotting libraries, output to PDF, etc.); language constructs that you end up needing in the long run (object-oriented constructs you need in a big project; decorators, closures, etc. that simplify development); many tutorials and a large support community; and access to mapreduce, if you have a lot of data to process and don't mind paying a few pennies to run it on a cluster. In order to overtake R, Julia, etc., Python could use: development of just-in-time compilation for restricted Python to give you more speed on a single machine (but mapreduce is still better if you can stand the latency); and a richer statistical library.
Does Julia have any hope of sticking in the statistical community?
Every time I see a new language, I ask myself why an existing language can't be improved instead. Python's big advantages are a rich set of modules (not just statistics, but plotting libraries, outpu
Does Julia have any hope of sticking in the statistical community? Every time I see a new language, I ask myself why an existing language can't be improved instead. Python's big advantages are: a rich set of modules (not just statistics, but plotting libraries, output to PDF, etc.); language constructs that you end up needing in the long run (object-oriented constructs you need in a big project; decorators, closures, etc. that simplify development); many tutorials and a large support community; and access to mapreduce, if you have a lot of data to process and don't mind paying a few pennies to run it on a cluster. In order to overtake R, Julia, etc., Python could use: development of just-in-time compilation for restricted Python to give you more speed on a single machine (but mapreduce is still better if you can stand the latency); and a richer statistical library.
Does Julia have any hope of sticking in the statistical community? Every time I see a new language, I ask myself why an existing language can't be improved instead. Python's big advantages are a rich set of modules (not just statistics, but plotting libraries, outpu
900
Does Julia have any hope of sticking in the statistical community?
Julia will not take over R very soon. Check out Microsoft R Open: https://mran.revolutionanalytics.com/open/ This is an enhanced version of R that automatically uses all the cores of your computer. It is the same R, same language, same packages. When you install it, RStudio will also use it in the console. MRO is even faster than Julia. I do a lot of heavy-duty computing and have used Julia for more than a year. I switched to R recently because R has better support and RStudio is an awesome editor. Julia is still at an early stage and will probably not catch up with Python or R very soon.
Does Julia have any hope of sticking in the statistical community?
Julia will not take over R very soon. Check out Microsoft R open. https://mran.revolutionanalytics.com/open/ This is an enhanced version of R that automatically uses all the cores of your computer. It
Does Julia have any hope of sticking in the statistical community? Julia will not take over R very soon. Check out Microsoft R Open: https://mran.revolutionanalytics.com/open/ This is an enhanced version of R that automatically uses all the cores of your computer. It is the same R, same language, same packages. When you install it, RStudio will also use it in the console. MRO is even faster than Julia. I do a lot of heavy-duty computing and have used Julia for more than a year. I switched to R recently because R has better support and RStudio is an awesome editor. Julia is still at an early stage and will probably not catch up with Python or R very soon.
Does Julia have any hope of sticking in the statistical community? Julia will not take over R very soon. Check out Microsoft R open. https://mran.revolutionanalytics.com/open/ This is an enhanced version of R that automatically uses all the cores of your computer. It