How to understand degrees of freedom?
This is a subtle question. It takes a thoughtful person not to understand those quotations! Although they are suggestive, it turns out that none of them is exactly or generally correct. I haven't the time (and there isn't the space here) to give a full exposition, but I would like to share one approach and an insight that it suggests.

Where does the concept of degrees of freedom (DF) arise? The contexts in which it's found in elementary treatments are:

- The Student t-test and its variants such as the Welch or Satterthwaite solutions to the Behrens-Fisher problem (where two populations have different variances).
- The Chi-squared distribution (defined as a sum of squares of independent standard Normals), which is implicated in the sampling distribution of the variance.
- The F-test (of ratios of estimated variances).
- The Chi-squared test, comprising its uses in (a) testing for independence in contingency tables and (b) testing for goodness of fit of distributional estimates.

In spirit, these tests run a gamut from being exact (the Student t-test and F-test for Normal variates) to being good approximations (the Student t-test and the Welch/Satterthwaite tests for not-too-badly-skewed data) to being based on asymptotic approximations (the Chi-squared test). An interesting aspect of some of these is the appearance of non-integral "degrees of freedom" (the Welch/Satterthwaite tests and, as we will see, the Chi-squared test). This is of especial interest because it is the first hint that DF is not any of the things claimed of it.

We can dispose right away of some of the claims in the question. Because "final calculation of a statistic" is not well-defined (it apparently depends on what algorithm one uses for the calculation), it can be no more than a vague suggestion and is worth no further criticism. Similarly, neither "number of independent scores that go into the estimate" nor "the number of parameters used as intermediate steps" are well-defined.

"Independent pieces of information that go into [an] estimate" is difficult to deal with, because there are two different but intimately related senses of "independent" that can be relevant here. One is independence of random variables; the other is functional independence. As an example of the latter, suppose we collect morphometric measurements of subjects--say, for simplicity, the three side lengths $X$, $Y$, $Z$, surface areas $S=2(XY+YZ+ZX)$, and volumes $V=XYZ$ of a set of wooden blocks. The three side lengths can be considered independent random variables, but all five variables are dependent RVs. The five are also functionally dependent because the codomain (not the "domain"!) of the vector-valued random variable $(X,Y,Z,S,V)$ traces out a three-dimensional manifold in $\mathbb{R}^5$. (Thus, locally at any point $\omega\in\mathbb{R}^5$, there are two functions $f_\omega$ and $g_\omega$ for which $f_\omega(X(\psi),\ldots,V(\psi))=0$ and $g_\omega(X(\psi),\ldots,V(\psi))=0$ for points $\psi$ "near" $\omega$ and the derivatives of $f$ and $g$ evaluated at $\omega$ are linearly independent.) However--here's the kicker--for many probability measures on the blocks, subsets of the variables such as $(X,S,V)$ are dependent as random variables but functionally independent.
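To make the block example tangible, here is a small R sketch (mine, not whuber's) that simulates such blocks under an assumed uniform distribution of side lengths: every one of the five variables fluctuates randomly, yet $S$ and $V$ are exact functions of the sides, so the five-dimensional vector is confined to a three-dimensional surface.

```r
# Illustrative sketch of the wooden-block example (assumed uniform side lengths).
set.seed(17)
X <- runif(1e4, 1, 10); Y <- runif(1e4, 1, 10); Z <- runif(1e4, 1, 10)
S <- 2 * (X * Y + Y * Z + Z * X)    # surface area: an exact function of the sides
V <- X * Y * Z                      # volume: likewise determined by the sides
round(cor(cbind(X, Y, Z, S, V)), 2) # S and V are correlated with the sides and with each other
# (X, S, V), by contrast, satisfies no exact relation: its scatter fills a
# three-dimensional region, so those three are functionally independent
# even though they are statistically dependent.
pairs(cbind(X, S, V)[1:500, ])
```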
Having been alerted by these potential ambiguities, let's hold up the Chi-squared goodness of fit test for examination, because (a) it's simple, (b) it's one of the common situations where people really do need to know about DF to get the p-value right and (c) it's often used incorrectly. Here's a brief synopsis of the least controversial application of this test:

- You have a collection of data values $(x_1, \ldots, x_n)$, considered as a sample of a population.
- You have estimated some parameters $\theta_1, \ldots, \theta_p$ of a distribution. For example, you estimated the mean $\theta_1$ and standard deviation $\theta_2 = \theta_p$ of a Normal distribution, hypothesizing that the population is normally distributed but not knowing (in advance of obtaining the data) what $\theta_1$ or $\theta_2$ might be.
- In advance, you created a set of $k$ "bins" for the data. (It may be problematic when the bins are determined by the data, even though this is often done.) Using these bins, the data are reduced to the set of counts within each bin. Anticipating what the true values of $\theta$ might be, you have arranged it so (hopefully) each bin will receive approximately the same count. (Equal-probability binning assures the chi-squared distribution really is a good approximation to the true distribution of the chi-squared statistic about to be described.)
- You have a lot of data--enough to assure that almost all bins ought to have counts of 5 or greater. (This, we hope, will enable the sampling distribution of the $\chi^2$ statistic to be approximated adequately by some $\chi^2$ distribution.)

Using the parameter estimates, you can compute the expected count in each bin. The Chi-squared statistic is the sum of the ratios

$$\frac{(\text{observed}-\text{expected})^2}{\text{expected}}.$$

This, many authorities tell us, should have (to a very close approximation) a Chi-squared distribution. But there's a whole family of such distributions. They are differentiated by a parameter $\nu$ often referred to as the "degrees of freedom." The standard reasoning about how to determine $\nu$ goes like this:

- I have $k$ counts. That's $k$ pieces of data. But there are (functional) relationships among them. To start with, I know in advance that the sum of the counts must equal $n$. That's one relationship.
- I estimated two (or $p$, generally) parameters from the data. That's two (or $p$) additional relationships, giving $p+1$ total relationships.
- Presuming they (the parameters) are all (functionally) independent, that leaves only $k-p-1$ (functionally) independent "degrees of freedom": that's the value to use for $\nu$.

The problem with this reasoning (which is the sort of calculation the quotations in the question are hinting at) is that it's wrong except when some special additional conditions hold. Moreover, those conditions have nothing to do with independence (functional or statistical), with numbers of "components" of the data, with the numbers of parameters, nor with anything else referred to in the original question.

Let me show you with an example. (To make it as clear as possible, I'm using a small number of bins, but that's not essential.) Let's generate 20 independent and identically distributed (iid) standard Normal variates and estimate their mean and standard deviation with the usual formulas (mean = sum/count, etc.). To test goodness of fit, create four bins with cutpoints at the quartiles of a standard normal: -0.675, 0, +0.675, and use the bin counts to generate a Chi-squared statistic.
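Before running the full simulation (whuber's own code appears at the end of the answer), here is a minimal single-sample sketch (mine) of the naive recipe just described, with the parameters estimated from the raw data:

```r
# One sample of n = 20 standard Normal values, binned at the quartile cutpoints.
set.seed(1)
n <- 20
x <- rnorm(n)
bins <- qnorm(seq(0, 1, 1/4))                # -Inf, -0.674, 0, 0.674, Inf
observed <- tabulate(cut(x, bins), nbins = 4)
# Naive parameter estimates from the raw data (this is the step that goes wrong).
m <- mean(x); s <- sd(x)
expected <- n * diff(pnorm(bins, m, s))
sum((observed - expected)^2 / expected)      # the chi-squared statistic
```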
Repeat as patience allows; I had time to do 10,000 repetitions.

The standard wisdom about DF says we have 4 bins and 1+2 = 3 constraints, implying the distribution of these 10,000 Chi-squared statistics should follow a Chi-squared distribution with 1 DF. Here's the histogram:

The dark blue line graphs the PDF of a $\chi^2(1)$ distribution--the one we thought would work--while the dark red line graphs that of a $\chi^2(2)$ distribution (which would be a good guess if someone were to tell you that $\nu=1$ is incorrect). Neither fits the data.

You might expect the problem to be due to the small size of the data sets ($n$=20) or perhaps the small size of the number of bins. However, the problem persists even with very large datasets and larger numbers of bins: it is not merely a failure to reach an asymptotic approximation.

Things went wrong because I violated two requirements of the Chi-squared test:

1. You must use the Maximum Likelihood estimate of the parameters. (This requirement can, in practice, be slightly violated.)
2. You must base that estimate on the counts, not on the actual data! (This is crucial.)

The red histogram depicts the chi-squared statistics for 10,000 separate iterations, following these requirements. Sure enough, it visibly follows the $\chi^2(1)$ curve (with an acceptable amount of sampling error), as we had originally hoped.

The point of this comparison--which I hope you have seen coming--is that the correct DF to use for computing the p-values depends on many things other than dimensions of manifolds, counts of functional relationships, or the geometry of Normal variates. There is a subtle, delicate interaction between certain functional dependencies, as found in mathematical relationships among quantities, and distributions of the data, their statistics, and the estimators formed from them. Accordingly, it cannot be the case that DF is adequately explainable in terms of the geometry of multivariate normal distributions, or in terms of functional independence, or as counts of parameters, or anything else of this nature.

We are led to see, then, that "degrees of freedom" is merely a heuristic that suggests what the sampling distribution of a (t, Chi-squared, or F) statistic ought to be, but it is not dispositive. Belief that it is dispositive leads to egregious errors. (For instance, the top hit on Google when searching "chi squared goodness of fit" is a Web page from an Ivy League university that gets most of this completely wrong! In particular, a simulation based on its instructions shows that the chi-squared value it recommends as having 7 DF actually has 9 DF.)

With this more nuanced understanding, it's worthwhile to re-read the Wikipedia article in question: in its details it gets things right, pointing out where the DF heuristic tends to work and where it is either an approximation or does not apply at all.

A good account of the phenomenon illustrated here (unexpectedly high DF in Chi-squared GOF tests) appears in Volume II of Kendall & Stuart, 5th edition. I am grateful for the opportunity afforded by this question to lead me back to this wonderful text, which is full of such useful analyses.

Edit (Jan 2017)

Here is R code to produce the figure following "The standard wisdom about DF...":

```r
#
# Simulate data, one iteration per column of `x`.
#
n <- 20
n.sim <- 1e4
bins <- qnorm(seq(0, 1, 1/4))
x <- matrix(rnorm(n*n.sim), nrow=n)
#
# Compute statistics.
#
m <- colMeans(x)
s <- apply(sweep(x, 2, m), 2, sd)
counts <- apply(matrix(as.numeric(cut(x, bins)), nrow=n), 2, tabulate, nbins=4)
expectations <- mapply(function(m,s) n*diff(pnorm(bins, m, s)), m, s)
chisquared <- colSums((counts - expectations)^2 / expectations)
#
# Plot histograms of means, variances, and chi-squared stats.  The first
# two confirm all is working as expected.
#
mfrow <- par("mfrow")
par(mfrow=c(1,3))
red <- "#a04040"  # Intended to show correct distributions
blue <- "#404090" # To show the putative chi-squared distribution
hist(m, freq=FALSE)
curve(dnorm(x, sd=1/sqrt(n)), add=TRUE, col=red, lwd=2)
hist(s^2, freq=FALSE)
curve(dchisq(x*(n-1), df=n-1)*(n-1), add=TRUE, col=red, lwd=2)
hist(chisquared, freq=FALSE, breaks=seq(0, ceiling(max(chisquared)), 1/4),
     xlim=c(0, 13), ylim=c(0, 0.55), col="#c0c0ff", border="#404040")
curve(ifelse(x <= 0, Inf, dchisq(x, df=2)), add=TRUE, col=red, lwd=2)
curve(ifelse(x <= 0, Inf, dchisq(x, df=1)), add=TRUE, col=blue, lwd=2)
par(mfrow=mfrow)
```
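The block above produces only the histogram of the incorrectly computed statistics. As a complement, here is a minimal sketch, under my reading of the two requirements, of the corrected procedure behind the red histogram: for each iteration, re-estimate $(\mu, \sigma)$ by maximizing the multinomial likelihood of the bin counts alone and recompute the statistic. It reuses `n`, `bins`, and `counts` from the block above; `neg.loglik` and `chisq.grouped` are my own names, and the optimization details are just one reasonable choice.

```r
#
# Sketch (not whuber's code): grouped-data maximum likelihood, based on the counts only.
#
neg.loglik <- function(theta, ct, bins) {
  # Multinomial negative log-likelihood; sigma is parametrized on the log scale.
  p <- diff(pnorm(bins, theta[1], exp(theta[2])))
  -sum(ct * log(p))
}
chisq.grouped <- apply(counts, 2, function(ct) {
  fit <- optim(c(0, 0), neg.loglik, ct=ct, bins=bins)  # start near the truth (0, log 1)
  e <- n * diff(pnorm(bins, fit$par[1], exp(fit$par[2])))
  sum((ct - e)^2 / e)
})
hist(chisq.grouped, freq=FALSE, breaks=seq(0, ceiling(max(chisq.grouped)), 1/4),
     xlim=c(0, 13), ylim=c(0, 0.55), col="#ffc0c0", border="#404040")
curve(ifelse(x <= 0, Inf, dchisq(x, df=1)), add=TRUE, col="#404090", lwd=2)
```

With grouped-data MLEs, the simulated statistics should track the $\chi^2(1)$ curve, which is the comparison the red histogram in the answer illustrates.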
How to understand degrees of freedom?
Or simply: the number of elements in a numerical array that you're allowed to change while the value of the statistic remains unchanged. For instance, if x + y + z = 10, you can change x and y at random, but you cannot change z freely (you can change it, but not at random, so you're not free to change it; see Harvey's comment), because you would change the value of the statistic (Σ = 10). So, in this case, df = 2.
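A tiny R sketch of the same point (mine, not the answer's): once the sum is pinned at 10, only two of the three values can be chosen freely.

```r
# Choose two of the three values freely; the constraint then fixes the third.
x <- runif(1, 0, 10)
y <- runif(1, 0, 10 - x)  # arbitrary bounds, only to keep all values non-negative
z <- 10 - x - y           # no freedom left: z is forced by the constraint
c(x = x, y = y, z = z, sum = x + y + z)   # the "statistic" stays at 10
```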
How to understand degrees of freedom?
The concept is not at all difficult to make mathematically precise given a bit of general knowledge of $n$-dimensional Euclidean geometry, subspaces and orthogonal projections.

If $P$ is an orthogonal projection from $\mathbb{R}^n$ to a $p$-dimensional subspace $L$ and $x$ is an arbitrary $n$-vector then $Px$ is in $L$, $x - Px$ and $Px$ are orthogonal and $x - Px \in L^{\perp}$ is in the orthogonal complement of $L$. The dimension of this orthogonal complement, $L^{\perp}$, is $n-p$. If $x$ is free to vary in an $n$-dimensional space then $x - Px$ is free to vary in an $n-p$ dimensional space. For this reason we say that $x - Px$ has $n-p$ degrees of freedom.

These considerations are important to statistics because if $X$ is an $n$-dimensional random vector and $L$ is a model of its mean, that is, the mean vector $E(X)$ is in $L$, then we call $X-PX$ the vector of residuals, and we use the residuals to estimate the variance. The vector of residuals has $n-p$ degrees of freedom, that is, it is constrained to a subspace of dimension $n-p$.

If the coordinates of $X$ are independent and normally distributed with the same variance $\sigma^2$ then

- The vectors $PX$ and $X - PX$ are independent.
- If $E(X) \in L$, the distribution of the squared norm of the vector of residuals, $||X - PX||^2$, is a $\chi^2$-distribution with scale parameter $\sigma^2$ and another parameter that happens to be the degrees of freedom $n-p$.

A sketch of the proof of these facts is given below. The two results are central for the further development of the statistical theory based on the normal distribution. Note also that this is why the $\chi^2$-distribution has the parametrization it has. It is also a $\Gamma$-distribution with scale parameter $2\sigma^2$ and shape parameter $(n-p)/2$, but in the context above it is natural to parametrize in terms of the degrees of freedom.

I must admit that I don't find any of the paragraphs cited from the Wikipedia article particularly enlightening, but they are not really wrong or contradictory either. They say, in an imprecise and generally loose sense, that when we compute the estimate of the variance parameter, but do so based on residuals, we base the computation on a vector that is only free to vary in a space of dimension $n-p$.

Beyond the theory of linear normal models the use of the concept of degrees of freedom can be confusing. It is, for instance, used in the parametrization of the $\chi^2$-distribution whether or not there is a reference to anything that could have any degrees of freedom. When we consider statistical analysis of categorical data there can be some confusion about whether the "independent pieces" should be counted before or after a tabulation. Furthermore, for constraints, even for normal models, that are not subspace constraints, it is not obvious how to extend the concept of degrees of freedom. Various suggestions exist, typically under the name of effective degrees of freedom.

Before any other usages and meanings of degrees of freedom are considered, I strongly recommend becoming confident with it in the context of linear normal models. A reference dealing with this model class is A First Course in Linear Model Theory, and there are additional references in the preface of the book to other classical books on linear models.

Proof of the results above: Let $\xi = E(X)$, note that the variance matrix is $\sigma^2 I$ and choose an orthonormal basis $z_1, \ldots, z_p$ of $L$ and an orthonormal basis $z_{p+1}, \ldots, z_n$ of $L^{\perp}$. Then $z_1, \ldots, z_n$ is an orthonormal basis of $\mathbb{R}^n$. Let $\tilde{X}$ denote the $n$-vector of the coefficients of $X$ in this basis, that is
$$\tilde{X}_i = z_i^T X.$$
This can also be written as $\tilde{X} = Z^T X$ where $Z$ is the orthogonal matrix with the $z_i$'s in the columns. Then we use the fact that $\tilde{X}$ has a normal distribution with mean $Z^T \xi$ and, because $Z$ is orthogonal, variance matrix $\sigma^2 I$. This follows from general linear transformation results for the normal distribution. The basis was chosen so that the coefficients of $PX$ are $\tilde{X}_i$ for $i= 1, \ldots, p$, and the coefficients of $X - PX$ are $\tilde{X}_i$ for $i= p+1, \ldots, n$. Since the coefficients are uncorrelated and jointly normal, they are independent, and this implies that
$$PX = \sum_{i=1}^p \tilde{X}_i z_i$$
and
$$X - PX = \sum_{i=p+1}^n \tilde{X}_i z_i$$
are independent. Moreover,
$$||X - PX||^2 = \sum_{i=p+1}^n \tilde{X}_i^2.$$
If $\xi \in L$ then $E(\tilde{X}_i) = z_i^T \xi = 0$ for $i = p +1, \ldots, n$ because then $z_i \in L^{\perp}$ and hence $z_i \perp \xi$. In this case $||X - PX||^2$ is the sum of squares of $n-p$ independent $N(0, \sigma^2)$-distributed random variables, and its distribution, by definition, is a $\chi^2$-distribution with scale parameter $\sigma^2$ and $n-p$ degrees of freedom.
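A small simulation sketch (mine, not the answer's) of the two results above, taking $L$ to be the column space of an arbitrary design matrix: the squared norm of the residual vector, divided by $\sigma^2$, behaves like a $\chi^2$ variable with $n-p$ degrees of freedom.

```r
# PX and X - PX are independent, and ||X - PX||^2 / sigma^2 ~ chi-squared(n - p)
# when E(X) lies in L.  Here L is spanned by the columns of an arbitrary matrix Z.
set.seed(1)
n <- 12; p <- 3; sigma <- 2
Z <- matrix(rnorm(n * p), n, p)          # any matrix whose columns span L
P <- Z %*% solve(crossprod(Z)) %*% t(Z)  # orthogonal projection onto L
xi <- Z %*% c(1, -2, 0.5)                # a mean vector inside L
rss <- replicate(1e4, {
  X <- xi + rnorm(n, sd = sigma)
  sum(((diag(n) - P) %*% X)^2)           # squared norm of the residual vector
})
# Compare with the chi-squared(n - p) distribution after scaling by sigma^2:
hist(rss / sigma^2, freq = FALSE, breaks = 50)
curve(dchisq(x, df = n - p), add = TRUE, lwd = 2)
```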
How to understand degrees of freedom?
It's really no different from the way the term "degrees of freedom" works in any other field. For example, suppose you have four variables: the length, the width, the area, and the perimeter of a rectangle. Do you really know four things? No, because there are only two degrees of freedom. If you know the length and the width, you can derive the area and the perimeter. If you know the length and the area, you can derive the width and the perimeter. If you know the area and the perimeter, you can derive the length and the width (up to rotation). If you have all four, you can either say that the system is consistent (all of the variables agree with each other) or inconsistent (no rectangle could actually satisfy all of the conditions). A square is a rectangle with a degree of freedom removed; if you know any side of a square, or its perimeter, or its area, you can derive all of the others, because there's only one degree of freedom.

In statistics, things get more fuzzy, but the idea is still the same. If all of the data that you're using as the input for a function are independent variables, then you have as many degrees of freedom as you have inputs. But if they have dependence in some way, such that if you had n - k inputs you could figure out the remaining k, then you've actually only got n - k degrees of freedom. And sometimes you need to take that into account, lest you convince yourself that the data are more reliable or have more predictive power than they really do, by counting more data points than you really have independent bits of data.

(Taken from a post at http://www.reddit.com/r/math/comments/9qbut/could_someone_explain_to_me_what_degrees_of/c0dxtbq?context=3.)

Moreover, all three definitions are almost trying to give the same message.
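A small sketch (mine, not the answer's) of the rectangle claim: given the area $A$ and the perimeter $P$, the two sides are the roots of $t^2 - (P/2)\,t + A = 0$, so two numbers really do determine all four.

```r
# Recover length and width from area and perimeter: they are the roots of
# t^2 - (P/2) t + A = 0, since length + width = P/2 and length * width = A.
sides_from_area_perimeter <- function(A, P) {
  s <- P / 2                      # length + width
  disc <- s^2 - 4 * A             # discriminant; negative => no such rectangle exists
  sort((s + c(-1, 1) * sqrt(disc)) / 2)
}
sides_from_area_perimeter(A = 12, P = 14)   # recovers sides 3 and 4
```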
How to understand degrees of freedom?
I really like the first sentence from The Little Handbook of Statistical Practice, in its Degrees of Freedom chapter:

> One of the questions an instructor dreads most from a mathematically unsophisticated audience is, "What exactly is degrees of freedom?"

I think you can get a really good understanding of degrees of freedom from reading this chapter.
How to understand degrees of freedom?
Wikipedia asserts that the degrees of freedom of a random vector can be interpreted as the dimension of a vector subspace. I want to go step-by-step, very basically, through this as a partial answer and elaboration on the Wikipedia entry.

The example proposed is that of a random vector corresponding to the measurements of a continuous variable for different subjects, expressed as a vector extending from the origin, $[a\,b\,c]^T$. Its orthogonal projection onto the vector $[1\,1\,1]^T$ results in the vector whose components all equal the mean of the measurements ($\bar{x}=\frac{1}{3}(a+b+c)$), i.e. $[\bar x \, \bar x \, \bar x]^T = \bar x\,[1\,1\,1]^T$. This projection onto the subspace spanned by the vector of ones has $1\,\text{degree of freedom}$. The residual vector (distance from the mean) is the least-squares projection onto the $(n − 1)$-dimensional orthogonal complement of this subspace, and has $n − 1\,\text{degrees of freedom}$, $n$ being the total number of components of the vector (in our case $3$, since we are in $\mathbb{R}^3$ in the example). This can be simply proven by obtaining the dot product of $[\bar{x}\,\bar{x}\,\bar{x}]^T$ with the difference between $[a\,b\,c]^T$ and $[\bar{x}\,\bar{x}\,\bar{x}]^T$:

$$ [\bar{x}\, \bar{x}\,\bar{x}]\, \begin{bmatrix} a-\bar{x}\\b-\bar{x}\\c-\bar{x}\end{bmatrix}=$$ $$= \bigg[\tiny\frac{(a+b+c)}{3}\, \bigg(a-\frac{(a+b+c)}{3}\bigg)\bigg]+ \bigg[\tiny\frac{(a+b+c)}{3} \,\bigg(b-\frac{(a+b+c)}{3}\bigg)\bigg]+ \bigg[\tiny\frac{(a+b+c)}{3} \,\bigg(c-\frac{(a+b+c)}{3}\bigg)\bigg]$$ $$=\tiny \frac{(a+b+c)}{3}\bigg[ \bigg(\tiny a-\frac{(a+b+c)}{3}\bigg)+ \bigg(b-\frac{(a+b+c)}{3}\bigg)+ \bigg(c-\frac{(a+b+c)}{3}\bigg)\bigg]$$ $$= \tiny \frac{(a+b+c)}{3}\bigg[\tiny \frac{1}{3} \bigg(\tiny 3a-(a+b+c)+ 3b-(a+b+c)+3c-(a+b+c)\bigg)\bigg]$$ $$=\tiny\frac{(a+b+c)}{3}\bigg[\tiny\frac{1}{3} (3a-3a+ 3b-3b+3c-3c)\bigg]\large= 0$$

And this relationship extends to any point in the plane orthogonal to $[\bar{x}\,\bar{x}\,\bar{x}]^T$. This concept is important in understanding why $\frac 1 {\sigma^2} \Big((X_1-\bar X)^2 + \cdots + (X_n - \bar X)^2 \Big) \sim \chi^2_{n-1}$, a step in the derivation of the t-distribution (here and here).

Let's take the point $[35\,50\,80]^T$, corresponding to three observations. The mean is $55$, and the vector $[55\,\,55\,\,55]^T$ is the normal (orthogonal) to a plane, $55x + 55y + 55z = D$. Plugging the point coordinates into the plane equation, $D = 9075$. Now we can choose any other point in this plane, and the mean of its coordinates is going to be $55$, geometrically corresponding to its projection onto the vector $[1\,\,1\,\,1]^T$. Hence for every mean value (in our example, $55$) we can choose an infinite number of pairs of coordinates in $\mathbb{R}^2$ without restriction ($2\,\text{degrees of freedom}$); yet, since the plane is in $\mathbb{R}^3$, the third coordinate will be determined by the equation of the plane (or, geometrically, by the orthogonal projection of the point onto $[55\,\,55\,\,55]^T$).

Here is a representation of three points (in white) lying on the plane (cerulean blue) orthogonal to $[55\,\,55\,\,55]^T$ (arrow): $[35\,\,50\,\,80]^T$, $[80\,\,80\,\,5]^T$ and $[90\,\,15\,\,60]^T$, all of them on the plane (subspace with $2\,\text{df}$), each with a mean of its components of $55$, and with an orthogonal projection onto $[1\,\,1\,\,1]^T$ (subspace with $1\,\text{df}$) equal to $[55\,\,55\,\,55]^T$:
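A quick numerical check (mine, not part of the answer) of the decomposition just described, using the point $[35\,\,50\,\,80]^T$:

```r
x    <- c(35, 50, 80)
ones <- c(1, 1, 1)
proj  <- mean(x) * ones   # projection onto span{(1,1,1)}: (55, 55, 55), 1 df
resid <- x - proj         # residual vector, confined to the 2-df orthogonal complement
sum(proj * resid)         # 0: the two pieces are orthogonal
sum(55 * x)               # 55*35 + 55*50 + 55*80 = 9075, the constant D of the plane
```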
How to understand degrees of freedom?
In my classes, I use one "simple" situation that might help you wonder and perhaps develop a gut feeling for what a degree of freedom may mean. It is kind of a "Forrest Gump" approach to the subject, but it is worth the try.

Consider you have 10 independent observations $X_1, X_2, \ldots, X_{10}\sim N(\mu,\sigma^2)$ that came right from a normal population whose mean $\mu$ and variance $\sigma^2$ are unknown. Your observations collectively bring you information about both $\mu$ and $\sigma^2$. After all, your observations tend to be spread around one central value, which ought to be close to the actual and unknown value of $\mu$; likewise, if $\mu$ is very high or very low, then you can expect to see your observations gather around a very high or very low value, respectively. One good "substitute" for $\mu$ (in the absence of knowledge of its actual value) is $\bar X$, the average of your observations. Also, if your observations are very close to one another, that is an indication that you can expect $\sigma^2$ to be small; likewise, if $\sigma^2$ is very large, then you can expect to see wildly different values for $X_1$ to $X_{10}$.

If you were to bet your week's wage on what the actual values of $\mu$ and $\sigma^2$ might be, you would need to choose a pair of values on which to bet your money. Let's not think of anything as dramatic as losing your paycheck unless you guess $\mu$ correctly to its 200th decimal place. Nope. Let's think of some sort of prize system in which the closer you guess $\mu$ and $\sigma^2$, the more you get rewarded. In some sense, your better, more informed, and more polite guess for $\mu$'s value could be $\bar X$. In that sense, you estimate that $\mu$ must be some value around $\bar X$. Similarly, one good "substitute" for $\sigma^2$ (not required for now) is $S^2$, your sample variance, which makes a good estimate for $\sigma^2$.

If you were to believe that those substitutes are the actual values of $\mu$ and $\sigma^2$, you would probably be wrong, because very slim are the chances that you were so lucky that your observations coordinated themselves to give you the gift of $\bar X$ being equal to $\mu$ and $S^2$ equal to $\sigma^2$. Nah, probably it didn't happen. But you could be at different levels of wrong, varying from a bit wrong to really, really, really miserably wrong (a.k.a., "Bye-bye, paycheck; see you next week!").

Ok, let's say that you took $\bar X$ as your guess for $\mu$. Consider just two scenarios: $S^2=2$ and $S^2=20,000,000$. In the first, your observations sit pretty and close to one another. In the latter, your observations vary wildly. In which scenario should you be more concerned about your potential losses? If you thought of the second one, you're right. Having an estimate of $\sigma^2$ quite reasonably changes your confidence in your bet, for the larger $\sigma^2$ is, the more widely you can expect $\bar X$ to vary.

But, beyond information about $\mu$ and $\sigma^2$, your observations also carry some amount of purely random fluctuation that is informative about neither $\mu$ nor $\sigma^2$. How can you notice it? Well, let's assume, for the sake of argument, that there is a God and that He has enough spare time to give Himself the frivolity of telling you specifically the real (and so far unknown) values of both $\mu$ and $\sigma^2$. And here is the annoying plot twist of this lysergic tale: He tells it to you after you placed your bet.
Perhaps to enlighten you, perhaps to prepare you, perhaps to mock you. How could you know? Well, that makes the information about $\mu$ and $\sigma^2$ contained in your observations quite useless now. Your observations' central position $\bar X$ and variance $S^2$ are no longer of any help in getting closer to the actual values of $\mu$ and $\sigma^2$, for you already know them.

One of the benefits of your good acquaintance with God is that you actually know by how much you failed to guess $\mu$ correctly by using $\bar X$, that is, your estimation error $(\bar X - \mu)$. Well, since $X_i\sim N(\mu,\sigma^2)$, then $\bar X\sim N(\mu,\sigma^2/10)$ (trust me on that if you will), also $(\bar X - \mu)\sim N(0,\sigma^2/10)$ (ok, trust me on that one too) and, finally,
$$ \frac{\bar X - \mu}{\sigma/\sqrt{10}} \sim N(0,1) $$
(guess what? trust me on that one as well), which carries absolutely no information about $\mu$ or $\sigma^2$.

You know what? If you took any of your individual observations as a guess for $\mu$, your estimation error $(X_i-\mu)$ would be distributed as $N(0,\sigma^2)$. Well, between estimating $\mu$ with $\bar X$ and with any single $X_i$, choosing $\bar X$ would be the better business, because $Var(\bar X) = \sigma^2/10 < \sigma^2 = Var(X_i)$, so $\bar X$ is less prone to stray from $\mu$ than an individual $X_i$ is. Anyway, $(X_i-\mu)/\sigma\sim N(0,1)$ is also absolutely uninformative about both $\mu$ and $\sigma^2$.

"Will this tale ever end?" you may be thinking. You may also be thinking, "Is there any more random fluctuation that is uninformative about $\mu$ and $\sigma^2$?" [I prefer to think that you are thinking of the latter.] Yes, there is! The square of your estimation error for $\mu$ with $X_i$, divided by $\sigma^2$,
$$ \frac{(X_i-\mu)^2}{\sigma^2} = \left(\frac{X_i-\mu}{\sigma}\right)^2 \sim \chi^2 $$
has a Chi-squared distribution, which is the distribution of the square $Z^2$ of a standard Normal $Z\sim N(0,1)$, and which, I am sure you noticed, has absolutely no information about either $\mu$ or $\sigma^2$, but conveys information about the variability you should expect to face. That is a very well known distribution that arises naturally from the very scenario of your gambling problem, for every single one of your ten observations and also for your mean:
$$ \frac{(\bar X-\mu)^2}{\sigma^2/10} = \left(\frac{\bar X-\mu}{\sigma/\sqrt{10}}\right)^2 = \left(N(0,1)\right)^2 \sim\chi^2 $$
and also from the gathering of your ten observations' variation:
$$ \sum_{i=1}^{10} \frac{(X_i-\mu)^2}{\sigma^2} =\sum_{i=1}^{10} \left(\frac{X_i-\mu}{\sigma}\right)^2 =\sum_{i=1}^{10} \left(N(0,1)\right)^2 =\sum_{i=1}^{10} \chi^2. $$
Now that last guy doesn't have a Chi-squared distribution with one degree of freedom, because it is the sum of ten of those Chi-squared variables, all of them independent from one another (because so are $X_1, \ldots, X_{10}$). Each one of those single Chi-squared variables is one contribution to the amount of random variability you should expect to face, with roughly the same amount of contribution to the sum. The value of each contribution is not mathematically equal to the other nine, but all of them have the same expected behavior in distribution. In that sense, they are somehow symmetric. Each one of those Chi-squares is one contribution to the amount of pure, random variability you should expect in that sum. If you had 100 observations, the sum above would be expected to be bigger just because it would have more sources of contributions.
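A quick simulation sketch (mine, not the answer's) of that last display: with $\mu$ and $\sigma$ known, each squared standardized error is a $\chi^2$ with one degree of freedom, and their sum over the ten observations behaves like a $\chi^2$ with 10 degrees of freedom. The particular values of $\mu$ and $\sigma$ below are arbitrary.

```r
set.seed(7)
mu <- 5; sigma <- 3
one.term <- replicate(1e4, ((rnorm(1, mu, sigma) - mu) / sigma)^2)    # chi-squared(1)
ten.sum  <- replicate(1e4, sum(((rnorm(10, mu, sigma) - mu) / sigma)^2))
par(mfrow = c(1, 2))
hist(one.term, freq = FALSE, breaks = 50, main = "one observation")
curve(dchisq(x, df = 1), add = TRUE, lwd = 2)
hist(ten.sum, freq = FALSE, breaks = 50, main = "sum over 10 observations")
curve(dchisq(x, df = 10), add = TRUE, lwd = 2)
```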
Each of those "sources of contributions" with the same behavior can be called degree of freedom. Now take one or two steps back, re-read the previous paragraphs if needed to accommodate the sudden arrival of your quested-for degree of freedom. Yep, each degree of freedom can be thought of as one unit of variability that is obligatorily expected to occur and that brings nothing to the improvement of guessing of $\mu$ or $\sigma^2$. The thing is, you start to count on the behavior of those 10 equivalent sources of variability. If you had 100 observations, you would have 100 independent equally-behaved sources of strictly random fluctuation to that sum. That sum of 10 Chi-squares gets called a Chi-squared distributions with 10 degrees of freedom from now on, and written $\chi^2_{10}$. We can describe what to expect from it starting from its probability density function, that can be mathematically derived from the density from that single Chi-squared distribution (from now on called Chi-squared distribution with one degree of freedom and written $\chi^2_1$), that can be mathematically derived from the density of the normal distribution. "So what?" --- you might be thinking --- "That is of any good only if God took the time to tell me the values of $\mu$ and $\sigma^2$, of all the things He could tell me!" Indeed, if God Almighty were too busy to tell you the values of $\mu$ and $\sigma^2$, you would still have that 10 sources, that 10 degrees of freedom. Things start to get weird (Hahahaha; only now!) when you rebel against God and try and get along all by yourself, without expecting Him to patronize you. You have $\bar X$ and $S^2$, estimators for $\mu$ and $\sigma^2$. You can find your way to a safer bet. You could consider calculating the sum above with $\bar X$ and $S^2$ in the places of $\mu$ and $\sigma^2$: $$ \sum_{i=1}^{10} \frac{(X_i-\bar X)^2}{S^2/10} =\sum_{i=1}^{10} \left(\frac{X_i-\bar X}{S/\sqrt{10}}\right)^2, $$ but that is not the same as the original sum. "Why not?" The term inside the square of both sums are very different. For instance, it is unlikely but possible that all your observations end up being larger than $\mu$, in which case $(X_i-\mu) > 0$, which implies $\sum_{i=1}^{10}(X_i-\mu) > 0$, but, by its turn, $\sum_{i=1}^{10}(X_i-\bar X) = 0$, because $\sum_{i=1}^{10}X_i-10 \bar X =10 \bar X - 10 \bar X = 0$. Worse, you can prove easily (Hahahaha; right!) that $\sum_{i=1}^{10}(X_i-\bar X)^2 \le \sum_{i=1}^{10}(X_i-\mu)^2$ with strict inequality when at least two observations are different (which is not unusual). "But wait! There's more!" $$ \frac{X_i-\bar X}{S/\sqrt{10}} $$ doesn't have standard normal distribution, $$ \frac{(X_i-\bar X)^2}{S^2/10} $$ doesn't have Chi-squared distribution with one degree of freedom, $$ \sum_{i=1}^{10} \frac{(X_i-\bar X)^2}{S^2/10} $$ doesn't have Chi-squared distribution with 10 degrees of freedom $$ \frac{\bar X-\mu}{S/\sqrt{10}} $$ doesn't have standard normal distribution. "Was it all for nothing?" No way. Now comes the magic! 
Note that $$ \begin{aligned} \sum_{i=1}^{10} \frac{(X_i-\bar X)^2}{\sigma^2} &=\sum_{i=1}^{10} \frac{[(X_i-\mu)-(\bar X-\mu)]^2}{\sigma^2} =\sum_{i=1}^{10} \frac{(X_i-\mu)^2-2(X_i-\mu)(\bar X-\mu)+(\bar X-\mu)^2}{\sigma^2} \\ &=\sum_{i=1}^{10} \frac{(X_i-\mu)^2}{\sigma^2} -\frac{2(\bar X-\mu)\sum_{i=1}^{10}(X_i-\mu)}{\sigma^2} +\frac{10(\bar X-\mu)^2}{\sigma^2} \\ &=\sum_{i=1}^{10} \frac{(X_i-\mu)^2}{\sigma^2} -\frac{20(\bar X-\mu)^2}{\sigma^2} +\frac{10(\bar X-\mu)^2}{\sigma^2} \\ &=\sum_{i=1}^{10} \frac{(X_i-\mu)^2}{\sigma^2} -10\,\frac{(\bar X-\mu)^2}{\sigma^2} =\sum_{i=1}^{10} \frac{(X_i-\mu)^2}{\sigma^2} -\frac{(\bar X-\mu)^2}{\sigma^2/10} \end{aligned} $$ (the cross term collapses because $\sum_{i=1}^{10}(X_i-\mu)=10(\bar X-\mu)$, so it equals $20(\bar X-\mu)^2/\sigma^2$), or, equivalently, $$ \sum_{i=1}^{10} \frac{(X_i-\mu)^2}{\sigma^2} =\sum_{i=1}^{10} \frac{(X_i-\bar X)^2}{\sigma^2} +\frac{(\bar X-\mu)^2}{\sigma^2/10}. $$

Now we get back to those known faces. The first term has a Chi-squared distribution with 10 degrees of freedom and the last term has a Chi-squared distribution with one degree of freedom(!). We simply split a Chi-square with 10 independent, equally-behaved sources of variability into two parts, both positive: one part is a Chi-square with one source of variability, and the other we can prove (leap of faith? win by W.O.?) to be also a Chi-square with 9 (= 10 - 1) independent, equally-behaved sources of variability, with the two parts independent of one another. This is already good news, since now we have its distribution.

Alas, it uses $\sigma^2$, to which we have no access (recall that God is amusing Himself watching our struggle). Well, $$ S^2=\frac{1}{10-1}\sum_{i=1}^{10} (X_i-\bar X)^2, $$ so $$ \sum_{i=1}^{10} \frac{(X_i-\bar X)^2}{\sigma^2} =\frac{\sum_{i=1}^{10} (X_i-\bar X)^2}{\sigma^2} =\frac{(10-1)S^2}{\sigma^2} \sim\chi^2_{(10-1)} $$ therefore $$ \frac{\bar X-\mu}{S/\sqrt{10}} =\frac{\frac{\bar X-\mu}{\sigma/\sqrt{10}}}{\frac{S}{\sigma}} =\frac{\frac{\bar X-\mu}{\sigma/\sqrt{10}}}{\sqrt{\frac{S^2}{\sigma^2}}} =\frac{\frac{\bar X-\mu}{\sigma/\sqrt{10}}}{\sqrt{\frac{\frac{(10-1)S^2}{\sigma^2}}{(10-1)}}} =\frac{N(0,1)}{\sqrt{\frac{\chi^2_{(10-1)}}{(10-1)}}}, $$ which is a distribution that is not the standard normal, but whose density can be derived from the densities of the standard normal and the Chi-squared with $(10-1)$ degrees of freedom.

One very, very smart guy did that math[^1] at the beginning of the 20th century and, as an unintended consequence, he made his boss the absolute world leader in the stout beer industry. I am talking about William Sealy Gosset (a.k.a. Student; yes, that Student, from the $t$ distribution) and Saint James's Gate Brewery (a.k.a. Guinness Brewery), of which I am a devotee. [^1]: @whuber pointed out in the comments below that Gosset did not do the math, but guessed instead! I really don't know which feat is more surprising for that time.

That, my dear friend, is the origin of the $t$ distribution with $(10-1)$ degrees of freedom: the ratio of a standard normal and the square root of an independent Chi-square divided by its degrees of freedom, which, in an unpredictable turn of the tides, winds up describing the expected behavior of the estimation error you incur when using the sample average $\bar X$ to estimate $\mu$ and using $S^2$ to estimate the variability of $\bar X$. There you go. With an awful lot of technical detail grossly swept under the rug, but no longer depending solely on God's intervention before you dangerously bet your whole paycheck.
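The two facts derived above are also easy to see by simulation. This is a hedged sketch, assuming NumPy and SciPy are available and with arbitrary illustrative values: $(10-1)S^2/\sigma^2$ behaves like a Chi-squared variable with 9 degrees of freedom, and the studentized mean behaves like Student's $t$ with 9 degrees of freedom.

```python
# Sketch: (n-1)S^2/sigma^2 ~ chi-squared(n-1) and (Xbar - mu)/(S/sqrt(n)) ~ t(n-1).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu, sigma, n, reps = 5.0, 2.0, 10, 100_000            # arbitrary illustrative values
x = rng.normal(mu, sigma, size=(reps, n))

s2 = x.var(axis=1, ddof=1)                             # S^2 with the 1/(n-1) divisor
q = (n - 1) * s2 / sigma ** 2
t = (x.mean(axis=1) - mu) / (np.sqrt(s2) / np.sqrt(n))

print(q.mean())                                        # ~9, the degrees of freedom
print(np.quantile(q, 0.95), stats.chi2(n - 1).ppf(0.95))        # both ~16.92
print(np.quantile(np.abs(t), 0.95), stats.t(n - 1).ppf(0.975))  # both ~2.26
```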
How to understand degrees of freedom?
This particular issue is quite frustrating for students in statistics courses, since they often cannot get a straight answer on exactly what a degree-of-freedom is defined to be. I will try to clear that up here.

Suppose we have a random vector $\mathbf{x} \in \mathbb{R}^n$ and we form a new random vector $\mathbf{t} = T(\mathbf{x})$ via the linear function $T$. Formally, the degrees-of-freedom of $\mathbf{t}$ is the dimension of the space of allowable values for this vector, which is: $$DF \equiv \dim \mathscr{T} \equiv \dim \{ \mathbf{t} = T(\mathbf{x}) | \mathbf{x} \in \mathbb{R}^n \}.$$ The initial random vector $\mathbf{x}$ has an allowable space of dimension $n$, so it has $n$ degrees of freedom. Often the function $T$ will reduce the dimension of the allowable space of outcomes, and so $\mathbf{t}$ may have fewer degrees-of-freedom than $\mathbf{x}$. For example, in an answer to a related question you can see this formal definition of the degrees-of-freedom being used to explain Bessel's correction in the sample variance formula. In that particular case, transforming an initial sample to obtain its deviations from the sample mean leads to a deviation vector that has $n-1$ degrees-of-freedom (i.e., it is a vector in an allowable space with dimension $n-1$).

When you apply this formal definition to statistical problems, you will usually find that the imposition of a single "constraint" on the random vector (via a linear equation on that vector) reduces the dimension of its space of allowable values by one, and thus reduces the degrees-of-freedom by one. As such, you will find that the above formal definition corresponds with the informal explanations you have been given. In undergraduate courses on statistics, you will generally find a lot of hand-waving and informal explanation of degrees-of-freedom, often via analogies or examples. The reason for this is that the formal definition requires an understanding of vector algebra and the geometry of vector spaces, which may be lacking in introductory statistics courses at an undergraduate level.
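A small numerical sketch of this definition, assuming NumPy is available: the map from a sample to its deviations from the sample mean is linear, and the dimension of its image (the degrees of freedom of the deviation vector) is $n-1$.

```python
# Sketch: the deviation map x -> x - xbar is linear with an (n-1)-dimensional image.
import numpy as np

n = 5
C = np.eye(n) - np.ones((n, n)) / n      # centering matrix: C @ x gives the deviations
print(np.linalg.matrix_rank(C))           # n - 1 = 4, the d.f. of the deviation vector

x = np.random.default_rng(2).normal(size=n)
d = C @ x
print(np.isclose(d.sum(), 0.0))           # True: the single linear constraint sum(d) = 0
```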
How to understand degrees of freedom?
You can see the degrees of freedom as the number of observations minus the number of necessary relations among those observations. For example, if you have a sample of $n$ independent observations $X_1,\dots,X_n$ from a normal distribution with variance $\sigma^2$, then the random variable $\frac{1}{\sigma^2}\sum_{i=1}^n (X_i-\overline{X}_n)^2\sim \chi^2_{n-1}$, where $\overline{X}_n = \frac{1}{n}\sum_{i=1}^n X_i$. The degrees of freedom here are $n-1$ because there is one necessary relation among the deviations used in the sum: $\sum_{i=1}^n (X_i-\overline{X}_n) = 0$, which follows from the definition $\overline{X}_n = \frac{1}{n}\sum_{i=1}^n X_i$. For more information see this
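A quick simulation check of that claim, as a sketch with NumPy and SciPy assumed and arbitrary $n$ and $\sigma$: a Kolmogorov-Smirnov test typically does not distinguish the simulated statistic from a $\chi^2_{n-1}$ distribution.

```python
# Sketch: the scaled sum of squared deviations from the sample mean matches chi^2_{n-1}.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, reps, sigma = 8, 20_000, 1.5                       # arbitrary illustrative values
x = rng.normal(0.0, sigma, size=(reps, n))
stat = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1) / sigma ** 2

print(stats.kstest(stat, stats.chi2(n - 1).cdf))      # typically a large p-value
```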
How to understand degrees of freedom?
The clearest "formal" definition of degrees-of-freedom is that it is the dimension of the space of allowable values for a random vector. This generally arises in a context where we have a sample vector $\mathbf{x} \in \mathbb{R}^n$ and we form a new random vector $\mathbf{t} = T(\mathbf{x})$ via the linear function $T$. Formally, the degrees-of-freedom of $\mathbf{t}$ is the dimension of the space of allowable values for this vector, which is: $$DF \equiv \dim \mathscr{T} \equiv \dim \{ \mathbf{t} = T(\mathbf{x}) | \mathbf{x} \in \mathbb{R}^n \}.$$ If we represent this linear transformation by the matrix transformation $T(\mathbf{x}) = \mathbf{T} \mathbf{x}$ then we have: $$\begin{aligned} DF &= \dim \{ \mathbf{t} = T(\mathbf{x}) | \mathbf{x} \in \mathbb{R}^n \} \\[6pt] &= \dim \{ \mathbf{T} \mathbf{x} | \mathbf{x} \in \mathbb{R}^n \} \\[6pt] &= \text{rank} \ \mathbf{T} \\[6pt] &= n - \dim \text{Ker} \ \mathbf{T}, \\[6pt] \end{aligned}$$ where the last step follows from the rank-nullity theorem. This means that when we transform $\mathbf{x}$ by the linear transformation $T$ we lose degrees-of-freedom equal to the dimension of the kernel (null space) of $\mathbf{T}$.

In statistical problems, there is a close relationship between the eigenvalues of $\mathbf{T}$ and the loss of degrees-of-freedom from the transformation. Often the loss of degrees-of-freedom is equal to the number of zero eigenvalues of the transformation matrix $\mathbf{T}$. For example, in this answer we see that Bessel's correction to the sample variance, adjusting for the degrees-of-freedom of the vector of deviations from the mean, is closely related to the eigenvalues of the centering matrix. An identical result occurs in higher dimensions in linear regression analysis. In other statistical problems, similar relationships occur between the eigenvalues of the transformation matrix and the loss of degrees-of-freedom.

The above result also formalises the notion that one loses a degree-of-freedom for each "constraint" imposed on the observable vector of interest. Thus, in simple univariate sampling problems, when looking at the sample variance, one loses a degree-of-freedom from estimating the mean. In linear regression models, when looking at the MSE, one loses a degree-of-freedom for each model coefficient that was estimated.
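As a sketch of the eigenvalue remark, assuming NumPy is available: the centering matrix has exactly one zero eigenvalue, so its rank, and hence the degrees of freedom of the centered vector, is $n-1$.

```python
# Sketch: eigenvalues of the centering matrix are one 0 and (n-1) ones, so rank = n - dim Ker.
import numpy as np

n = 6
C = np.eye(n) - np.ones((n, n)) / n
eig = np.sort(np.linalg.eigvalsh(C))
print(np.round(eig, 10))                  # [0, 1, 1, 1, 1, 1]
print(np.linalg.matrix_rank(C))           # 5 = n - 1
```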
How to understand degrees of freedom?
An intuitive explanation of degrees of freedom is that they represent the number of independent pieces of information available in the data for estimating a parameter (i.e., unknown quantity) of interest. As an example, in a simple linear regression model of the form: $$ Y_i=\beta_0 + \beta_1\cdot X_i + \epsilon_i,\quad i=1,\ldots, n $$ where the $\epsilon_i$'s represent independent normally distributed error terms with mean 0 and standard deviation $\sigma$, we use 1 degree of freedom to estimate the intercept $\beta_0$ and 1 degree of freedom to estimate the slope $\beta_1$. Since we started out with $n$ observations and used up 2 degrees of freedom (i.e., two independent pieces of information), we are left with $n-2$ degrees of freedom (i.e., $n-2$ independent pieces of information) available for estimating the error standard deviation $\sigma$.
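A minimal sketch of that accounting, assuming NumPy is available and with arbitrary true parameter values: fit the two regression coefficients by least squares and divide the residual sum of squares by $n-2$ to estimate $\sigma$.

```python
# Sketch: two d.f. are spent on the intercept and slope, leaving n-2 for estimating sigma.
import numpy as np

rng = np.random.default_rng(4)
n, beta0, beta1, sigma = 30, 1.0, 2.0, 0.5            # arbitrary illustrative values
x = rng.uniform(0, 10, n)
y = beta0 + beta1 * x + rng.normal(0, sigma, n)

X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef

sigma2_hat = resid @ resid / (n - 2)                  # divide by the residual d.f.
print(coef, np.sqrt(sigma2_hat))                      # estimates of (beta0, beta1) and of sigma
```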
How to understand degrees of freedom?
For me, the first explanation I understood was: if you know some statistical summary, such as the mean or the variance, how many data values do you need to know before you can determine the value of every remaining one? This is the same idea as in aL3xa's answer, but without giving any data point a special role, and it is close to the third case given in that answer. In these terms, the same example would be: if you know the mean of the data, you only need to know the values of all but one data point in order to know the values of all of them; the last one is pinned down by the mean, so it is not free.
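A tiny illustration of that last sentence, as a sketch with arbitrary numbers and NumPy assumed: given the mean and all but one value, the remaining value can be computed, so it is not free.

```python
# Sketch: with the mean known, any n-1 values determine the last one.
import numpy as np

x = np.array([3.0, 7.0, 1.0, 9.0])        # arbitrary data
m = x.mean()
known = x[:-1]                            # pretend the last value is unknown
last = len(x) * m - known.sum()           # recovered from the mean and the other n-1 values
print(last, x[-1])                        # 9.0 9.0
```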
How to understand degrees of freedom?
Think of it this way. Variances are additive when independent. For example, suppose we are throwing darts at a board and we measure the standard deviations of the $x$ and $y$ displacements from the exact center of the board. Then $V_{x,y}=V_x+V_y$. Since $V_x=SD_x^2$ (and likewise for $y$), taking the square root of the $V_{x,y}$ formula gives the distance formula for orthogonal coordinates, $SD_{x,y}=\sqrt{SD_x^2+SD_y^2}$. Now all we have to show is that the standard deviation is a representative measure of displacement away from the center of the dart board.

Since $SD_x=\sqrt{\dfrac{\sum_{i=1}^n(x_i-\bar{x})^2}{n-1}}$, we have a ready means of discussing df. Note that when $n=1$, then $x_1-\bar{x}=0$ and the ratio $\dfrac{\sum_{i=1}^n(x_i-\bar{x})^2}{n-1}$ is the indeterminate form $\dfrac{0}{0}$. In other words, there is no deviation to be had between one dart's $x$-coordinate and itself. The first time we have a deviation is for $n=2$, and there is really only one of them: the two deviations are duplicates, because the squared distance from $x_1$ to $\bar{x}=\dfrac{x_1+x_2}{2}$ equals the squared distance from $x_2$ to $\bar{x}$, since $\bar{x}$ is the midpoint (the average) of $x_1$ and $x_2$. In general, for $n$ squared distances we remove 1 because $\bar{x}$ depends on all $n$ observations. Thus $n-1$ represents the degrees of freedom: when it is divided into the sum of those squared distances, it normalizes for the number of unique (free) deviations and yields an expected squared distance.
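A short simulation sketch of the two ingredients above, assuming NumPy is available and with arbitrary spreads: variances of independent $x$ and $y$ dart displacements add, and dividing the summed squared deviations by $n-1$ (rather than $n$) gives an estimator whose average matches the true variance.

```python
# Sketch: additivity of variances and the effect of the n-1 divisor.
import numpy as np

rng = np.random.default_rng(5)
reps, n = 200_000, 2
x = rng.normal(0, 1.0, reps)                  # horizontal displacement, Var = 1
y = rng.normal(0, 2.0, reps)                  # vertical displacement, Var = 4
r2 = x ** 2 + y ** 2
print(r2.mean())                              # ~5 = Var_x + Var_y

samples = rng.normal(0, 3.0, size=(100_000, n))   # pairs of darts, true variance 9
print(samples.var(axis=1, ddof=1).mean())         # ~9 with the n-1 divisor (unbiased)
print(samples.var(axis=1, ddof=0).mean())         # ~4.5 with the n divisor (biased)
```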
What's the difference between a confidence interval and a credible interval?
I agree completely with Srikant's explanation. To give a more heuristic spin on it: Classical approaches generally posit that the world is one way (e.g., a parameter has one particular true value), and try to conduct experiments whose resulting conclusion -- no matter the true value of the parameter -- will be correct with at least some minimum probability. As a result, to express uncertainty in our knowledge after an experiment, the frequentist approach uses a "confidence interval" -- a range of values designed to include the true value of the parameter with some minimum probability, say 95%. A frequentist will design the experiment and 95% confidence interval procedure so that out of every 100 experiments run start to finish, at least 95 of the resulting confidence intervals will be expected to include the true value of the parameter. The other 5 might be slightly wrong, or they might be complete nonsense -- formally speaking that's ok as far as the approach is concerned, as long as 95 out of 100 inferences are correct. (Of course we would prefer them to be slightly wrong, not total nonsense.) Bayesian approaches formulate the problem differently. Instead of saying the parameter simply has one (unknown) true value, a Bayesian method says the parameter's value is fixed but has been chosen from some probability distribution -- known as the prior probability distribution. (Another way to say that is that before taking any measurements, the Bayesian assigns a probability distribution, which they call a belief state, on what the true value of the parameter happens to be.) This "prior" might be known (imagine trying to estimate the size of a truck, if we know the overall distribution of truck sizes from the DMV) or it might be an assumption drawn out of thin air. The Bayesian inference is simpler -- we collect some data, and then calculate the probability of different values of the parameter GIVEN the data. This new probability distribution is called the "a posteriori probability" or simply the "posterior." Bayesian approaches can summarize their uncertainty by giving a range of values on the posterior probability distribution that includes 95% of the probability -- this is called a "95% credibility interval." A Bayesian partisan might criticize the frequentist confidence interval like this: "So what if 95 out of 100 experiments yield a confidence interval that includes the true value? I don't care about 99 experiments I DIDN'T DO; I care about this experiment I DID DO. Your rule allows 5 out of the 100 to be complete nonsense [negative values, impossible values] as long as the other 95 are correct; that's ridiculous." A frequentist die-hard might criticize the Bayesian credibility interval like this: "So what if 95% of the posterior probability is included in this range? What if the true value is, say, 0.37? If it is, then your method, run start to finish, will be WRONG 75% of the time. Your response is, 'Oh well, that's ok because according to the prior it's very rare that the value is 0.37,' and that may be so, but I want a method that works for ANY possible value of the parameter. I don't care about 99 values of the parameter that IT DOESN'T HAVE; I care about the one true value IT DOES HAVE. Oh also, by the way, your answers are only correct if the prior is correct. If you just pull it out of thin air because it feels right, you can be way off." 
In a sense both of these partisans are correct in their criticisms of each other's methods, but I would urge you to think mathematically about the distinction -- as Srikant explains. Here's an extended example from that talk that shows the difference precisely in a discrete case.

When I was a child my mother used to occasionally surprise me by ordering a jar of chocolate-chip cookies to be delivered by mail. The delivery company stocked four different kinds of cookie jars -- type A, type B, type C, and type D, and they were all on the same truck and you were never sure what type you would get. Each jar had exactly 100 cookies, but the feature that distinguished the different cookie jars was their respective distributions of chocolate chips per cookie. If you reached into a jar and took out a single cookie uniformly at random, these are the probability distributions you would get on the number of chips: A type-A cookie jar, for example, has 70 cookies with two chips each, and no cookies with four chips or more! A type-D cookie jar has 70 cookies with one chip each. Notice how each vertical column is a probability mass function -- the conditional probability of the number of chips you'd get, given that the jar = A, or B, or C, or D, and each column sums to 100.

I used to love to play a game as soon as the deliveryman dropped off my new cookie jar. I'd pull one single cookie at random from the jar, count the chips on the cookie, and try to express my uncertainty -- at the 70% level -- about which jars it could be. Thus it's the identity of the jar (A, B, C or D) that is the value of the parameter being estimated. The number of chips (0, 1, 2, 3 or 4) is the outcome or the observation or the sample.

Originally I played this game using a frequentist, 70% confidence interval. Such an interval needs to make sure that no matter the true value of the parameter, meaning no matter which cookie jar I got, the interval would cover that true value with at least 70% probability. An interval, of course, is a function that relates an outcome (a row) to a set of values of the parameter (a set of columns). But to construct the confidence interval and guarantee 70% coverage, we need to work "vertically" -- looking at each column in turn, and making sure that 70% of the probability mass function is covered so that 70% of the time, that column's identity will be part of the interval that results. Remember that it's the vertical columns that form a p.m.f. So after doing that procedure, I ended up with these intervals: For example, if the number of chips on the cookie I draw is 1, my confidence interval will be {B,C,D}. If the number is 4, my confidence interval will be {B,C}. Notice that since each column sums to 70% or greater, then no matter which column we are truly in (no matter which jar the deliveryman dropped off), the interval resulting from this procedure will include the correct jar with at least 70% probability.

Notice also that the procedure I followed in constructing the intervals had some discretion. In the column for type-B, I could have just as easily made sure that the intervals that included B would be 0,1,2,3 instead of 1,2,3,4. That would have resulted in 75% coverage for type-B jars (12+19+24+20), still meeting the lower bound of 70%. My sister Bayesia thought this approach was crazy, though. "You have to consider the deliveryman as part of the system," she said.

"Let's treat the identity of the jar as a random variable itself, and let's assume that the deliveryman chooses among them uniformly -- meaning he has all four on his truck, and when he gets to our house he picks one at random, each with uniform probability." "With that assumption, now let's look at the joint probabilities of the whole event -- the jar type and the number of chips you draw from your first cookie," she said, drawing the following table: Notice that the whole table is now a probability mass function -- meaning the whole table sums to 100%. "Ok," I said, "where are you headed with this?"

"You've been looking at the conditional probability of the number of chips, given the jar," said Bayesia. "That's all wrong! What you really care about is the conditional probability of which jar it is, given the number of chips on the cookie! Your 70% interval should simply include the list of jars that, in total, have 70% probability of being the true jar. Isn't that a lot simpler and more intuitive?" "Sure, but how do we calculate that?" I asked. "Let's say we know that you got 3 chips. Then we can ignore all the other rows in the table, and simply treat that row as a probability mass function. We'll need to scale up the probabilities proportionately so each row sums to 100, though." She did: "Notice how each row is now a p.m.f., and sums to 100%. We've flipped the conditional probability from what you started with -- now it's the probability of the man having dropped off a certain jar, given the number of chips on the first cookie." "Interesting," I said. "So now we just circle enough jars in each row to get up to 70% probability?" We did just that, making these credibility intervals: Each interval includes a set of jars that, a posteriori, sum to 70% probability of being the true jar.

"Well, hang on," I said. "I'm not convinced. Let's put the two kinds of intervals side-by-side and compare them for coverage and, assuming that the deliveryman picks each kind of jar with equal probability, credibility." Here they are: Confidence intervals: Credibility intervals: "See how crazy your confidence intervals are?" said Bayesia. "You don't even have a sensible answer when you draw a cookie with zero chips! You just say it's the empty interval. But that's obviously wrong -- it has to be one of the four types of jars. How can you live with yourself, stating an interval at the end of the day when you know the interval is wrong? And ditto when you pull a cookie with 3 chips -- your interval is only correct 41% of the time. Calling this a '70%' confidence interval is bullshit."

"Well, hey," I replied. "It's correct 70% of the time, no matter which jar the deliveryman dropped off. That's a lot more than you can say about your credibility intervals. What if the jar is type B? Then your interval will be wrong 80% of the time, and only correct 20% of the time!" "This seems like a big problem," I continued, "because your mistakes will be correlated with the type of jar. If you send out 100 'Bayesian' robots to assess what type of jar you have, each robot sampling one cookie, you're telling me that on type-B days, you will expect 80 of the robots to get the wrong answer, each having >73% belief in its incorrect conclusion! That's troublesome, especially if you want most of the robots to agree on the right answer." "PLUS we had to make this assumption that the deliveryman behaves uniformly and selects each type of jar at random," I said. "Where did that come from? What if it's wrong? 
You haven't talked to him; you haven't interviewed him. Yet all your statements of a posteriori probability rest on this statement about his behavior. I didn't have to make any such assumptions, and my interval meets its criterion even in the worst case." "It's true that my credibility interval does perform poorly on type-B jars," Bayesia said. "But so what? Type B jars happen only 25% of the time. It's balanced out by my good coverage of type A, C, and D jars. And I never publish nonsense." "It's true that my confidence interval does perform poorly when I've drawn a cookie with zero chips," I said. "But so what? Chipless cookies happen, at most, 27% of the time in the worst case (a type-D jar). I can afford to give nonsense for this outcome because NO jar will result in a wrong answer more than 30% of the time." "The column sums matter," I said. "The row sums matter," Bayesia said. "I can see we're at an impasse," I said. "We're both correct in the mathematical statements we're making, but we disagree about the appropriate way to quantify uncertainty." "That's true," said my sister. "Want a cookie?"
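The jar tables in this answer were images that are not reproduced here, so the sketch below uses a made-up table, chosen only to respect the few numbers quoted in the text (70 two-chip cookies and no four-chip cookies in jar A, 70 one-chip and 27 chipless cookies in jar D, and the 12/19/24/20 figures quoted for jar B); everything else is hypothetical. Assuming NumPy is available, it only illustrates the two constructions in the story: the column-wise (confidence) rule that includes chip counts until each jar's column reaches 70%, versus the row-wise (credible) rule that includes jars until each normalized row reaches 70%.

```python
# Sketch of the column-wise vs row-wise constructions with a hypothetical jar table.
import numpy as np

jars = ["A", "B", "C", "D"]
chips = [0, 1, 2, 3, 4]
# Hypothetical pmf table: rows = number of chips, columns = jar type, entries in %.
# Each COLUMN (jar) sums to 100. This is NOT the answer's real table.
pmf = np.array([
    [ 5, 12, 20, 27],
    [15, 19, 25, 70],
    [70, 24, 25,  3],
    [10, 20, 20,  0],
    [ 0, 25, 10,  0],
])

# Frequentist: for each jar (column), greedily include chip counts until >= 70% is covered.
conf_sets = {j: set() for j in jars}
for c, jar in enumerate(jars):
    total = 0
    for r in np.argsort(pmf[:, c])[::-1]:       # most probable chip counts first
        conf_sets[jar].add(chips[r])
        total += pmf[r, c]
        if total >= 70:
            break
# Invert: for each observed chip count, report the jars whose chosen set contains it.
print({k: [j for j in jars if k in conf_sets[j]] for k in chips})

# Bayesian (uniform prior over jars): for each observed chip count (row), normalize the
# row and greedily include jars until >= 70% posterior probability is covered.
cred_sets = {}
for r, k in enumerate(chips):
    post = pmf[r] / pmf[r].sum()
    total, chosen = 0.0, []
    for c in np.argsort(post)[::-1]:
        chosen.append(jars[c])
        total += post[c]
        if total >= 0.70:
            break
    cred_sets[k] = chosen
print(cred_sets)
```

With this particular made-up table the two rules already disagree for several chip counts, which is the point of the dialogue: the first procedure guarantees at least 70% coverage column by column, the second guarantees 70% posterior probability row by row.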
What's the difference between a confidence interval and a credible interval?
I agree completely with Srikant's explanation. To give a more heuristic spin on it: Classical approaches generally posit that the world is one way (e.g., a parameter has one particular true value), an
What's the difference between a confidence interval and a credible interval? I agree completely with Srikant's explanation. To give a more heuristic spin on it: Classical approaches generally posit that the world is one way (e.g., a parameter has one particular true value), and try to conduct experiments whose resulting conclusion -- no matter the true value of the parameter -- will be correct with at least some minimum probability. As a result, to express uncertainty in our knowledge after an experiment, the frequentist approach uses a "confidence interval" -- a range of values designed to include the true value of the parameter with some minimum probability, say 95%. A frequentist will design the experiment and 95% confidence interval procedure so that out of every 100 experiments run start to finish, at least 95 of the resulting confidence intervals will be expected to include the true value of the parameter. The other 5 might be slightly wrong, or they might be complete nonsense -- formally speaking that's ok as far as the approach is concerned, as long as 95 out of 100 inferences are correct. (Of course we would prefer them to be slightly wrong, not total nonsense.) Bayesian approaches formulate the problem differently. Instead of saying the parameter simply has one (unknown) true value, a Bayesian method says the parameter's value is fixed but has been chosen from some probability distribution -- known as the prior probability distribution. (Another way to say that is that before taking any measurements, the Bayesian assigns a probability distribution, which they call a belief state, on what the true value of the parameter happens to be.) This "prior" might be known (imagine trying to estimate the size of a truck, if we know the overall distribution of truck sizes from the DMV) or it might be an assumption drawn out of thin air. The Bayesian inference is simpler -- we collect some data, and then calculate the probability of different values of the parameter GIVEN the data. This new probability distribution is called the "a posteriori probability" or simply the "posterior." Bayesian approaches can summarize their uncertainty by giving a range of values on the posterior probability distribution that includes 95% of the probability -- this is called a "95% credibility interval." A Bayesian partisan might criticize the frequentist confidence interval like this: "So what if 95 out of 100 experiments yield a confidence interval that includes the true value? I don't care about 99 experiments I DIDN'T DO; I care about this experiment I DID DO. Your rule allows 5 out of the 100 to be complete nonsense [negative values, impossible values] as long as the other 95 are correct; that's ridiculous." A frequentist die-hard might criticize the Bayesian credibility interval like this: "So what if 95% of the posterior probability is included in this range? What if the true value is, say, 0.37? If it is, then your method, run start to finish, will be WRONG 75% of the time. Your response is, 'Oh well, that's ok because according to the prior it's very rare that the value is 0.37,' and that may be so, but I want a method that works for ANY possible value of the parameter. I don't care about 99 values of the parameter that IT DOESN'T HAVE; I care about the one true value IT DOES HAVE. Oh also, by the way, your answers are only correct if the prior is correct. If you just pull it out of thin air because it feels right, you can be way off." 
In a sense both of these partisans are correct in their criticisms of each others' methods, but I would urge you to think mathematically about the distinction -- as Srikant explains. Here's an extended example from that talk that shows the difference precisely in a discrete example. When I was a child my mother used to occasionally surprise me by ordering a jar of chocolate-chip cookies to be delivered by mail. The delivery company stocked four different kinds of cookie jars -- type A, type B, type C, and type D, and they were all on the same truck and you were never sure what type you would get. Each jar had exactly 100 cookies, but the feature that distinguished the different cookie jars was their respective distributions of chocolate chips per cookie. If you reached into a jar and took out a single cookie uniformly at random, these are the probability distributions you would get on the number of chips: A type-A cookie jar, for example, has 70 cookies with two chips each, and no cookies with four chips or more! A type-D cookie jar has 70 cookies with one chip each. Notice how each vertical column is a probability mass function -- the conditional probability of the number of chips you'd get, given that the jar = A, or B, or C, or D, and each column sums to 100. I used to love to play a game as soon as the deliveryman dropped off my new cookie jar. I'd pull one single cookie at random from the jar, count the chips on the cookie, and try to express my uncertainty -- at the 70% level -- of which jars it could be. Thus it's the identity of the jar (A, B, C or D) that is the value of the parameter being estimated. The number of chips (0, 1, 2, 3 or 4) is the outcome or the observation or the sample. Originally I played this game using a frequentist, 70% confidence interval. Such an interval needs to make sure that no matter the true value of the parameter, meaning no matter which cookie jar I got, the interval would cover that true value with at least 70% probability. An interval, of course, is a function that relates an outcome (a row) to a set of values of the parameter (a set of columns). But to construct the confidence interval and guarantee 70% coverage, we need to work "vertically" -- looking at each column in turn, and making sure that 70% of the probability mass function is covered so that 70% of the time, that column's identity will be part of the interval that results. Remember that it's the vertical columns that form a p.m.f. So after doing that procedure, I ended up with these intervals: For example, if the number of chips on the cookie I draw is 1, my confidence interval will be {B,C,D}. If the number is 4, my confidence interval will be {B,C}. Notice that since each column sums to 70% or greater, then no matter which column we are truly in (no matter which jar the deliveryman dropped off), the interval resulting from this procedure will include the correct jar with at least 70% probability. Notice also that the procedure I followed in constructing the intervals had some discretion. In the column for type-B, I could have just as easily made sure that the intervals that included B would be 0,1,2,3 instead of 1,2,3,4. That would have resulted in 75% coverage for type-B jars (12+19+24+20), still meeting the lower bound of 70%. My sister Bayesia thought this approach was crazy, though. "You have to consider the deliverman as part of the system," she said. 
"Let's treat the identity of the jar as a random variable itself, and let's assume that the deliverman chooses among them uniformly -- meaning he has all four on his truck, and when he gets to our house he picks one at random, each with uniform probability." "With that assumption, now let's look at the joint probabilities of the whole event -- the jar type and the number of chips you draw from your first cookie," she said, drawing the following table: Notice that the whole table is now a probability mass function -- meaning the whole table sums to 100%. "Ok," I said, "where are you headed with this?" "You've been looking at the conditional probability of the number of chips, given the jar," said Bayesia. "That's all wrong! What you really care about is the conditional probability of which jar it is, given the number of chips on the cookie! Your 70% interval should simply include the list jars that, in total, have 70% probability of being the true jar. Isn't that a lot simpler and more intuitive?" "Sure, but how do we calculate that?" I asked. "Let's say we know that you got 3 chips. Then we can ignore all the other rows in the table, and simply treat that row as a probability mass function. We'll need to scale up the probabilities proportionately so each row sums to 100, though." She did: "Notice how each row is now a p.m.f., and sums to 100%. We've flipped the conditional probability from what you started with -- now it's the probability of the man having dropped off a certain jar, given the number of chips on the first cookie." "Interesting," I said. "So now we just circle enough jars in each row to get up to 70% probability?" We did just that, making these credibility intervals: Each interval includes a set of jars that, a posteriori, sum to 70% probability of being the true jar. "Well, hang on," I said. "I'm not convinced. Let's put the two kinds of intervals side-by-side and compare them for coverage and, assuming that the deliveryman picks each kind of jar with equal probability, credibility." Here they are: Confidence intervals: Credibility intervals: "See how crazy your confidence intervals are?" said Bayesia. "You don't even have a sensible answer when you draw a cookie with zero chips! You just say it's the empty interval. But that's obviously wrong -- it has to be one of the four types of jars. How can you live with yourself, stating an interval at the end of the day when you know the interval is wrong? And ditto when you pull a cookie with 3 chips -- your interval is only correct 41% of the time. Calling this a '70%' confidence interval is bullshit." "Well, hey," I replied. "It's correct 70% of the time, no matter which jar the deliveryman dropped off. That's a lot more than you can say about your credibility intervals. What if the jar is type B? Then your interval will be wrong 80% of the time, and only correct 20% of the time!" "This seems like a big problem," I continued, "because your mistakes will be correlated with the type of jar. If you send out 100 'Bayesian' robots to assess what type of jar you have, each robot sampling one cookie, you're telling me that on type-B days, you will expect 80 of the robots to get the wrong answer, each having >73% belief in its incorrect conclusion! That's troublesome, especially if you want most of the robots to agree on the right answer." "PLUS we had to make this assumption that the deliveryman behaves uniformly and selects each type of jar at random," I said. "Where did that come from? What if it's wrong? 
You haven't talked to him; you haven't interviewed him. Yet all your statements of a posteriori probability rest on this statement about his behavior. I didn't have to make any such assumptions, and my interval meets its criterion even in the worst case." "It's true that my credibility interval does perform poorly on type-B jars," Bayesia said. "But so what? Type B jars happen only 25% of the time. It's balanced out by my good coverage of type A, C, and D jars. And I never publish nonsense." "It's true that my confidence interval does perform poorly when I've drawn a cookie with zero chips," I said. "But so what? Chipless cookies happen, at most, 27% of the time in the worst case (a type-D jar). I can afford to give nonsense for this outcome because NO jar will result in a wrong answer more than 30% of the time." "The column sums matter," I said. "The row sums matter," Bayesia said. "I can see we're at an impasse," I said. "We're both correct in the mathematical statements we're making, but we disagree about the appropriate way to quantify uncertainty." "That's true," said my sister. "Want a cookie?"
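To make the "vertical versus horizontal" contrast in the story concrete, here is a small Python sketch of both constructions. The jar-by-chip table below is a hypothetical stand-in: the original figures are not reproduced in the text, and only a few cells (type A's 70 cookies with two chips, type D's 70 cookies with one chip, the type-B column values 12, 19, 24, 20) are pinned down by the story, so the resulting sets will not match the story's exact numbers.

```python
import numpy as np

# Hypothetical stand-in for the cookie-jar table (the original figures are not
# reproduced in the text).  Rows = number of chips (0..4), columns = jar type
# (A, B, C, D); each column is P(chips | jar) in percent and sums to 100.
table = np.array([
    [ 5, 12, 25, 27],   # 0 chips
    [20, 19, 22, 70],   # 1 chip
    [70, 24, 20,  2],   # 2 chips
    [ 5, 20, 18,  1],   # 3 chips
    [ 0, 25, 15,  0],   # 4 chips
], dtype=float)
jars = ["A", "B", "C", "D"]
level = 70.0

# Frequentist 70% confidence sets: work column by column ("vertically"), keep
# the most probable chip counts in each column until at least 70% of that
# column is covered (one valid choice; the story notes there is some
# discretion here).  The confidence set for an observed count is then every
# jar whose kept counts include that count.
kept = {}
for j, jar in enumerate(jars):
    order = np.argsort(table[:, j])[::-1]          # most probable counts first
    cum, chosen = 0.0, set()
    for chips in order:
        chosen.add(int(chips))
        cum += table[chips, j]
        if cum >= level:
            break
    kept[jar] = chosen
conf_sets = {chips: [jar for jar in jars if chips in kept[jar]] for chips in range(5)}

# Bayesian 70% credible sets: assume a uniform prior over jars, normalise each
# row ("horizontally") to get P(jar | chips), then add jars, most probable
# first, until at least 70% posterior probability is reached.
cred_sets = {}
for chips in range(5):
    post = table[chips] / table[chips].sum()
    order = np.argsort(post)[::-1]
    cum, chosen = 0.0, []
    for j in order:
        chosen.append(jars[j])
        cum += post[j]
        if cum >= level / 100:
            break
    cred_sets[chips] = chosen

print("confidence sets:", conf_sets)
print("credible sets:  ", cred_sets)
```

The confidence sets are guaranteed at least 70% coverage for every column (every jar), while the credible sets are guaranteed at least 70% posterior probability for every row (every observed chip count) under the uniform-deliveryman assumption -- exactly the column-sum versus row-sum standoff the siblings end on.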
What's the difference between a confidence interval and a credible interval?
My understanding is as follows:

Background

Suppose that you have some data $x$ and you are trying to estimate $\theta$. You have a data generating process that describes how $x$ is generated conditional on $\theta$. In other words you know the distribution of $x$ (say, $f(x|\theta)$).

Inference Problem

Your inference problem is: What values of $\theta$ are reasonable given the observed data $x$?

Confidence Intervals

Confidence intervals are a classical answer to the above problem. In this approach, you assume that there is a true, fixed value of $\theta$. Given this assumption, you use the data $x$ to get to an estimate of $\theta$ (say, $\hat{\theta}$). Once you have your estimate you want to assess where the true value is in relation to your estimate. Notice that under this approach the true value is not a random variable. It is a fixed but unknown quantity. In contrast, your estimate is a random variable, as it depends on your data $x$, which was generated from your data generating process. Thus, you realize that you get different estimates each time you repeat your study.

The above understanding leads to the following methodology to assess where the true parameter is in relation to your estimate. Define an interval, $I \equiv [lb(x), ub(x)]$, with the following property: $P(\theta \in I) = 0.95$. An interval constructed like the above is what is called a confidence interval. Since the true value is unknown but fixed, the true value is either in the interval or outside the interval. The confidence interval, then, is a statement about how often the procedure that produces such intervals captures the true parameter value. Thus, the probability statement is about the interval (i.e., the chance that the interval contains the true value or not) rather than about the location of the true parameter value. In this paradigm, it is meaningless to speak about the probability that the true value is less than or greater than some value, as the true value is not a random variable.

Credible Intervals

In contrast to the classical approach, in the Bayesian approach we assume that the true value is a random variable. Thus, we capture our uncertainty about the true parameter value by imposing a prior distribution on the true parameter vector (say, $f(\theta)$). Using Bayes' theorem, we construct the posterior distribution for the parameter vector by blending the prior and the data we have (briefly, the posterior is $f(\theta|-) \propto f(\theta) f(x|\theta)$). We then arrive at a point estimate using the posterior distribution (e.g., use the mean of the posterior distribution). However, since under this paradigm the true parameter vector is a random variable, we also want to know the extent of uncertainty we have in our point estimate. Thus, we construct an interval whose endpoints are computed from the posterior such that the following holds: $P(lb(x) \le \theta \le ub(x)) = 0.95$. The above is a credible interval.

Summary

Credible intervals capture our current uncertainty in the location of the parameter values and thus can be interpreted as a probabilistic statement about the parameter. In contrast, confidence intervals capture the uncertainty about the interval we have obtained (i.e., whether it contains the true value or not). Thus, they cannot be interpreted as a probabilistic statement about the true parameter values.
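As a concrete (if toy) illustration of the two constructions described above, here is a short Python sketch for a normal mean with the standard deviation assumed known; the prior parameters, true value, and sample size are invented for the example and are not part of the answer itself.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sigma = 2.0                       # assume the data s.d. is known, for simplicity
theta_true = 5.0
x = rng.normal(theta_true, sigma, size=25)
n, xbar = len(x), x.mean()

# 95% confidence interval: a procedure whose endpoints are random; over
# repeated samples it covers the fixed true theta about 95% of the time.
z = stats.norm.ppf(0.975)
ci = (xbar - z * sigma / np.sqrt(n), xbar + z * sigma / np.sqrt(n))

# 95% credible interval: put a prior on theta (here N(m0, s0^2)), combine it
# with f(x | theta) via Bayes' theorem, and report the central 95% of the
# posterior -- a probability statement about theta given *this* x.
m0, s0 = 0.0, 10.0
post_var = 1.0 / (1.0 / s0**2 + n / sigma**2)
post_mean = post_var * (m0 / s0**2 + n * xbar / sigma**2)
cred = stats.norm.interval(0.95, loc=post_mean, scale=np.sqrt(post_var))

print("95% confidence interval:", ci)
print("95% credible interval:  ", cred)
```

With a very flat prior the two intervals nearly coincide numerically, but the probability statements attached to them remain different: the first is about the procedure over repeated samples, the second about $\theta$ given this one sample.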
What's the difference between a confidence interval and a credible interval?
I disagree with Srikant's answer on one fundamental point. Srikant stated this: "Inference Problem: Your inference problem is: What values of θ are reasonable given the observed data x?" In fact this is the BAYESIAN INFERENCE PROBLEM. In Bayesian statistics we seek to calculate P(θ|x), i.e. the probability of the parameter value given the observed data (sample). The CREDIBLE INTERVAL is an interval of θ that has a 95% chance (or other) of containing the true value of θ, given the several assumptions underlying the problem.

The FREQUENTIST INFERENCE PROBLEM is this: Are the observed data x reasonable given the hypothesised values of θ? In frequentist statistics we seek to calculate P(x|θ), i.e. the probability of observing the data (sample) given the hypothesised parameter value(s). The CONFIDENCE INTERVAL (perhaps a misnomer) is interpreted as: if the experiment that generated the random sample x were repeated many times, 95% (or other) of such intervals constructed from those random samples would contain the true value of the parameter. Mess with your head? That's the problem with frequentist statistics and the main thing Bayesian statistics has going for it.

As Srikant points out, P(θ|x) and P(x|θ) are related as follows: P(θ|x) ∝ P(θ)P(x|θ), where P(θ) is our prior probability, P(x|θ) is the probability of the data given that parameter value (the likelihood), and P(θ|x) is the posterior probability; the proportionality constant is 1/P(x). The prior P(θ) is inherently subjective, but that is the price of knowledge about the Universe - in a very profound sense. The other parts of both Srikant's and Keith's answers are excellent.
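A quick way to see the repeated-sampling reading of a confidence interval described above is to simulate it. The sketch below (Python; the normal model, true value, and sample size are arbitrary choices for illustration) computes a 95% interval for a mean over many replications and checks how often the fixed true value is covered.

```python
import numpy as np
from scipy import stats

# The "95%" is a property of the interval-producing procedure over
# hypothetical repetitions, not of any one computed interval.
rng = np.random.default_rng(0)
theta, sigma, n, reps = 3.0, 1.0, 20, 10_000
z = stats.norm.ppf(0.975)

covered = 0
for _ in range(reps):
    x = rng.normal(theta, sigma, n)
    half = z * sigma / np.sqrt(n)
    covered += (x.mean() - half <= theta <= x.mean() + half)

print("empirical coverage:", covered / reps)   # close to 0.95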
What's the difference between a confidence interval and a credible interval?
The answers provided before are very helpful and detailed. Here is my $0.25.

The confidence interval (CI) is a concept based on the classical definition of probability (also called the "frequentist" definition), that probability is like a proportion, and is based on the axiomatic system of Kolmogorov (and others). Credible intervals (Highest Posterior Density, HPD) can be considered to have their roots in decision theory, based on the works of Wald and de Finetti (and extended a lot by others). As people in this thread have done a great job of giving examples and explaining the difference between the hypotheses in the Bayesian and frequentist cases, I will just stress a few important points.

CIs are based on the idea that inference MUST be made over all possible repetitions of an experiment that can be seen, and NOT only on the observed data, whereas HPDs are based ENTIRELY on the observed data (and, obviously, our prior assumptions). In general CIs are NOT coherent (this will be explained later), whereas HPDs are coherent (due to their roots in decision theory). Coherence (as I would explain it to my grandmother) means: given a betting problem on a parameter value, if a classical statistician (frequentist) bets on the CI and a Bayesian bets on the HPD, the frequentist IS BOUND to lose (excluding the trivial case when HPD = CI). In short, if you want to summarize the findings of your experiment as a probability based on the data, the probability HAS to be a posterior probability (based on a prior). There is a theorem (cf. Heath and Sudderth, Annals of Statistics, 1978) which (roughly) states: an assignment of probability to $\theta$ based on data will not make you a sure loser if and only if it is obtained in a Bayesian way.

As CIs don't condition on the observed data (this conditioning is called the "Conditionality Principle", CP), there can be paradoxical examples. Fisher was a big supporter of the CP and also found a lot of paradoxical examples when it was NOT followed (as in the case of CIs). This is the reason why he used p-values for inference, as opposed to CIs. In his view p-values were based on the observed data (much can be said about p-values, but that is not the focus here). Two of the most famous paradoxical examples are these.

Cox's example (Annals of Math. Stat., 1958): $X_i \sim \mathcal{N}(\mu, \sigma^2)$ (iid) for $i\in\{1,\dots,n\}$, and we want to estimate $\mu$. $n$ is NOT fixed and is chosen by tossing a coin: if the toss results in H, $n = 2$ is chosen, otherwise $n = 1000$. The "common sense" estimate, the sample mean, is an unbiased estimate with unconditional variance $0.25\sigma^2+0.0005\sigma^2$. What do we use as the variance of the sample mean when $n = 1000$? Isn't it better (or more sensible) to use the conditional variance of the sample mean, $0.001\sigma^2$, instead of the actual unconditional variance of the estimator, which is HUGE ($0.25\sigma^2+0.0005\sigma^2$)? This is a simple illustration of the CP: we use the variance $0.001\sigma^2$ when $n=1000$. $n$ on its own carries no information about $\mu$ or $\sigma$ (i.e., $n$ is ancillary for them), but GIVEN its value you know a lot about the "quality of the data". This relates directly to CIs, since a strictly unconditional CI uses the variance that is not conditioned on $n$, i.e., the larger variance, and hence is over-conservative.

Welch's example: This example works for any $n$, but we will take $n=2$ for simplicity. $X_1, X_2 \sim \mathcal{U}(\theta - 1/2, \theta +1/2)$ (iid), where $\theta$ belongs to the real line. This implies $X_1 - \theta \sim \mathcal{U}(-1/2, 1/2)$ (iid).
$\bar{x} - \theta = \frac{1}{2}(X_1 + X_2) - \theta$ (note that this is NOT a statistic) has a distribution independent of $\theta$. We can choose $c > 0$ such that $\text{Prob}_\theta(-c \le \bar{x} - \theta \le c) = 1-\alpha\ (\approx 99\%)$, implying $(\bar{x} - c, \bar{x} + c)$ is a 99% CI for $\theta$. The interpretation of this CI is: if we sample repeatedly, we will get different $\bar{x}$, and 99% of the time (at least) the interval will contain the true $\theta$; BUT (the elephant in the room) for GIVEN data we DON'T know the probability that the CI contains the true $\theta$. Now, consider the following data: $X_1 = 0$ and $X_2=1$. Since $|X_1 - X_2|=1$, we know FOR SURE that the interval $(X_1, X_2)$ contains $\theta$ (one possible criticism is that $\text{Prob}(|X_1 - X_2|=1) = 0$, but we can handle this mathematically and I won't discuss it). This example also illustrates the concept of coherence beautifully. If you are a classical statistician, you will definitely bet on the 99% CI without looking at the value of $|X_1 - X_2|$ (assuming you are true to your profession). However, a Bayesian will bet on the CI only if the value of $|X_1 - X_2|$ is close to 1. If we condition on $|X_1 - X_2|$, the interval is coherent and the player won't be a sure loser any longer (similar to the theorem by Heath and Sudderth); a small simulation of this conditional-coverage point follows this answer.

Fisher had a recommendation for such problems - use the CP. For Welch's example, Fisher suggested conditioning on $X_2-X_1$. As we see, $X_2-X_1$ is ancillary for $\theta$, but it tells you how much information about $\theta$ the data contain: if $X_2-X_1$ is SMALL, there is not a lot of information about $\theta$ in the data; if $X_2-X_1$ is LARGE, there is a lot of information about $\theta$ in the data. Fisher extended the strategy of conditioning on the ancillary statistic to a general theory called Fiducial Inference (also called his greatest failure, cf. Zabell, Stat. Sci., 1992), but it didn't become popular due to lack of generality and flexibility. Fisher was trying to find a way different from both classical statistics (of the Neyman school) and the Bayesian school (hence the famous adage from Savage: "Fisher wanted to make a Bayesian omelette (i.e. using the CP) without breaking the Bayesian eggs"). Folklore (no proof) says: Fisher in his debates attacked Neyman (over Type I and Type II error and CIs) by calling him a Quality Control guy rather than a Scientist, as Neyman's methods didn't condition on the observed data but instead looked at all possible repetitions.

Statisticians also want to use the Sufficiency Principle (SP) in addition to the CP. But SP and CP together imply the Likelihood Principle (LP) (cf. Birnbaum, JASA, 1962), i.e. given CP and SP, one must ignore the sample space and look at the likelihood function only. Thus, we only need to look at the given data and NOT at the whole sample space (looking at the whole sample space is in a way similar to repeated sampling). This has led to concepts like Observed Fisher Information (cf. Efron and Hinkley, AS, 1978), which measures the information in the data from a frequentist perspective. The amount of information in the data is a Bayesian concept (and hence related to HPDs), not to CIs. Kiefer did some foundational work on CIs in the late 1970s, but his extensions haven't become popular. A good source of reference is Berger ("Could Fisher, Neyman and Jeffreys agree about testing of hypotheses", Stat Sci, 2003).

Summary: (As pointed out by Srikant and others) CIs can't be interpreted as probability statements, and they don't tell you anything about the unknown parameter GIVEN the observed data.
CIs are statements about repeated experiments. HPDs are probabilistic intervals based on the posterior distribution of the unknown parameter, and they have a probability-based interpretation for the given data.

The frequentist (repeated sampling) property is a desirable one, and HPDs (with appropriate priors) and CIs both have it. HPDs also condition on the given data in answering questions about the unknown parameter (objective, NOT subjective). Bayesians agree with the classical statisticians that there is a single TRUE value of the parameter; they differ only in the way they make inference about this true parameter value. Bayesian HPDs give us a good way of conditioning on the data, but if they fail to agree with the frequentist properties of CIs they are not very useful (analogy: a person who uses HPDs (with some prior) without a good frequentist property is bound to be doomed, like a carpenter who only cares about the hammer and forgets the screwdriver).

At last, I have seen people in this thread (comments by Dr. Joris: "...assumptions involved imply a diffuse prior, i.e. a complete lack of knowledge about the true parameter.") talking about a lack of knowledge about the true parameter being equivalent to using a diffuse prior. I DON'T know if I can agree with that statement (Dr. Keith agrees with me). For example, in the basic linear models case, some distributions can be obtained by using a uniform prior (which some people call diffuse), BUT that DOESN'T mean the uniform distribution can be regarded as a LOW INFORMATION PRIOR. In general, a NON-INFORMATIVE (objective) prior does not necessarily carry low information about the parameter.

Note: A lot of these points are based on the lectures by one of the prominent Bayesians. I am still a student and could have misunderstood him in some way. Please accept my apologies in advance.
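Here is the small simulation of Welch's example promised above (Python; a sketch under the stated model, with $\theta$ fixed at 0 and $c = 0.45$, the value giving 99% unconditional coverage since $P(|\bar{x}-\theta|\le c) = 1-(1-2c)^2$ for this triangular distribution). It shows that the 99% figure holds on average over repetitions, while the coverage conditional on the ancillary $|X_1-X_2|$ ranges from about 90% up to exactly 100%.

```python
import numpy as np

# Welch's uniform example: X1, X2 ~ U(theta - 1/2, theta + 1/2), iid.
# The interval xbar +/- c with c = 0.45 has unconditional coverage 99%,
# but its coverage *conditional* on the ancillary |X1 - X2| differs.
rng = np.random.default_rng(0)
theta, c, reps = 0.0, 0.45, 200_000

x = rng.uniform(theta - 0.5, theta + 0.5, size=(reps, 2))
xbar = x.mean(axis=1)
d = np.abs(x[:, 0] - x[:, 1])        # ancillary: says how informative the data are
hit = np.abs(xbar - theta) <= c      # did the 99% CI contain theta?

print("overall coverage:             ", hit.mean())              # ~0.99
print("coverage given |X1-X2| < 0.01:", hit[d < 0.01].mean())    # ~0.90
print("coverage given |X1-X2| > 0.9: ", hit[d > 0.9].mean())     # exactly 1.0
```

This is the betting point above in numbers: for data with $|X_1-X_2|$ close to 1 the interval is certain to contain $\theta$, while for nearly coincident observations the 99% label overstates what the data warrant.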
What's the difference between a confidence interval and a credible interval?
Always fun to engage in a bit of philosophy. I quite like Keith's response; however, I would say that he is taking the position of "Mr forgetful Bayesia". The bad coverage for type B and type C jars can only come about if (s)he applies the same probability distribution at every trial and refuses to update his (her) prior. You can see this quite clearly, for the type A and type D jars make "definite predictions", so to speak (for 2-3 and 0-1 chips respectively), whereas type B and C jars basically give a uniform distribution of chips. So, on repetitions of the experiment with some fixed "true jar" (or if we sampled another biscuit), a uniform distribution of chips will provide evidence for type B or C jars.

And from the "practical" viewpoint, type B and C would require an enormous sample to be able to distinguish between them. The KL divergences between the two distributions are $KL(B||C) \approx 0.006 \approx KL(C||B)$. This is a divergence equivalent to two normal distributions, both with variance $1$, and a difference in means of $\sqrt{2\times 0.006}=0.11$. So we can't possibly be expected to be able to discriminate on the basis of one sample (for the normal case, we would require a sample size of about 320 to detect this difference at the 5% significance level). So we can justifiably collapse type B and type C together, until such time as we have a big enough sample.

Now what happens to those credible intervals? We actually now have 100% coverage of "B or C"! What about the frequentist intervals? The coverage is unchanged, as all intervals contained both B and C or neither, so it is still subject to the criticisms in Keith's response - 59% and 0% for 3 and 0 chips observed. But let's be pragmatic here. If you optimise something with respect to one function, it can't be expected to work well for a different function. However, both the frequentist and Bayesian intervals do achieve the desired credibility/confidence level on the average. We have $(0+99+99+59+99)/5=71.2$ - so the frequentist has appropriate average credibility. We also have $(98+60+66+97)/4=80.3$ - the Bayesian has appropriate average coverage.

Another point I would like to stress is that the Bayesian is not saying that "the parameter is random" by assigning a probability distribution. For the Bayesian (well, at least for me anyway) a probability distribution is a description of what is known about that parameter. The notion of "randomness" does not really exist in Bayesian theory, only the notions of "knowing" and "not knowing". The "knowns" go into the conditions, and the "unknowns" are what we calculate the probabilities for, if of interest, and marginalise over if a nuisance. So a credible interval describes what is known about a fixed parameter, averaging over what is not known about it. So if we were to take the position of the person who packed the cookie jar and knew that it was type A, their credibility interval would just be [A], regardless of the sample, and no matter how many samples were taken. And they would be 100% accurate!

A confidence interval is based on the "randomness" or variation which exists in the different possible samples. As such, the only variation that it takes into account is that in a sample. So the confidence interval is unchanged for the person who packed the cookie jar and knew that it was type A. So if you drew the biscuit with 1 chip out of the type A jar, the frequentist would assert with 70% confidence that the type was not A, even though they know the jar is type A!
(if they maintained their ideology and ignored their common sense). To see that this is the case, note that nothing in this situation has changed the sampling distribution - we have simply taken the perspective of a different person with "non-data"-based information about a parameter. Confidence intervals will change only when the data change or the model/sampling distribution changes. Credibility intervals can change if other relevant information is taken into account. Note that this crazy behavior is certainly not what a proponent of confidence intervals would actually do; but it does demonstrate a weakness in the philosophy underlying the method in a particular case. Confidence intervals work best when you don't know much about a parameter beyond the information contained in a data set. And further, credibility intervals won't be able to improve much on confidence intervals unless there is prior information which the confidence interval can't take into account, or when finding the sufficient and ancillary statistics is hard.
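The arithmetic behind the "B and C are practically indistinguishable" argument above can be checked in a few lines (Python sketch; reading the quoted ~320 as the sample size at which a two-sided 5% z-test's critical value equals the shift is my assumption, since the answer does not spell out the power calculation):

```python
import numpy as np
from scipy import stats

# KL divergence between N(mu1, 1) and N(mu2, 1) is (mu1 - mu2)^2 / 2, so a
# divergence of ~0.006 corresponds to a mean shift of sqrt(2 * 0.006).
kl = 0.006
delta = np.sqrt(2 * kl)                      # ~0.11
n = (stats.norm.ppf(0.975) / delta) ** 2     # ~320: n at which the 5% critical value equals delta
print(f"equivalent mean shift: {delta:.3f}, required n: {n:.0f}")

# The average credibility/coverage figures quoted in the answer:
print("frequentist average credibility:", (0 + 99 + 99 + 59 + 99) / 5)   # 71.2
print("Bayesian average coverage:      ", (98 + 60 + 66 + 97) / 4)       # 80.25
```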
What's the difference between a confidence interval and a credible interval?
As I understand it: a credible interval is a statement of the range of values for the statistic of interest that remain plausible given the particular sample of data that we have actually observed. A confidence interval is a statement of the frequency with which the true value lies in the confidence interval when the experiment is repeated a large number of times, each time with a different sample of data from the same underlying population.

Normally the question we want to answer is "what values of the statistic are consistent with the observed data", and the credible interval gives a direct answer to that question - the true value of the statistic lies in a 95% credible interval with probability 95%. The confidence interval does not give a direct answer to this question; it is not correct to assert that the probability that the true value of the statistic lies within the 95% confidence interval is 95% (unless it happens to coincide with the credible interval). However, this is a very common misinterpretation of a frequentist confidence interval, as it is the interpretation that would be a direct answer to the question. The paper by Jaynes that I discuss in another question gives a good example of this (example #5), where a perfectly correct confidence interval is constructed, yet the particular sample of data on which it is based rules out any possibility of the true value of the statistic being in the 95% confidence interval! This is only a problem if the confidence interval is incorrectly interpreted as a statement of plausible values of the statistic on the basis of the particular sample we have observed.

At the end of the day, it is a matter of "horses for courses", and which interval is best depends on the question you want answered - just choose the method that directly answers that question. I suspect confidence intervals are more useful when analysing [designed] repeatable experiments (as that is just the assumption underlying the confidence interval), and credible intervals better when analysing observational data, but that is just an opinion (I use both sorts of intervals in my own work, but wouldn't describe myself as an expert in either).
What's the difference between a confidence interval and a credible interval?
I find that a lot of interpretations of confidence intervals and credible sets are wrong. For example, a confidence interval cannot be expressed in the form $P(\theta\in CI)$. If you look closely at the 'distributions' used in frequentist and Bayesian inference, you will see that the frequentist works with the sampling distribution of the data while the Bayesian works with the (posterior) distribution of the parameter. They are defined on totally different sample spaces and sigma algebras.

So yes, you can say 'if you repeat the experiment a lot of times, approximately 95% of the 95% CIs will cover the true parameter'. And although in the Bayesian approach you get to say 'the true value of the statistic lies in a 95% credible interval with probability 95%', this 95% probability (in the Bayesian sense) is itself only an estimate. (Remember it is based on the conditional distribution given this specific data set, not on the sampling distribution.) This estimator comes with a random error due to the random sample.

Bayesians try to avoid the type I error issue. Bayesians always say that it does not make sense to talk about the type I error in the Bayesian framework. This is not entirely true. Statisticians always want to measure the possibility of the error that 'your data suggest one decision but the population suggests otherwise'. This is something Bayesian analysis cannot answer (details omitted here). Unfortunately, this may be the most important thing a statistician should answer. Statisticians do not just suggest a decision. Statisticians should also be able to address how badly the decision can go wrong.

I have to invent the following table and terms to explain the concept. Hope this can help explain the difference between a confidence interval and a credible set. Please note that the posterior distribution is $P(\theta_0|Data_n)$, where $\theta_0$ is defined from the prior $P(\theta_0)$. In frequentist statistics the sampling distribution is $P(Data_n; \theta)$. The sampling distribution of $\hat{\theta}$ is $P(\hat{\theta}_n; \theta)$. The subscript $n$ is the sample size. Please do not use the notation $P(Data_n | \theta)$ to present the sampling distribution in frequentist statistics. You can talk about random data in $P(Data_n; \theta)$ and $P(\hat{\theta}_n; \theta)$, but you cannot talk about random data in $P(\theta_0|Data_n)$. The '???????' explains why we are not able to evaluate the type I error (or anything similar) in the Bayesian framework.

Please also note that credible sets can be used to approximate confidence intervals under some circumstances. However, this is only a mathematical approximation. The interpretation should go with the frequentist one. The Bayesian interpretation in this case does not work anymore.

Thylacoleo's notation $P(x|\theta)$ is not frequentist. It is still Bayesian. This notation causes a fundamental problem in measure theory when talking about the frequentist approach.

I agree with the conclusion made by Dikran Marsupial. If you are the FDA reviewer, you always want to know the possibility that you approve a drug application when the drug is actually not efficacious. This is the answer that Bayesian analysis cannot provide, at least in classic/typical Bayesian analysis.
What's the difference between a confidence interval and a credible interval?
Generic and consistent confidence and credible regions. http://dx.doi.org/10.6084/m9.figshare.1528163, with code at http://dx.doi.org/10.6084/m9.figshare.1528187. The paper provides a description of credible intervals and confidence intervals for set selection, together with generic R code to calculate both given the likelihood function and some observed data. Further, it proposes a test statistic that gives credible and confidence intervals of optimal size that are consistent with each other.

In short, and avoiding formulas: the Bayesian credible interval is based on the probability of the parameters given the data. It collects the parameters that have a high probability into the credible set/interval. The 95% credible interval contains parameters that together have a probability of 0.95 given the data.

The frequentist confidence interval is based on the probability of the data given some parameters. For each parameter value (of which there may be infinitely many), it first generates the set of data that is likely to be observed given that parameter. It then checks, for each parameter value, whether this high-probability data set contains the observed data. If the high-probability data set contains the observed data, the corresponding parameter value is added to the confidence interval. Thus, the confidence interval is the collection of parameters for which we cannot rule out the possibility that the parameter has generated the data. This gives a rule such that, if applied repeatedly to similar problems, the 95% confidence interval will contain the true parameter value in 95% of the cases.

95% credible set and 95% confidence set for an example from a negative binomial distribution
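The construction described in words above can be sketched generically; here is a minimal Python version for a binomial model (the linked paper works with a negative binomial and supplies its own R code, so this is only an illustrative stand-in, with a simple central acceptance region and a uniform prior as assumptions):

```python
import numpy as np
from scipy import stats

# Test-inversion confidence set: keep a parameter p in the 95% set if the
# observed count lies inside a central >= 95%-probability region of the data
# that p would generate.
n, x_obs = 20, 6
grid = np.linspace(0.001, 0.999, 999)

conf = [p for p in grid
        if stats.binom.ppf(0.025, n, p) <= x_obs <= stats.binom.ppf(0.975, n, p)]

# 95% credible set: normalise the likelihood with a uniform (Beta(1,1)) prior,
# so the posterior is Beta(x_obs + 1, n - x_obs + 1), and take its central 95%.
cred = stats.beta.interval(0.95, x_obs + 1, n - x_obs + 1)

print("confidence set: [%.3f, %.3f]" % (min(conf), max(conf)))
print("credible set:   [%.3f, %.3f]" % cred)
```

Reading the confidence set this way - "the parameter values we cannot reject at the 5% level" - matches the answer's description of collecting every parameter whose high-probability data region contains what was actually observed.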
What's the difference between a confidence interval and a credible interval?
This is more of a comment, but it is too long for one. In the paper The Dawning of the Age of Stochasticity, David Mumford makes the following interesting comment: While all these really exciting uses were being made of statistics, the majority of statisticians themselves, led by Sir R.A. Fisher, were tying their hands behind their backs, insisting that statistics couldn't be used in any but totally reproducible situations and then only using the empirical data. This is the so-called 'frequentist' school which fought with the Bayesian school which believed that priors could be used and the use of statistical inference greatly extended. This approach denies that statistical inference can have anything to do with real thought because real-life situations are always buried in contextual variables and cannot be repeated. Fortunately, the Bayesian school did not totally die, being continued by DeFinetti, E.T. Jaynes, and others.
What's the difference between a confidence interval and a credible interval?
Both the confidence interval and the credible interval address the question, "what values of the parameter are consistent with the observed data?" The confidence interval does so by identifying those hypotheses for which the observed result is within a $100(1-\alpha)\%$ margin of error, while the credible interval does so by normalizing the likelihood with respect to a user-defined weight function (prior) and identifying an analogous interval. Both the confidence interval and the credible interval are sets in the parameter space determined by the observed data. The confidence interval addresses this question while providing frequency guarantees in repeated experiments. This performance is what gives the experimenter confidence that a single computed confidence interval actually covers the unknown fixed true parameter. The credible interval does not provide these same frequency guarantees, though if the user-defined weight function (prior) is chosen carefully the credible interval may have reasonable performance in repeated experiments. One can view the posterior distribution (depicting credible intervals of all levels) as a crude approximate p-value function depicting p-values and confidence intervals of all levels.
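As a small illustration of the p-value-function analogy at the end of that answer, here is a Python sketch (my own, not from the answer) for a normal mean with known standard deviation, where inverting the two-sided test and using a flat prior happen to give numerically identical intervals.

```python
# Sketch: 95% CI by test inversion versus a flat-prior 95% credible interval
# for a normal mean with known sigma; the data here are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sigma, n = 2.0, 25
x = rng.normal(loc=1.0, scale=sigma, size=n)
xbar, se = x.mean(), sigma / np.sqrt(n)

mu_grid = np.linspace(xbar - 4 * se, xbar + 4 * se, 4001)

# p-value function: two-sided p-value for H0: mu = mu0 at each mu0.
pvals = 2 * stats.norm.sf(np.abs(xbar - mu_grid) / se)
ci = mu_grid[pvals > 0.05]          # 95% CI by test inversion

# Flat-prior posterior for mu is N(xbar, se^2); 95% credible interval.
cred = stats.norm(xbar, se).ppf([0.025, 0.975])

print("95% CI (test inversion):", ci.min().round(3), ci.max().round(3))
print("95% credible interval:  ", cred.round(3))
```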
Bagging, boosting and stacking in machine learning
All three are so-called "meta-algorithms": approaches to combine several machine learning techniques into one predictive model in order to decrease the variance (bagging), decrease the bias (boosting), or improve the predictive force (stacking, also known as ensembling). Every algorithm consists of two steps: producing a distribution of simple ML models on subsets of the original data, and combining the distribution into one "aggregated" model. Here is a short description of all three methods: Bagging (which stands for Bootstrap Aggregating) is a way to decrease the variance of your prediction by generating additional data for training from your original dataset, sampling with replacement to produce multisets of the same cardinality/size as your original data. Increasing the size of your training set this way cannot improve the model's predictive force; it only decreases the variance, narrowly tuning the prediction to the expected outcome. Boosting is a two-step approach, where one first uses subsets of the original data to produce a series of averagely performing models and then "boosts" their performance by combining them together using a particular cost function (= majority vote). Unlike bagging, in classical boosting the subset creation is not random and depends upon the performance of the previous models: every new subset contains the elements that were (likely to be) misclassified by previous models. Stacking is similar to boosting: you also apply several models to your original data. The difference, however, is that you don't have just an empirical formula for your weight function; rather, you introduce a meta-level and use another model/approach to estimate the weights from the inputs together with the outputs of every model, in other words, to determine which models perform well and which perform badly given these input data. [Comparison table not reproduced here.] As you see, these are all different approaches to combine several models into a better one, and there is no single winner here: everything depends upon your domain and what you're going to do. You can still treat stacking as a sort of more advanced boosting; however, the difficulty of finding a good approach for your meta-level makes it difficult to apply this approach in practice. Short examples of each: bagging has been applied to ozone data, boosting is used to improve optical character recognition (OCR) accuracy, and stacking is used in the classification of cancer microarrays in medicine.
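For readers who want to try the three meta-algorithms, here is a minimal scikit-learn sketch on a synthetic dataset (my own illustration; the datasets and comparison table mentioned in the answer are not reproduced).

```python
# Minimal sketch of bagging, boosting, and stacking with scikit-learn
# on a synthetic classification problem (illustrative settings only).
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, AdaBoostClassifier,
                              StackingClassifier)
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

models = {
    # Bagging: many trees on bootstrap resamples, averaged to cut variance.
    "bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=100,
                                 random_state=0),
    # Boosting: weak learners added sequentially, each focusing on past errors.
    "boosting": AdaBoostClassifier(n_estimators=100, random_state=0),
    # Stacking: heterogeneous base models combined by a meta-level learner.
    "stacking": StackingClassifier(
        estimators=[("tree", DecisionTreeClassifier(random_state=0)),
                    ("svm", SVC(probability=True, random_state=0))],
        final_estimator=LogisticRegression()),
}

for name, model in models.items():
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:9s} cv accuracy: {score:.3f}")
```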
Bagging, boosting and stacking in machine learning
Bagging: a parallel ensemble in which each model is built independently; it aims to decrease variance, not bias, and is suitable for high-variance, low-bias (complex) models. A tree-based example is the random forest, which develops fully grown trees (note that RF modifies the growing procedure to reduce the correlation between trees). Boosting: a sequential ensemble that tries to add new models that do well where the previous models fall short; it aims to decrease bias, not variance, and is suitable for low-variance, high-bias models. A tree-based example is gradient boosting.
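A quick sketch contrasting the two tree-based examples named above on a synthetic regression problem; the data and hyperparameters are illustrative choices, not a benchmark.

```python
# Random forest (bagging-style) versus gradient boosting (boosting-style)
# on a synthetic regression task; settings are illustrative only.
from sklearn.datasets import make_friedman1
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

X, y = make_friedman1(n_samples=800, noise=1.0, random_state=0)

# Bagging-style: fully grown, decorrelated trees averaged in parallel.
rf = RandomForestRegressor(n_estimators=300, random_state=0)

# Boosting-style: shallow trees added sequentially to reduce bias.
gb = GradientBoostingRegressor(n_estimators=300, max_depth=3,
                               learning_rate=0.05, random_state=0)

for name, model in [("random forest", rf), ("gradient boosting", gb)]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name:17s} cv R^2: {r2:.3f}")
```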
Bagging, boosting and stacking in machine learning
Just to elaborate on Yuqian's answer a bit. The idea behind bagging is that when you OVERFIT with a nonparametric regression method (usually regression or classification trees, but it can be just about any nonparametric method), you tend to go to the high-variance, no (or low) bias part of the bias/variance tradeoff. This is because an overfitting model is very flexible (so low bias over many resamples from the same population, if those were available) but has high variability (if I collect a sample and overfit it, and you collect a sample and overfit it, our results will differ because the nonparametric regression tracks noise in the data). What can we do? We can take many resamples (from bootstrapping), each overfitting, and average them together. This should lead to the same (low) bias but cancel out some of the variance, at least in theory. Gradient boosting at its heart works with UNDERFIT nonparametric regressions that are too simple and thus aren't flexible enough to describe the real relationship in the data (i.e., biased) but, because they are underfitting, have low variance (you'd tend to get the same result if you collected new data sets). How do you correct for this? Basically, if you underfit, the RESIDUALS of your model still contain useful structure (information about the population), so you augment the tree you have (or whatever nonparametric predictor) with a tree built on the residuals. This should be more flexible than the original tree. You repeatedly generate more and more trees, the model at step k being augmented by a weighted tree fitted to the residuals from step k-1. One of these models should be close to optimal, so you either end up weighting all these trees together or selecting the one that appears to be the best fit. Thus gradient boosting is a way to build a bunch of more flexible candidate trees. Like all nonparametric regression or classification approaches, sometimes bagging or boosting works great, sometimes one or the other approach is mediocre, and sometimes one or the other approach (or both) will crash and burn. Also, both of these techniques can be applied to regression approaches other than trees, but they are most commonly associated with trees, perhaps because it is difficult to set parameters so as to avoid underfitting or overfitting.
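Here is a bare-bones sketch of the "fit a tree to the residuals and add it with a weight" idea described above, assuming squared-error loss and a constant shrinkage weight; it is a deliberately simplified stand-in for a full gradient boosting implementation.

```python
# Simplified boosting-on-residuals sketch: start from an underfit constant
# model and repeatedly add weighted shallow trees fitted to the residuals.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 6, size=(300, 1)), axis=0)
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=300)

n_rounds, lr = 100, 0.1
prediction = np.full_like(y, y.mean())   # deliberately underfit starting model
trees = []

for _ in range(n_rounds):
    residuals = y - prediction                       # structure the model missed
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    trees.append(tree)
    prediction += lr * tree.predict(X)               # augment with a weighted tree

print("training MSE after boosting:", round(np.mean((y - prediction) ** 2), 4))
```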
Bagging, boosting and stacking in machine learning
See my ensemble learning blog post. [An overview image appeared here; sources: Wikipedia, sklearn.]
Bagging, boosting and stacking in machine learning
To recap in short: bagging and boosting are normally used inside one algorithm, while stacking is usually used to summarize several results from different algorithms. Bagging: bootstrap subsets of features and samples to get several predictions and average (or otherwise combine) the results; for example, Random Forest, which reduces variance and is less prone to overfitting. Boosting: the difference from bagging is that each later model tries to learn from the errors made by the previous one; for example, GBM and XGBoost, which mainly reduce bias but can have overfitting issues. Stacking: normally used in competitions, where one uses multiple algorithms to train on the same data set and averages (or takes the max, min, or other combinations of) the results in order to get a higher prediction accuracy.
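A hand-rolled sketch of the stacking recipe in this recap, in which out-of-fold predictions from several algorithms become the features for a meta-model; the dataset and the particular base models are my own illustrative choices.

```python
# Manual stacking: out-of-fold probabilities from several algorithms
# become the features of a meta-model (illustrative settings only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)

base_models = [RandomForestClassifier(random_state=1),
               GradientBoostingClassifier(random_state=1)]

# Out-of-fold probability predictions avoid leaking the training labels.
meta_features = np.column_stack([
    cross_val_predict(m, X, y, cv=5, method="predict_proba")[:, 1]
    for m in base_models
])

meta_model = LogisticRegression()
print("stacked cv accuracy:",
      cross_val_score(meta_model, meta_features, y, cv=5).mean().round(3))
```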
Bagging, boosting and stacking in machine learning
Bagging: Bootstrap AGGregatING (Bagging) is an ensemble generation method that uses variations of the samples used to train base classifiers. For each classifier to be generated, Bagging selects (with replacement) N samples from a training set of size N and trains a base classifier. This is repeated until the desired size of the ensemble is reached. Bagging should be used with unstable classifiers, that is, classifiers that are sensitive to variations in the training set, such as decision trees and perceptrons. Random Subspace is an interesting similar approach that uses variations in the features instead of variations in the samples, usually indicated for datasets with many dimensions and a sparse feature space. Boosting: Boosting generates an ensemble by adding classifiers that correctly classify "difficult samples". At each iteration, boosting updates the weights of the samples so that samples misclassified by the ensemble get a higher weight, and therefore a higher probability of being selected for training the new classifier. Boosting is an interesting approach but is very noise-sensitive and is only effective using weak classifiers. There are several variations of boosting techniques (AdaBoost, BrownBoost, ...), each with its own weight-update rule designed to avoid specific problems (noise, class imbalance, ...). Stacking: Stacking is a meta-learning approach in which an ensemble is used to "extract features" that will be used by another layer of the ensemble. The following describes how this works (the figure from the Kaggle Ensembling Guide is not reproduced): first (at the bottom), several different classifiers are trained with the training set, and their outputs (probabilities) are used to train the next (middle) layer; finally, the outputs (probabilities) of the classifiers in the second layer are combined using the average (AVG). There are several strategies using cross-validation, blending, and other approaches to avoid stacking overfitting. Some general rules are to avoid such an approach on small datasets and to use diverse classifiers so that they can "complement" each other. Stacking has been used in several machine learning competitions such as Kaggle and TopCoder. It is definitely a must-know in machine learning.
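One way to sketch the Random Subspace idea mentioned above is with scikit-learn's BaggingClassifier, sampling features rather than training instances; the settings below are illustrative assumptions, not a recommendation.

```python
# Random Subspace sketch: keep all training samples but train each tree
# on a random subset of the features (illustrative settings only).
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=50, n_informative=10,
                           random_state=0)

random_subspace = BaggingClassifier(
    DecisionTreeClassifier(),
    n_estimators=100,
    bootstrap=False,        # keep all samples ...
    max_features=0.3,       # ... but train each tree on a random 30% of features
    random_state=0,
)

print("random subspace cv accuracy:",
      cross_val_score(random_subspace, X, y, cv=5).mean().round(3))
```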
Bagging, boosting and stacking in machine learning
Both bagging and boosting use a single learning algorithm for all steps, but they use different methods of handling the training samples; both are ensemble learning methods that combine decisions from multiple models. Bagging: 1. resample the training data to get M subsets (bootstrapping); 2. train M classifiers (same algorithm) on the M datasets (different samples); 3. the final classifier combines the M outputs by voting. Samples are weighted equally and classifiers are weighted equally; error is decreased by decreasing the variance. Boosting (focusing here on the AdaBoost algorithm): 1. start with equal weights for all samples in the first round; 2. in the following M-1 rounds, increase the weights of samples that were misclassified in the last round and decrease the weights of samples that were classified correctly; 3. using weighted voting, the final classifier combines the classifiers from the previous rounds, giving larger weights to classifiers with fewer misclassifications. Boosting re-weights the samples step by step, with the weights in each round based on the results of the last round: it re-weights samples (boosting) instead of resampling them (bagging).
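The weight-update scheme described above can be written out compactly. The following is a simplified two-class discrete AdaBoost sketch of my own, using decision stumps; it is meant to illustrate the update rule, not to replace a library implementation.

```python
# Two-class discrete AdaBoost sketch: re-weight samples each round and
# combine the stumps with a weighted vote.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y01 = make_classification(n_samples=500, n_features=10, random_state=0)
y = 2 * y01 - 1                       # recode labels to {-1, +1}

M = 50
w = np.full(len(y), 1 / len(y))       # round 1: equal weights
stumps, alphas = [], []

for _ in range(M):
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = np.sum(w * (pred != y)) / np.sum(w)
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))   # better stumps get a larger say
    w *= np.exp(-alpha * y * pred)    # up-weight misclassified, down-weight correct
    w /= w.sum()
    stumps.append(stump)
    alphas.append(alpha)

# Weighted vote over all rounds.
F = sum(a * s.predict(X) for a, s in zip(alphas, stumps))
print("training accuracy:", np.mean(np.sign(F) == y).round(3))
```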
Bagging, boosting and stacking in machine learning
Bagging and boosting tend to use many homogeneous models, while stacking combines results from heterogeneous model types. Since no single model type tends to be the best fit across an entire distribution, you can see why this may increase predictive power.
Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?
Part of the issue is that the frequentist definition of a probability doesn't allow a nontrivial probability to be applied to the outcome of a particular experiment, but only to some fictitious population of experiments from which this particular experiment can be considered a sample. The definition of a CI is confusing because it is a statement about this (usually) fictitious population of experiments, rather than about the particular data collected in the instance at hand. So part of the issue is one of the definition of a probability: the idea of the true value lying within a particular interval with probability 95% is inconsistent with a frequentist framework. Another aspect of the issue is that the calculation of the frequentist confidence interval doesn't use all of the information contained in the particular sample relevant to bounding the true value of the parameter. My question "Are there any examples where Bayesian credible intervals are obviously inferior to frequentist confidence intervals" discusses a paper by Edwin Jaynes which has some really good examples that highlight the difference between confidence intervals and credible intervals. One that is particularly relevant to this discussion is Example 5, which discusses the difference between a credible and a confidence interval for estimating the parameter of a truncated exponential distribution (for a problem in industrial quality control). In the example he gives, there is enough information in the sample to be certain that the true value of the parameter lies nowhere in a properly constructed 90% confidence interval! This may seem shocking to some, but the reason for this result is that confidence intervals and credible intervals are answers to two different questions, from two different interpretations of probability. The confidence interval is the answer to the request: "Give me an interval that will bracket the true value of the parameter in $100p$% of the instances of an experiment that is repeated a large number of times." The credible interval is an answer to the request: "Give me an interval that brackets the true value with probability $p$ given the particular sample I've actually observed." To be able to answer the latter request, we must first adopt either (a) a new concept of the data-generating process or (b) a different concept of the definition of probability itself. The main reason that any particular 95% confidence interval does not imply a 95% chance of containing the mean is that the confidence interval is an answer to a different question, so it is only the right answer when the answers to the two questions happen to have the same numerical solution. In short, credible and confidence intervals answer different questions from different perspectives; both are useful, but you need to choose the right interval for the question you actually want to ask. If you want an interval that admits an interpretation of a 95% (posterior) probability of containing the true value, then choose a credible interval (and, with it, the attendant conceptualization of probability), not a confidence interval. The thing you ought not to do is to adopt a different definition of probability in the interpretation than the one used in the analysis. Thanks to @cardinal for his refinements!
Here is a concrete example, from David MacKay's excellent book "Information Theory, Inference and Learning Algorithms" (page 464): let the parameter of interest be $\theta$ and the data $D$ a pair of points $x_1$ and $x_2$ drawn independently from the following distribution: $p(x|\theta) = \left\{\begin{array}{cl} 1/2 & x = \theta,\\1/2 & x = \theta + 1, \\ 0 & \mathrm{otherwise}\end{array}\right.$ If $\theta$ is $39$, then we would expect to see the datasets $(39,39)$, $(39,40)$, $(40,39)$ and $(40,40)$, each with probability $1/4$. Consider the confidence interval $[\theta_\mathrm{min}(D),\theta_\mathrm{max}(D)] = [\mathrm{min}(x_1,x_2), \mathrm{max}(x_1,x_2)]$. Clearly this is a valid 75% confidence interval, because if you re-sampled the data $D = (x_1,x_2)$ many times, the confidence interval constructed in this way would contain the true value 75% of the time. Now consider the data $D = (29,29)$. In this case the frequentist 75% confidence interval would be $[29, 29]$. However, assuming the model of the generating process is correct, $\theta$ could be 28 or 29 in this case, and we have no reason to suppose that 29 is more likely than 28, so the posterior probability is $p(\theta=28|D) = p(\theta=29|D) = 1/2$. So in this case the frequentist confidence interval is clearly not a 75% credible interval, as there is only a 50% probability that it contains the true value of $\theta$, given what we can infer about $\theta$ from this particular sample. Yes, this is a contrived example, but if confidence intervals and credible intervals were not different, then they would still be identical in contrived examples. Note that the key difference is that the confidence interval is a statement about what would happen if you repeated the experiment many times, while the credible interval is a statement about what can be inferred from this particular sample.
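A quick simulation of this example (my own sketch) confirms both halves of the argument: the [min, max] interval covers $\theta$ in about 75% of repeated datasets, yet for the particular data (29, 29) the flat-prior posterior puts only probability 1/2 on the interval containing $\theta$.

```python
# Simulation of the MacKay example: long-run coverage of [min(x1,x2), max(x1,x2)].
import numpy as np

rng = np.random.default_rng(0)
theta = 39
n_rep = 100_000

x = theta + rng.integers(0, 2, size=(n_rep, 2))   # each x_i is theta or theta + 1
lo, hi = x.min(axis=1), x.max(axis=1)
coverage = np.mean((lo <= theta) & (theta <= hi))
print("long-run coverage of [min, max]:", round(coverage, 3))   # approximately 0.75

# For the observed data (29, 29) the interval is [29, 29], but theta could be
# 28 or 29 with equal posterior probability under a flat prior, i.e. only 1/2.
```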
Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?
In frequentist statistics, probabilities are about events in the long run. They just don't apply to a single event after it's done. And the running of an experiment and the calculation of the CI is just such an event. You wanted to compare it to the probability of a hidden coin being heads, but you can't. You can relate it to something very close. If your game had a rule where you must state after the flip "heads", then the probability you'll be correct in the long run is 50%, and that is analogous. When you run your experiment and collect your data, you've got something similar to the actual flip of the coin. The process of the experiment is like the process of the coin flipping, in that it captures $\mu$ or it doesn't, just like the coin is heads or it's not. Once you flip the coin, whether you see it or not, there is no probability that it's heads: it's either heads or it's not. Now suppose you call heads. That's what calculating the CI is, because you can't ever reveal the coin (otherwise your analogy to an experiment would vanish). Either you're right or you're wrong, that's it. Does its current state have any relation to the probability of it coming up heads on the next flip, or to whether I could have predicted what it is? No. The process by which heads are produced has a 0.5 probability of producing them, but that does not mean that a head that already exists has a 0.5 probability of being a head. Once you calculate your CI there is no probability that it captures $\mu$; it either does or it doesn't, because you've already flipped the coin. OK, I think I've tortured that enough. The critical point is really that your analogy is misguided. You can never reveal the coin; you can only call heads or tails based on assumptions about coins (experiments). You might want to make a bet afterwards on your heads or tails being correct, but you can't ever collect on it. Also, it's a critical component of the CI procedure that you state that the value of importance is in the interval. If you don't, then you don't have a CI (or at least not one at the stated %). Probably the thing that makes the CI confusing is its name. It's a range of values that either does or doesn't contain $\mu$. We think it contains $\mu$, but the probability of that isn't the same as the probability associated with the process that went into developing it. The 95% part of the 95% CI name is just about the process. You can calculate a range that you believe afterwards contains $\mu$ at some probability level, but that's a different calculation and not a CI. It's better to think of the name 95% CI as a designation of a kind of measurement of a range of values that you think plausibly contains $\mu$, and to separate the 95% from that plausibility. We could call it the Jennifer CI while the 99% CI is the Wendy CI. That might actually be better. Then, afterwards, we can say that we believe $\mu$ is likely to be in the range of values, and no one would get stuck saying that there is a Wendy probability that we've captured $\mu$. If you'd like a different designation, I think you should probably feel free to get rid of the "confidence" part of CI as well (but it is an interval).
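To see the "the 95% is about the process" point numerically, here is a small simulation sketch (with made-up numbers): each computed interval either contains $\mu$ or it does not, while the long-run fraction of intervals that do is about 0.95.

```python
# Long-run coverage of the usual t-interval for a normal mean: any single
# interval either contains mu or it doesn't; about 95% of them do.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu, sigma, n, n_rep = 10.0, 3.0, 25, 20_000

covered = 0
for _ in range(n_rep):
    x = rng.normal(mu, sigma, size=n)
    half = stats.t.ppf(0.975, df=n - 1) * x.std(ddof=1) / np.sqrt(n)
    lo, hi = x.mean() - half, x.mean() + half
    covered += (lo <= mu <= hi)          # for this one interval: true or false

print("fraction of intervals covering mu:", round(covered / n_rep, 3))  # ~0.95
```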
Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?
Formal, explicit ideas about arguments, inference and logic originated, within the Western tradition, with Aristotle. Aristotle wrote about these topics in several different works (including one called the Topics ;-) ). However, the most basic single principle is The Law of Non-contradiction, which can be found in various places, including Metaphysics book IV, chapters 3 & 4. A typical formulation is: " ...it is impossible for anything at the same time to be and not to be [in the same sense]" (1006 a 1). Its importance is stated slightly earlier, " ...this is naturally the starting-point even for all the other axioms" (1005 b 30). Forgive me for waxing philosophical, but this question by its nature has philosophical content that cannot simply be pushed aside for convenience. Consider this thought-experiment: Alex flips a coin, catches it and turns it over onto his forearm with his hand covering the side facing up. Bob was standing in just the right position; he briefly saw the coin in Alex's hand, and thus can deduce which side is facing up now. However, Carlos did not see the coin--he wasn't in the right spot. At this point, Alex asks them what the probability is that the coin shows heads. Carlos suggests that the probability is .5, as that is the long-run frequency of heads. Bob disagrees; he confidently asserts that the probability is nothing else but exactly 0. Now, who is right? It is possible, of course, that Bob mis-saw and is incorrect (let us assume that he did not mis-see). Nonetheless, you cannot hold that both are right and hold to the law of non-contradiction. (I suppose that if you don't believe in the law of non-contradiction, you could think they're both right, or some other such formulation.) Now imagine a similar case, but without Bob present: could Carlos' suggestion be more right (eh?) without Bob around, since no one saw the coin? The application of the law of non-contradiction is not quite as clear in this case, but I think it is obvious that the parts of the situation that seem to be important are held constant from the former to the latter. There have been many attempts to define probability, and in the future there may still yet be many more, but a definition of probability as a function of who happens to be standing around and where they happen to be positioned has little appeal. At any rate (guessing by your use of the phrase "confidence interval"), we are working within the Frequentist approach, and therein whether anyone knows the true state of the coin is irrelevant. It is not a random variable--it is a realized value, and either it shows heads or it shows tails. As @John notes, the state of a coin may not at first seem similar to the question of whether a confidence interval covers the true mean. However, instead of a coin, we can understand this abstractly as a realized value drawn from a Bernoulli distribution with parameter $p$. In the coin situation, $p=.5$, whereas for a 95% CI, $p=.95$. What's important to realize in making the connection is that the important part of the metaphor isn't the $p$ that governs the situation, but rather that the flipped coin or the calculated CI is a realized value, not a random variable. It is important for me to note at this point that all of this is the case within a Frequentist conception of probability. The Bayesian perspective does not violate the law of non-contradiction; it simply starts from different metaphysical assumptions about the nature of reality (more specifically about probability).
Others on CV are much better versed in the Bayesian perspective than I am, and perhaps they can explain why the assumptions behind your question do not apply within the Bayesian approach, and why, in fact, there may well be a 95% probability of the mean lying within a 95% credible interval, under certain conditions including (among others) that the prior used is accurate (see the comment by @DikranMarsupial below). However, I think all would agree that once you state you are working within the Frequentist approach, it cannot be the case that the probability of the true mean lying within any particular 95% CI is .95.
Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?
Why does a 95% CI not imply a 95% chance of containing the mean? There are many issues to be clarified in this question and in the majority of the given responses. I shall confine myself to only two of them.

a. What is a population mean? Does a true population mean exist?

The concept of a population mean is model-dependent. Since all models are wrong, but some are useful, this population mean is a fiction that is defined just to provide useful interpretations. The fiction begins with a probability model. The probability model is defined by the triplet $$(\mathcal{X}, \mathcal{F}, P),$$ where $\mathcal{X}$ is the sample space (a non-empty set), $\mathcal{F}$ is a family of subsets of $\mathcal{X}$ and $P$ is a well-defined probability measure defined over $\mathcal{F}$ (it governs the data behavior). Without loss of generality, consider only the discrete case. The population mean is defined by $$ \mu = \sum_{x \in \mathcal{X}} xP(X=x), $$ that is, it represents the central tendency under $P$, and it can also be interpreted as the center of mass of all points in $\mathcal{X}$, where the weight of each $x \in \mathcal{X}$ is given by $P(X=x)$.

In probability theory the measure $P$ is considered known, so the population mean is accessible through the above simple operation. In practice, however, the probability $P$ is hardly ever known. Without a probability $P$, one cannot describe the probabilistic behavior of the data. As we cannot set a precise probability $P$ to explain the data behavior, we set a family $\mathcal{M}$ containing probability measures that possibly govern (or explain) the data behavior. Then the classical statistical model emerges: $$(\mathcal{X}, \mathcal{F}, \mathcal{M}).$$ The above model is said to be a parametric model if there exists $\Theta \subseteq \mathbb{R}^p$ with $p< \infty$ such that $\mathcal{M} \equiv \{P_\theta: \ \theta \in \Theta\}$. Let us consider just the parametric model in this post.

Notice that, for each probability measure $P_\theta \in \mathcal{M}$, there is a respective mean definition $$\mu_\theta = \sum_{x \in \mathcal{X}} x P_\theta(X=x).$$ That is, there is a family of population means $\{\mu_\theta: \ \theta \in \Theta\}$ that depends tightly on the definition of $\mathcal{M}$. The family $\mathcal{M}$ is defined by limited humans and therefore may not contain the true probability measure that governs the data behavior. In fact, the chosen family will hardly ever contain the true measure; moreover, that true measure may not even exist. As the concept of a population mean depends on the probability measures in $\mathcal{M}$, the population mean is model-dependent. The Bayesian approach considers a prior probability over the subsets of $\mathcal{M}$ (or, equivalently, $\Theta$), but in this post I will concentrate only on the classical version.

b. What are the definition and the purpose of a confidence interval?

As mentioned above, the population mean is model-dependent and provides useful interpretations. However, we have a family of population means, because the statistical model is defined by a family of probability measures (each probability measure generates a population mean). Therefore, based on an experiment, inferential procedures should be employed in order to estimate a small set (interval) containing good candidates for the population mean.

One well-known procedure is the ($1-\alpha$) confidence region, which is defined by a set $C_\alpha$ such that, for all $\theta \in \Theta$, $$ P_\theta(C_\alpha(X) \ni \mu_\theta) \geq 1-\alpha \ \ \ \mbox{and} \ \ \ \inf_{\theta\in \Theta} P_\theta(C_\alpha(X) \ni \mu_\theta) = 1-\alpha, $$ where $P_\theta(C_\alpha(X) = \varnothing) = 0$ (see Schervish, 1995). This is a very general definition and encompasses virtually any type of confidence interval. Here, $P_\theta(C_\alpha(X) \ni \mu_\theta)$ is the probability that $C_\alpha(X)$ contains $\mu_\theta$ under the measure $P_\theta$. This probability should always be greater than (or equal to) $1-\alpha$; equality occurs in the worst case.

Remark: The reader should notice that it is not necessary to make assumptions about the state of reality; the confidence region is defined for a well-defined statistical model without making reference to any "true" mean. Even if the "true" probability measure does not exist or is not in $\mathcal{M}$, the confidence-region definition will still work, since the assumptions are about statistical modelling rather than states of reality.

On the one hand, before observing the data, $C_\alpha(X)$ is a random set (or random interval) and the probability that "$C_\alpha(X)$ contains the mean $\mu_\theta$" is at least $(1-\alpha)$ for all $\theta \in \Theta$. This is a very desirable feature for the frequentist paradigm. On the other hand, after observing the data $x$, $C_\alpha(x)$ is just a fixed set and the probability that "$C_\alpha(x)$ contains the mean $\mu_\theta$" is in $\{0,1\}$ for all $\theta \in \Theta$. That is, after observing the data $x$, we can no longer employ probabilistic reasoning. As far as I know, there is no theory that treats confidence sets for an observed sample (I am working on it and I am getting some nice results). For now, the frequentist must believe that the observed set (or interval) $C_\alpha(x)$ is one of the $(1-\alpha)100\%$ sets that contain $\mu_\theta$ for all $\theta\in \Theta$.

PS: I invite any comments, reviews, critiques, or even objections to my post. Let's discuss it in depth. As I am not a native English speaker, my post surely contains typos and grammar mistakes.

Reference: Schervish, M. (1995), Theory of Statistics, Second ed., Springer.
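As a small illustration of the coverage definition above, here is a short R sketch (my own construction, not part of the original answer; the binomial model, the sample size n = 20, the grid of theta values, and the use of the exact Clopper-Pearson interval are all assumptions made just for the example). It computes the exact coverage function of a 95% interval and shows the "$\geq 1-\alpha$" part of the definition; note that the Clopper-Pearson interval is conservative, so its infimum here sits slightly above 0.95 rather than attaining it exactly.

# Exact coverage of the Clopper-Pearson 95% interval for a binomial
# proportion theta: coverage is at least 0.95 for every theta.
n     <- 20
alpha <- 0.05

# Interval endpoints for each possible count x = 0, ..., n
ci <- t(sapply(0:n, function(x) binom.test(x, n, conf.level = 1 - alpha)$conf.int))

coverage <- function(theta) {
  inside <- ci[, 1] <= theta & theta <= ci[, 2]   # counts x whose interval contains theta
  sum(dbinom(0:n, n, theta)[inside])              # P_theta(interval contains theta)
}

thetas <- seq(0.01, 0.99, by = 0.01)
round(range(sapply(thetas, coverage)), 3)         # the minimum over this grid is >= 0.95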
Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?
I'm surprised that no one has brought up Berger's example of an essentially useless 75% confidence interval described in the second chapter of "The Likelihood Principle". The details can be found in the original text (which is available for free on Project Euclid): what is essential about the example is that it describes, unambiguously, a situation in which you know with absolute certainty the value of an ostensibly unknown parameter after observing data, but you would assert that you have only 75% confidence that your interval contains the true value. Working through the details of that example was what enabled me to understand the entire logic of constructing confidence intervals. Edit: The Project Euclid link appears to be broken as of 2022-01-21. The monograph can be found e.g. here or here.
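For readers without the monograph to hand, the example is, as I recall it, essentially the following (treat these details as my reconstruction rather than a quotation from the text): observe $x_1, x_2$ independently, where each $x_i$ equals $\theta - 1$ or $\theta + 1$ with probability $1/2$; report the set $\{(x_1+x_2)/2\}$ when $x_1 \neq x_2$ and $\{x_1 - 1\}$ otherwise. A quick R check of its behaviour, with an arbitrary true value:

# Simulated coverage of the 75% "confidence set" sketched above.
set.seed(42)
theta <- 5
reps  <- 100000
x1 <- theta + sample(c(-1, 1), reps, replace = TRUE)
x2 <- theta + sample(c(-1, 1), reps, replace = TRUE)

estimate <- ifelse(x1 != x2, (x1 + x2) / 2, x1 - 1)
covered  <- estimate == theta

mean(covered)              # about 0.75 overall
mean(covered[x1 != x2])    # exactly 1: here the data identify theta with certainty
mean(covered[x1 == x2])    # about 0.5

The set covers $\theta$ in about 75% of repetitions, yet whenever $x_1 \neq x_2$ the data reveal $\theta$ exactly, which is the sense in which the 75% statement, while true of the procedure, is useless as a post-data statement.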
Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?
I don't know whether this should be asked as a new question, but it addresses the very same question asked above by proposing a thought experiment. Firstly, I'm going to assume that if I select a playing card at random from a standard deck, the probability that I've selected a club (without looking at it) is 13/52 = 25%. And secondly, it's been stated many times that a 95% confidence interval should be interpreted in terms of repeating an experiment multiple times, with the calculated interval containing the true mean 95% of the time -- I think this was demonstrated reasonably convincingly by James Waters's simulation. Most people seem to accept this interpretation of a 95% CI.

Now, for the thought experiment. Let's assume that we have a normally distributed variable in a large population -- maybe heights of adult males or females. I have a willing and tireless assistant whom I task with performing multiple sampling processes of a given sample size from the population and calculating the sample mean and 95% confidence interval for each sample. My assistant is very keen and manages to measure all possible samples from the population. Then, for each sample, my assistant records the resulting confidence interval as green (if the CI contains the true mean) or red (if the CI doesn't contain the true mean). Unfortunately, my assistant will not show me the results of his experiments. I need to get some information about the heights of adults in the population, but I only have the time, resources and patience to do the experiment once. I make a single random sample (of the same sample size used by my assistant) and calculate the confidence interval (using the same equation). I have no way of seeing my assistant's results.

So, what is the probability that the random sample I have selected will yield a green CI (i.e. the interval contains the true mean)? In my mind, this is the same as the deck-of-cards situation outlined previously and can be interpreted as meaning there is a 95% probability that the calculated interval contains the true mean (i.e. is green). And yet, the consensus seems to be that a 95% confidence interval can NOT be interpreted as there being a 95% probability that the interval contains the true mean. Why (and where) does my reasoning in the above thought experiment fall apart?
Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?
For practical purposes, you're no more wrong to bet that your 95% CI included the true mean at 95:5 odds than you are to bet on your friend's coin flip at 50:50 odds. If your friend has already flipped the coin and you think there's a 50% probability of it being heads, then you're just using a different definition of the word probability. As others have said, for frequentists you can't assign a probability to an event having occurred; rather, you can describe the probability of an event occurring in the future using a given process. From another blog: The frequentist will say: "A particular event cannot have a probability. The coin shows either heads or tails, and unless you show it, I simply can't say what is the fact. Only if you would repeat the toss many, many times, and if you vary the initial conditions of the tosses strongly enough, I'd expect that the relative frequency of heads in all these many tosses will approach 0.5." http://www.researchgate.net/post/What_is_the_difference_between_frequentist_and_bayesian_probability
Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?
While there has been extensive discussion in the numerous great answers, I want to add a simpler perspective (although it has been alluded to in other answers, not explicitly). For some parameter $\theta$, and given a sample $(X_1,X_2,\cdots,X_n)$, a $100p\%$ confidence interval is a probability statement of the form $$P\left(g(X_1,X_2,\cdots,X_n)<\theta<f(X_1,X_2,\cdots,X_n)\right)=p$$ If we consider $\theta$ to be a constant, then the above statement is about the random variables $g(X_1,X_2,\cdots,X_n)$ and $f(X_1,X_2,\cdots,X_n)$, or more accurately, about the random interval $\left(g(X_1,X_2,\cdots,X_n),f(X_1,X_2,\cdots,X_n)\right)$. So instead of giving any information about the probability of the parameter being contained in the interval, it gives information about the probability of the interval containing the parameter -- as the interval is made from random variables.
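For the textbook case (my own concrete instance, assuming a normal sample with known standard deviation $\sigma$), the functions $g$ and $f$ come from the pivot $\sqrt{n}(\bar X - \theta)/\sigma$: $$P\left(\bar X - 1.96\frac{\sigma}{\sqrt{n}} < \theta < \bar X + 1.96\frac{\sigma}{\sqrt{n}}\right) = P\left(-1.96 < \frac{\sqrt{n}(\bar X - \theta)}{\sigma} < 1.96\right) = 0.95,$$ so here $g(X_1,\ldots,X_n) = \bar X - 1.96\,\sigma/\sqrt{n}$ and $f(X_1,\ldots,X_n) = \bar X + 1.96\,\sigma/\sqrt{n}$. The randomness sits entirely in the endpoints $g$ and $f$; once they are evaluated at an observed sample, nothing random is left.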
Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?
It all depends on whether you are looking at the probability conditional or unconditional on the data. Suppose you have an unknown parameter $\theta \in \Theta$ and you make a confidence interval for this parameter using sample data $\mathbf{x}$. Let $\text{CI}_\theta(\mathbf{X},1-\alpha)$ denote the (random) confidence interval at confidence level $1-\alpha$ and with (random) data $\mathbf{X}$. An exact confidence interval satisfies the following conditional probability condition: $$\mathbb{P}(\theta \in \text{CI}_\theta(\mathbf{X},1-\alpha) | \theta) = 1-\alpha \quad \quad \quad \quad \quad \text{for all } \theta \in \Theta.$$ If we are willing to ascribe a probability distribution to $\theta$ (e.g., as in Bayesian analysis) this also implies the marginal probability that: $$\mathbb{P}(\theta \in \text{CI}_\theta(\mathbf{X},1-\alpha)) = 1-\alpha.$$ However, it is not generally true that: $$\mathbb{P}(\theta \in \text{CI}_\theta(\mathbf{X},1-\alpha) | \mathbf{X} = \mathbf{x}) = 1-\alpha.$$ As you can see from the above, if we are looking at the probability unconditional on the data (and either conditional or unconditional on the parameter) then we can say that the probability of the unknown quantity falling into the confidence interval is equal to the confidence level. However, if we are looking at the probability conditional on the data we cannot say that the probability of the unknown quantity falling into the confidence interval is equal to the confidence level. Typically, we frame this by saying that the confidence interval procedure/method (considered prior to substitution of the data) will cover the true parameter with probability equal to the confidence level, but once we have an actual confidence interval (i.e., after substituting the observed data and conditioning our probability statements on the data) this probability statement no longer holds. This is the reason we refer to having 95% "confidence" rather than 95% probability for the parameter being in the interval.
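As a concrete illustration of the last statement (my own toy example; the normal-normal model and the specific numbers are assumptions made purely for illustration): suppose $\theta \sim N(0,1)$ and $X \mid \theta \sim N(\theta, 1)$, and take the usual interval $\text{CI}_\theta(X, 0.95) = [X - 1.96,\ X + 1.96]$. Conditional on the parameter, and also marginally, the coverage is exactly $0.95$. But conditional on the data, $\theta \mid X = x \sim N(x/2, 1/2)$, so $$\mathbb{P}(\theta \in \text{CI}_\theta(X, 0.95) \mid X = x) = \Phi\!\left(\frac{x/2 + 1.96}{\sqrt{1/2}}\right) - \Phi\!\left(\frac{x/2 - 1.96}{\sqrt{1/2}}\right),$$ which is roughly $0.99$ at $x=0$ but only about $0.22$ at $x=5$, even though the unconditional coverage is exactly $0.95$.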
Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?
In this answer to a different question, Are there any examples where Bayesian credible intervals are obviously inferior to frequentist confidence intervals, I explained a difference between confidence intervals and credible intervals. Both intervals can be constructed such that they contain the true parameter a certain fraction of the time. However, there is a difference between conditioning on the observation and conditioning on the true parameter value. An $\alpha \%$ confidence interval contains the parameter a fraction $\alpha \%$ of the time for every value of the true parameter, but it does not, in general, contain the parameter a fraction $\alpha \%$ of the time for every value of the observation. This contrasts with a credible interval: an $\alpha \%$ credible interval contains the parameter a fraction $\alpha \%$ of the time for every value of the observation, but it does not, in general, contain the parameter a fraction $\alpha \%$ of the time for every value of the true parameter. See also the image accompanying that answer.
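To see both of these dual statements side by side, here is a rough R sketch (entirely my own construction; the normal-normal model, the prior theta ~ N(0, 1), the sample size n = 4, and the binning are all assumptions made just for the illustration):

# theta ~ N(0, 1), xbar | theta ~ N(theta, 1/n). Compare the usual 95% CI
# with the 95% posterior credible interval, grouping coverage two ways.
set.seed(123)
n    <- 4
reps <- 200000
theta <- rnorm(reps, 0, 1)
xbar  <- rnorm(reps, theta, 1 / sqrt(n))

# Confidence interval: xbar +/- 1.96/sqrt(n)
in.ci <- abs(xbar - theta) <= 1.96 / sqrt(n)

# Credible interval from the conjugate posterior:
# theta | xbar ~ N(n*xbar/(n+1), 1/(n+1))
post.mean <- n * xbar / (n + 1)
post.sd   <- sqrt(1 / (n + 1))
in.cred   <- abs(theta - post.mean) <= 1.96 * post.sd

# Grouped by the true theta: the CI covers ~95% in every bin; the credible
# interval falls noticeably below 95% in the extreme bins.
theta.bin <- cut(theta, c(-Inf, -2, -1, 0, 1, 2, Inf))
round(tapply(in.ci,   theta.bin, mean), 3)
round(tapply(in.cred, theta.bin, mean), 3)

# Grouped by the observed xbar: the credible interval covers ~95% in every
# bin; the CI falls noticeably below 95% in the extreme bins.
xbar.bin <- cut(xbar, c(-Inf, -2, -1, 0, 1, 2, Inf))
round(tapply(in.cred, xbar.bin, mean), 3)
round(tapply(in.ci,   xbar.bin, mean), 3)

The confidence interval's coverage is stable across the theta bins but drifts across the xbar bins; the credible interval behaves the other way around.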
Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?
Say that the CI you calculated from the particular set of data you have is one of the 5% of possible CIs that does not contain the mean. How close is it to being the 95% credible interval that you would like to imagine it to be? (That is, how close is it to containing the mean with 95% probability?) You have no assurance that it's close at all. In fact, your CI may not overlap with even a single one of the 95% of 95% CIs which do actually contain the mean. Not to mention that it doesn't contain the mean itself, which also suggests it's not a 95% credible interval. Maybe you want to ignore this and optimistically assume that your CI is one of the 95% that does contain the mean. OK, what do we know about your CI, given that it's in the 95%? That it contains the mean, but perhaps only way out at the extreme, excluding everything else on the other side of the mean. Not likely to contain 95% of the distribution. Either way, there's no guarantee, perhaps not even a reasonable hope that your 95% CI is a 95% credible interval.
Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?
First, let's give a definition of the confidence interval, or, in spaces of dimension greater than one, the confidence region. The definition is a concise version of that given by Jerzy Neyman in his 1937 paper to the Royal Society. Let the parameter be $\mathfrak{p}$ and the statistic be $\mathfrak{s}$. Each possible parameter value $p$ is associated with an acceptance region $\mathcal{A}(p,\alpha)$ for which $\mathrm{prob}(\mathfrak{s} \in \mathcal{A}(p,\alpha) | \mathfrak{p} = p, \mathcal{I}) = \alpha$, with $\alpha$ being the confidence coefficient, or confidence level (typically 0.95), and $\mathcal{I}$ being the background information which we have to define our probabilities. The confidence region for $\mathfrak{p}$, given $\mathfrak{s} = s$, is then $\mathcal{C}(s,\alpha) = \{p | s \in \mathcal{A}(p,\alpha)\}$. In other words, the parameter values which form the confidence region are just those whose corresponding $\alpha$-probability region of the sample space contains the statistic. Now consider that for any possible parameter value $p$: \begin{align} \int{[p \in \mathcal{C}(s,\alpha)]\:\mathrm{prob}(\mathfrak{s} = s | \mathfrak{p} = p, \mathcal{I})}\:ds &= \int{[s \in \mathcal{A}(p,\alpha)]\:\mathrm{prob}(\mathfrak{s} = s | \mathfrak{p} = p, \mathcal{I})}\:ds \\ &= \alpha \end{align} where the square brackets are Iverson brackets. This is the key result for a confidence interval or region. It says that the expectation of $[p \in \mathcal{C}(s,\alpha)]$, under the sampling distribution conditional on $p$, is $\alpha$. This result is guaranteed by the construction of the acceptance regions, and moreover it applies to $\mathfrak{p}$, because $\mathfrak{p}$ is a possible parameter value. However, it is not a probability statement about $\mathfrak{p}$, because expectations are not probabilities! The probability for which that expectation is commonly mistaken is the probability, conditional on $\mathfrak{s} = s$, that the parameter lies in the confidence region: $$ \mathrm{prob}(\mathfrak{p} \in \mathcal{C}(s,\alpha) | \mathfrak{s} = s, \mathcal{I}) = \frac{\int_{\mathcal{C}(s,\alpha)} \mathrm{prob}(\mathfrak{s} = s | \mathfrak{p} = p, \mathcal{I}) \:\mathrm{prob}(\mathfrak{p} = p | \mathcal{I}) \: dp}{\int \mathrm{prob}(\mathfrak{s} = s | \mathfrak{p} = p, \mathcal{I}) \:\mathrm{prob}(\mathfrak{p} = p | \mathcal{I}) \: dp} $$ This probability reduces to $\alpha$ only for certain combinations of information $\mathcal{I}$ and acceptance regions $\mathcal{A}(p,\alpha)$. For example, if the prior is uniform and the sampling distribution is symmetric in $s$ and $p$ (e.g. 
a Gaussian with $p$ as the mean), then: \begin{align} \mathrm{prob}(\mathfrak{p} \in \mathcal{C}(s,\alpha) | \mathfrak{s} = s, \mathcal{I}) &= \frac{\int_{\mathcal{C}(s,\alpha)} \mathrm{prob}(\mathfrak{s} = p | \mathfrak{p} = s, \mathcal{I}) \: dp}{\int \mathrm{prob}(\mathfrak{s} = p | \mathfrak{p} = s, \mathcal{I}) \: dp} \\ &= \mathrm{prob}(\mathfrak{s} \in \mathcal{C}(s,\alpha) | \mathfrak{p} = s, \mathcal{I}) \\ &= \mathrm{prob}(s \in \mathcal{A}(\mathfrak{s},\alpha) | \mathfrak{p} = s, \mathcal{I}) \end{align} If in addition the acceptance regions are such that $s \in \mathcal{A} (\mathfrak{s},\alpha) \iff \mathfrak{s} \in \mathcal{A}(s,\alpha)$, then: \begin{align} \mathrm{prob}(\mathfrak{p} \in \mathcal{C}(s,\alpha) | \mathfrak{s} = s, \mathcal{I}) &= \mathrm{prob}(\mathfrak{s} \in \mathcal{A}(s,\alpha) | \mathfrak{p} = s, \mathcal{I}) \\ &= \alpha \end{align} The textbook example of estimating a population mean with a standard confidence interval constructed about a normal statistic is a special case of the preceding assumptions. Therefore the standard 95% confidence interval does contain the mean with probability 0.95; but this correspondence does not generally hold.
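To make the closing remark concrete (a specialization of my own, assuming a single Gaussian observation with known unit variance): take $\mathcal{A}(p, 0.95) = [p - 1.96,\ p + 1.96]$, so that $\mathrm{prob}(\mathfrak{s} \in \mathcal{A}(p,0.95) \mid \mathfrak{p} = p, \mathcal{I}) = 0.95$. Then $$\mathcal{C}(s, 0.95) = \{p \mid s \in \mathcal{A}(p, 0.95)\} = [s - 1.96,\ s + 1.96].$$ Here the sampling density is symmetric in $s$ and $p$, and $s \in \mathcal{A}(\mathfrak{s},\alpha) \iff \mathfrak{s} \in \mathcal{A}(s,\alpha)$ since both say $|s - \mathfrak{s}| \le 1.96$; so, under a uniform prior, the posterior probability that $\mathfrak{p}$ lies in the confidence region is indeed $0.95$, exactly as in the textbook case described above.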
Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?
What one should not say when using frequentist inference is, "There is a 95% probability that the unknown fixed true theta is within the computed confidence interval." To the frequentist, probability describes the emergent pattern over many (observable!) samples and is not a statement about a single event. However, understanding the long-run emergent pattern gives us confidence in what to expect in a single event. The key is to replace "probability" with "confidence," i.e. "I am 95% confident that the unknown fixed true theta is within the computed confidence interval." This is analogous to knowing the bias of a coin is 0.95 in favor of heads (95% of the time the coin lands heads) and the confidence this knowledge of the long-run proportion imbues regarding the outcome of a single flip. If asked how confident you are that the coin will land heads (or has already landed heads), you would say you are 95% confident based on its long-run performance. To the frequentist, the limiting proportion is the probability, and our confidence is a result of knowing this limiting proportion. To the Bayesian, the long-run emergent pattern over many samples is not a probability. The belief of the experimenter is the probability. The Bayesian is also willing to make (belief) probability statements about an unobservable population parameter without any connection to sampling. Such statements are not verifiable statements about the actual parameter, the hypothesis, or the experiment. They are statements about the experimenter. The frequentist is not willing to make such statements. Here is a related thread showing the interpretation of frequentist confidence and Bayesian belief in the context of a COVID screening test. Here is a related thread comparing frequentist and Bayesian inference for a binomial proportion near 0 or 1. To the frequentist, the Bayesian posterior can be viewed as a crude approximate p-value function showing p-values and confidence intervals of all levels.
Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?
In his answer, Dikran Marsupial provides the following example as evidence that no confidence interval is admissible as a set of plausible parameter values consistent with the observed data: Let the parameter of interest be $\theta$ and the data $D$, a pair of points $x_1$ and $x_2$ drawn independently from the following distribution: $p(x|\theta) = \left\{\begin{array}{cl} 1/2 & x = \theta,\\ 1/2 & x = \theta + 1, \\ 0 & \mathrm{otherwise}\end{array}\right.$ If $\theta$ is $39$, then we would expect to see the datasets $(39,39)$, $(39,40)$, $(40,39)$ and $(40,40)$ all with equal probability $1/4$. We are then asked to consider the confidence interval $[\theta_\mathrm{min}(D),\theta_\mathrm{max}(D)] = [\mathrm{min}(x_1,x_2), \mathrm{max}(x_1,x_2)]$ and informed that this will correctly cover the unknown fixed true $\theta$ $75\%$ of the time in repeated sampling. We are also informed that for an observed data set, $D=\{29,29\}$, the posterior belief probabilities for $\theta=28$ and $\theta=29$ are $p(\theta=28|D) = p(\theta=29|D) = 1/2$ (without reference to a prior), while the $75\%$ confidence interval is $\theta\in(29)$. Dikran Marsupial claims that, since the confidence level of the confidence interval is a statement about repeated experiments, it does not allow one to infer the unknown fixed true $\theta$ based on a particular sample. He further claims that only Bayesian belief is capable of such inference based on a sample.

It is best to view a confidence interval as the inversion of a hypothesis test, especially when dealing with a discrete parameter space. For this example we can use the entire data set as the test statistic when calculating the p-value. For $H_0: \theta \le 27$ or $H_0: \theta \ge 30$, the probability of the observed result, $D=\{29,29\}$, or something more extreme is $0$, so we can rule out these hypotheses without error. We can therefore construct the $100\%$ confidence interval $\theta \in(28,29)$. This directly contradicts Dikran's claim that a confidence interval does not allow one to infer the unknown fixed true $\theta$ based on a particular sample. This interval was constructed without any prior belief.

The remaining hypotheses available for constructing a narrower confidence interval are $H: \theta=28$ and $H:\theta=29$. Under $H_0: \theta=28$, the upper-tailed probability of the observed result, $D=\{29,29\}$, or something more extreme is $0.25$. One conclusion is to "rule out" $H_0: \theta=28$ at the $0.25$ level in favor of $H_1:\theta=29$, producing the $75\%$ confidence interval $\theta \in (29)$. Likewise, under $H_0: \theta=29$, the lower-tailed probability of the observed result, $D=\{29,29\}$, or something more extreme is $0.25$. Another conclusion is to "rule out" $H_0: \theta=29$ at the $0.25$ level in favor of $H_1:\theta=28$, producing the $75\%$ confidence interval $\theta \in (28)$.

The confidence level of these intervals is not a measure of the experimenter's belief; it is a restatement of the p-value and a measure of the interval's performance over repeated experiments. This does not preclude the confidence interval as a method for performing inference on a parameter based on a particular sample. Dikran's posterior belief probabilities and credible intervals can instead be viewed as crude approximate p-values and confidence intervals. The $100\%$ credible interval is $(28,29)$, the posterior probability "ruling out" $H_0: \theta=28$ is $0.5$, and the posterior probability "ruling out" $H_0: \theta=29$ is $0.5$.
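A short R enumeration (my own check of the numbers quoted above) confirms the $75\%$ coverage of $[\mathrm{min}(x_1,x_2), \mathrm{max}(x_1,x_2)]$ and the two tail probabilities of $0.25$ used for the $75\%$ intervals:

# Enumerate the sampling distribution for the example above:
# each x is theta or theta + 1 with probability 1/2.
theta <- 39
datasets <- expand.grid(x1 = theta + 0:1, x2 = theta + 0:1)  # four outcomes, prob 1/4 each

# Coverage of the interval [min(x1, x2), max(x1, x2)]
covers <- with(datasets, pmin(x1, x2) <= theta & theta <= pmax(x1, x2))
mean(covers)   # 0.75

# p-value ingredients for the observed data D = {29, 29}:
0.5^2          # under theta = 28, P(both observations equal theta + 1) = 0.25
0.5^2          # under theta = 29, P(both observations equal theta)     = 0.25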
Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?
There are some interesting answers here, but I thought I'd add a little hands-on demonstration using R. We recently used this code in a stats course to highlight how confidence intervals work. Here's what the code does:

1 - It draws 1000 samples (each of size n = 10) from a known normal distribution.
2 - It calculates the 95% CI for the mean of each sample.
3 - It asks whether or not each sample's CI includes the true mean.
4 - It reports in the console the fraction of CIs that included the true mean.

I just ran the script a bunch of times and it's actually not too uncommon to find that less than 94% of the CIs contained the true mean. At least to me, this helps dispel the idea that a confidence interval has a 95% probability of containing the true parameter.

# In the following code, we simulate the process of
# sampling from a distribution and calculating
# a confidence interval for the mean of that
# distribution. How often do the confidence
# intervals actually include the mean? Let's see!
#
# You can change the number of replicates in the
# first line to change the number of times the
# loop is run (and the number of confidence intervals
# that you simulate).
#
# The results from each simulation are saved to a
# data frame. In the data frame, each row represents
# the results from one simulation or replicate of the
# loop. There are three columns in the data frame,
# one which lists the lower confidence limits, one with
# the higher confidence limits, and a third column, which
# I called "Valid", which is either TRUE or FALSE
# depending on whether or not that simulated confidence
# interval includes the true mean of the distribution.
#
# To see the results of the simulation, run the whole
# code at once, from "start" to "finish" and look in the
# console to find the answer to the question.

# "start"
replicates <- 1000

conf.int.low   <- rep(NA, replicates)
conf.int.high  <- rep(NA, replicates)
conf.int.check <- rep(NA, replicates)

for (i in 1:replicates) {
  n        <- 10
  mu       <- 70
  variance <- 25
  sigma    <- sqrt(variance)
  sample   <- rnorm(n, mu, sigma)

  se.mean    <- sigma / sqrt(n)
  sample.avg <- mean(sample)
  prob       <- 0.95
  alpha      <- 1 - prob
  q.alpha    <- qnorm(1 - alpha / 2)
  low.95     <- sample.avg - q.alpha * se.mean
  high.95    <- sample.avg + q.alpha * se.mean

  conf.int.low[i]   <- low.95
  conf.int.high[i]  <- high.95
  conf.int.check[i] <- low.95 < mu & mu < high.95
}

# Collect the intervals in a data frame
ci.dataframe <- data.frame(
  LowerCI = conf.int.low,
  UpperCI = conf.int.high,
  Valid   = conf.int.check
)

# Take a peek at the top of the data frame
head(ci.dataframe)

# What fraction of the intervals included the true mean?
ci.fraction <- length(which(conf.int.check, useNames = TRUE)) / replicates
ci.fraction
# "finish"

Hope this helps!
347
Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?
I've always wondered this myself. My statistics background is limited, but here are the two different thoughts that made the difference clear to me. Suppose you flip a fair coin 20 times and get 18 heads. Does your confidence interval have a 95% chance of containing 10 (the expected number of heads for a fair coin)? Obviously not. The probability only works the other way. Second example: you run one experiment and get a CI from 3-6. You perform the same experiment again and your CI is from 4-7. You can't then use Bayesian analysis to combine those two results, or else you'd get wacky conclusions like the true mean being 19 times more likely to lie between 4-6 than between 3-4 or 6-7.
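To make the coin example concrete, here is a small R check using the exact binomial interval (one of several reasonable interval choices). It is phrased in terms of the heads probability, where a fair coin corresponds to 0.5, i.e. the 10 expected heads mentioned above.

# Exact 95% confidence interval for the heads probability after 18 heads in 20 flips.
binom.test(18, 20)$conf.int
# Roughly (0.68, 0.99): once the data are in hand, this particular interval either
# contains the true value (0.5 for a fair coin) or it does not -- here it does not.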
348
What is batch size in neural network?
The batch size defines the number of samples that will be propagated through the network. For instance, let's say you have 1050 training samples and you want to set up a batch_size equal to 100. The algorithm takes the first 100 samples (from 1st to 100th) from the training dataset and trains the network. Next, it takes the second 100 samples (from 101st to 200th) and trains the network again. We can keep doing this procedure until we have propagated all samples through the network. A problem might arise with the last set of samples: in our example we've used 1050, which is not evenly divisible by 100. The simplest solution is just to take the final 50 samples and train the network on them.

Advantages of using a batch size < number of all samples: It requires less memory. Since you train the network using fewer samples at a time, the overall training procedure requires less memory. That's especially important if you are not able to fit the whole dataset in your machine's memory. Typically networks also train faster with mini-batches, because we update the weights after each propagation. In our example we've propagated 11 batches (10 of them had 100 samples and 1 had 50 samples) and after each of them we've updated our network's parameters. If we used all samples in a single propagation we would make only 1 update of the network's parameters.

Disadvantages of using a batch size < number of all samples: The smaller the batch, the less accurate the estimate of the gradient will be. In a plot of the two, the direction of the mini-batch gradient (green) fluctuates much more than the direction of the full-batch gradient (blue). Stochastic gradient descent is just a mini-batch with batch_size equal to 1; in that case, the gradient changes its direction even more often than a mini-batch gradient.
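Here is a minimal R sketch of just the batch bookkeeping described above (indices only, no actual network), assuming 1050 samples and a batch size of 100:

# 10 full batches of 100 plus a final batch of 50, i.e. 11 parameter updates per epoch.
n          <- 1050
batch_size <- 100
starts  <- seq(1, n, by = batch_size)
batches <- lapply(starts, function(s) s:min(s + batch_size - 1, n))
length(batches)          # 11 batches per epoch
sapply(batches, length)  # 100, 100, ..., 100, 50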
349
What is batch size in neural network?
In the neural network terminology:

one epoch = one forward pass and one backward pass of all the training examples.

batch size = the number of training examples in one forward/backward pass. The higher the batch size, the more memory space you'll need.

number of iterations = number of passes, each pass using [batch size] number of examples. To be clear, one pass = one forward pass + one backward pass (we do not count the forward pass and backward pass as two different passes).

Example: if you have 1000 training examples, and your batch size is 500, then it will take 2 iterations to complete 1 epoch.

FYI: Tradeoff batch size vs. number of iterations to train a neural network
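A one-line sketch of the arithmetic, in R for consistency with the other code in this thread:

# Iterations (parameter updates) needed to complete one epoch.
iterations_per_epoch <- function(n_examples, batch_size) ceiling(n_examples / batch_size)
iterations_per_epoch(1000, 500)   # 2, as in the example above
iterations_per_epoch(1050, 100)   # 11, when the last batch is smaller than the rest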
350
What is batch size in neural network?
The question was asked a while ago, but I think people are still stumbling across it. For me it helped to know about the mathematical background in order to understand batching and where the advantages/disadvantages mentioned in itdxer's answer come from. So please take this as a complementary explanation to the accepted answer.

Consider Gradient Descent as an optimization algorithm to minimize your loss function $J(\theta)$. The updating step in Gradient Descent is given by $$\theta_{k+1} = \theta_{k} - \alpha \nabla J(\theta)$$ For simplicity let's assume you only have 1 parameter ($n=1$), but you have a total of 1050 training samples ($m = 1050$) as suggested by itdxer.

Full-Batch Gradient Descent

In Full-Batch Gradient Descent one computes the gradient for all training samples first (represented by the sum in the equation below; here the batch comprises all $m$ samples, i.e. the full batch) and then updates the parameter: $$\theta_{k+1} = \theta_{k} - \alpha \sum^m_{j=1} \nabla J_j(\theta)$$ This is what is described in the Wikipedia excerpt from the OP. For a large number of training samples, the updating step becomes very expensive since the gradient has to be evaluated for each summand.

Mini-Batch Gradient Descent

In Mini-Batch Gradient Descent we apply the same equation but compute the gradient for batches of the training samples only (here the batch comprises a subset $b$ of all training samples $m$, thus mini-batch) before updating the parameter: $$\theta_{k+1} = \theta_{k} - \alpha \sum^b_{j=1} \nabla J_j(\theta)$$ Let's say we divide our 1050 training samples into 50 batches, each comprising 21 training samples ($b = 21$). Then we would evaluate the equation 50 times (once for each batch), and each time we would sum up the gradients for 21 training samples before updating the parameter.

Stochastic Gradient Descent

In Stochastic Gradient Descent one computes the gradient for one training sample and updates the parameter immediately. Basically, it is mini-batch with batch size = 1, as already mentioned by itdxer. These two steps are repeated for all training samples: for each sample $j$ compute $$\theta_{k+1} = \theta_{k} - \alpha \nabla J_j(\theta)$$ One updating step is less expensive since the gradient is only evaluated for a single training sample $j$.

Difference between the approaches

Updating speed: Batch gradient descent tends to converge more slowly because the gradient has to be computed for all training samples before updating. Within the same number of computation steps, Stochastic Gradient Descent has already updated the parameter multiple times. But why should we then even choose Batch Gradient Descent?

Convergence direction: Faster updating speed comes at the cost of lower "accuracy". Since in Stochastic Gradient Descent we only incorporate a single training sample to estimate the gradient, it does not converge as directly as batch gradient descent. One could say that the amount of information in each updating step is lower in SGD compared to BGD. The less direct convergence is nicely depicted in itdxer's answer: Full-Batch has the most direct route of convergence, whereas mini-batch or stochastic fluctuate a lot more. Also, with SGD it can theoretically happen that the solution never fully converges.

Memory capacity: As pointed out by itdxer, feeding training samples as batches requires memory capacity to load the batches. The greater the batch, the more memory capacity is required.
Feeding a full batch would require a lot of memory for large datasets, and feeding each sample individually would take the algorithm a long time to converge. Thus, we typically choose a mini-batch (some batch size between one and full).

Summary

In my example I used Gradient Descent and no particular loss function, but the concept stays the same since optimization on computers basically always comprises iterative approaches. So, by batching you have influence over training speed (smaller batch size) vs. gradient estimation accuracy (larger batch size). By choosing the batch size you define how many training samples are combined to estimate the gradient before updating the parameter(s).
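Below is a minimal R sketch of the three update schemes for a one-parameter least-squares problem. One deliberate deviation: I use the average gradient over each batch rather than the raw sum in the equations above, so that a single learning rate works for every batch size; the data, learning rate, and batch sizes are illustrative choices only.

# One-parameter least-squares fit by full-batch, mini-batch, and stochastic updates.
set.seed(42)
m <- 1050
x <- runif(m)
y <- 3 * x + rnorm(m, sd = 0.1)           # the optimum is near theta = 3

grad <- function(theta, idx) mean((theta * x[idx] - y[idx]) * x[idx])

run <- function(batch_size, epochs = 5, alpha = 0.5) {
  theta <- 0
  updates <- 0
  for (e in seq_len(epochs)) {
    for (s in seq(1, m, by = batch_size)) {
      idx <- s:min(s + batch_size - 1, m)
      theta <- theta - alpha * grad(theta, idx)   # one parameter update per batch
      updates <- updates + 1
    }
  }
  c(theta = theta, updates = updates)
}

run(batch_size = m)    # full batch:  1 update per epoch
run(batch_size = 21)   # mini-batch: 50 updates per epoch
run(batch_size = 1)    # stochastic: 1050 updates per epoch

The update counts (1, 50, and 1050 per epoch) are the point; the exact parameter values reached depend on the learning rate and the noise in the data.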
351
What is batch size in neural network?
When solving an optimization problem with a CPU or a GPU, you iteratively apply an algorithm over some input data. In each of these iterations you usually update a metric of your problem by doing some calculations on the data. Now, when the size of your data is large, it might take a considerable amount of time to complete every iteration, and it may consume a lot of resources. So sometimes you choose to apply these iterative calculations to a portion of the data to save time and computational resources. This portion is the batch_size, and the process is called (in neural network lingo) batch data processing. When you apply your computations on all your data, then you do online data processing. I guess the terminology comes from the 60s, and even before. Does anyone remember the .bat DOS files? But of course the concept evolved to mean a chunk or portion of the data to be used.
352
What is batch size in neural network?
The documentation for Keras about batch size can be found under the fit function on the Models (functional API) page: "batch_size: Integer or None. Number of samples per gradient update. If unspecified, batch_size will default to 32." If you have a small dataset, it would be best to make the batch size equal to the size of the training data. First try a small batch size and then increase it to save time. As itdxer mentioned, there's a tradeoff between accuracy and speed.
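For illustration only, here is a hypothetical call through the R interface to Keras (the keras package); the model, the data, and the batch size of 50 are placeholders, and the argument names follow the fit documentation quoted above.

library(keras)
# Placeholder data: 1000 samples with 20 features each.
x_train <- matrix(rnorm(1000 * 20), ncol = 20)
y_train <- rnorm(1000)
model <- keras_model_sequential()
model %>% layer_dense(units = 1, input_shape = c(20))
model %>% compile(optimizer = "sgd", loss = "mse")
# 1000 samples with batch_size = 50 gives 20 gradient updates per epoch;
# omitting batch_size falls back to the default of 32 quoted above.
model %>% fit(x_train, y_train, epochs = 5, batch_size = 50)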
353
What is batch size in neural network?
Batch size is a hyperparameter that defines the number of samples to work through before updating the internal model parameters.
354
What is the meaning of p values and t values in statistical tests?
Understanding $p$-value

Suppose that you want to test the hypothesis that the average height of male students at your University is $5$ ft $7$ inches. You collect heights of $100$ students selected at random and compute the sample mean (say it turns out to be $5$ ft $9$ inches). Using an appropriate formula/statistical routine you compute the $p$-value for your hypothesis and say it turns out to be $0.06$. In order to interpret $p=0.06$ appropriately, we should keep several things in mind: The first step under classical hypothesis testing is the assumption that the hypothesis under consideration is true. (In our context, we assume that the true average height is $5$ ft $7$ inches.) Imagine doing the following calculation: Compute the probability that the sample mean is greater than $5$ ft $9$ inches assuming that our hypothesis is in fact correct (see point 1). In other words, we want to know $$\mathrm{P}(\mathrm{Sample\: mean} \ge 5 \:\mathrm{ft} \:9 \:\mathrm{inches} \:|\: \mathrm{True\: value} = 5 \:\mathrm{ft}\: 7\: \mathrm{inches}).$$ The calculation in step 2 is what is called the $p$-value. Therefore, a $p$-value of $0.06$ would mean that if we were to repeat our experiment many, many times (each time we select $100$ students at random and compute the sample mean) then $6$ times out of $100$ we can expect to see a sample mean greater than or equal to $5$ ft $9$ inches. Given the above understanding, should we still retain our assumption that our hypothesis is true (see step 1)? Well, a $p=0.06$ indicates that one of two things has happened: (A) Either our hypothesis is correct and an extremely unlikely event has occurred (e.g., all $100$ students are student athletes) or (B) Our assumption is incorrect and the sample we have obtained is not that unusual. The traditional way to choose between (A) and (B) is to choose an arbitrary cut-off for $p$. We choose (A) if $p > 0.05$ and (B) if $p < 0.05$.
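The example does not state the population spread, so here is the step-2 calculation in R with a purely hypothetical population standard deviation; the point is the mechanics, not reproducing the $p = 0.06$ above (which corresponds to a different, unstated spread).

# Step-2 calculation with a hypothetical population SD (sigma), in inches.
n     <- 100
mu0   <- 67    # hypothesised mean: 5 ft 7 in
xbar  <- 69    # observed sample mean: 5 ft 9 in
sigma <- 10    # assumed population SD, purely for illustration
# One-sided p-value from the normal sampling distribution of the mean:
pnorm(xbar, mean = mu0, sd = sigma / sqrt(n), lower.tail = FALSE)   # about 0.023 here
# The "repeat the experiment many times" reading, by simulation under the hypothesis:
set.seed(1)
sample.means <- replicate(10000, mean(rnorm(n, mu0, sigma)))
mean(sample.means >= xbar)   # close to the pnorm value above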
355
What is the meaning of p values and t values in statistical tests?
A Dialog Between a Teacher and a Thoughtful Student Humbly submitted in the belief that not enough crayons have been used so far in this thread. A brief illustrated synopsis appears at the end. Student: What does a p-value mean? A lot of people seem to agree it's the chance we will "see a sample mean greater than or equal to" a statistic or it's "the probability of observing this outcome ... given the null hypothesis is true" or where "my sample's statistic fell on [a simulated] distribution" and even "the probability of observing a test statistic at least as large as the one calculated assuming the null hypothesis is true". Teacher: Properly understood, all those statements are correct in many circumstances. Student: I don't see how most of them are relevant. Didn't you teach us that we have to state a null hypothesis $H_0$ and an alternative hypothesis $H_A$? How are they involved in these ideas of "greater than or equal to" or "at least as large" or the very popular "more extreme"? Teacher: Because it can seem complicated in general, would it help for us to explore a concrete example? Student: Sure. But please make it a realistic but simple one if you can. Teacher: This theory of hypothesis testing historically began with the need of astronomers to analyze observational errors, so how about starting there. I was going through some old documents one day where a scientist described his efforts to reduce the measurement error in his apparatus. He had taken a lot of measurements of a star in a known position and recorded their displacements ahead of or behind that position. To visualize those displacements, he drew a histogram that--when smoothed a little--looked like this one. Student: I remember how histograms work: the vertical axis is labeled "Density" to remind me that the relative frequencies of the measurements are represented by area rather than height. Teacher: That's right. An "unusual" or "extreme" value would be located in a region with pretty small area. Here's a crayon. Do you think you could color in a region whose area is just one-tenth the total? Student: Sure; that's easy. [Colors in the figure.] Teacher: Very good! That looks like about 10% of the area to me. Remember, though, that the only areas in the histogram that matter are those between vertical lines: they represent the chance or probability that the displacement would be located between those lines on the horizontal axis. That means you needed to color all the way down to the bottom and that would be over half the area, wouldn't it? Student: Oh, I see. Let me try again. I'm going to want to color in where the curve is really low, won't I? It's lowest at the two ends. Do I have to color in just one area or would it be ok to break it into several parts? Teacher: Using several parts is a smart idea. Where would they be? Student (pointing): Here and here. Because this crayon isn't very sharp, I used a pen to show you the lines I'm using. Teacher: Very nice! Let me tell you the rest of the story. The scientist made some improvements to his device and then he took additional measurements. He wrote that the displacement of the first one was only $0.1$, which he thought was a good sign, but being a careful scientist he proceeded to take more measurements as a check. Unfortunately, those other measurements are lost--the manuscript breaks off at this point--and all we have is that single number, $0.1$. Student: That's too bad. But isn't that much better than the wide spread of displacements in your figure? 
Teacher: That's the question I would like you to answer. To start with, what should we posit as $H_0$? Student: Well, a sceptic would wonder whether the improvements made to the device had any effect at all. The burden of proof is on the scientist: he would want to show that the sceptic is wrong. That makes me think the null hypothesis is kind of bad for the scientist: it says that all the new measurements--including the value of $0.1$ we know about--ought to behave as described by the first histogram. Or maybe even worse than that: they might be even more spread out. Teacher: Go on, you're doing well. Student: And so the alternative is that the new measurements would be less spread out, right? Teacher: Very good! Could you draw me a picture of what a histogram with less spread would look like? Here's another copy of the first histogram; you can draw on top of it as a reference. Student (drawing): I'm using a pen to outline the new histogram and I'm coloring in the area beneath it. I have made it so most of the curve is close to zero on the horizontal axis and so most of its area is near a (horizontal) value of zero: that's what it means to be less spread out or more precise. Teacher: That's a good start. But remember that a histogram showing chances should have a total area of $1$. The total area of the first histogram therefore is $1$. How much area is inside your new histogram? Student: Less than half, I think. I see that's a problem, but I don't know how to fix it. What should I do? Teacher: The trick is to make the new histogram higher than the old so that its total area is $1$. Here, I'll show you a computer-generated version to illustrate. Student: I see: you stretched it out vertically so its shape didn't really change but now the red area and gray area (including the part under the red) are the same amounts. Teacher: Right. You are looking at a picture of the null hypothesis (in blue, spread out) and part of the alternative hypothesis (in red, with less spread). Student: What do you mean by "part" of the alternative? Isn't it just the alternative hypothesis? Teacher: Statisticians and grammar don't seem to mix. :-) Seriously, what they mean by a "hypothesis" usually is a whole big set of possibilities. Here, the alternative (as you stated so well before) is that the measurements are "less spread out" than before. But how much less? There are many possibilities. Here, let me show you another. I drew it with yellow dashes. It's in between the previous two. Student: I see: you can have different amounts of spread but you don't know in advance how much the spread will really be. But why did you make the funny shading in this picture? Teacher: I wanted to highlight where and how the histograms differ. I shaded them in gray where the alternative histograms are lower than the null and in red where the alternatives are higher. Student: Why would that matter? Teacher: Do you remember how you colored the first histogram in both the tails? [Looking through the papers.] Ah, here it is. Let's color this picture in the same way. Student: I remember: those are the extreme values. I found the places where the null density was as small as possible and colored in 10% of the area there. Teacher: Tell me about the alternatives in those extreme areas. Student: It's hard to see, because the crayon covered it up, but it looks like there's almost no chance for any alternative to be in the areas I colored. Their histograms are right down against value axis and there's no room for any area beneath them. 
Teacher: Let's continue that thought. If I told you, hypothetically, that a measurement had a displacement of $-2$, and asked you to pick which of these three histograms was the one it most likely came from, which would it be? Student: The first one--the blue one. It's the most spread out and it's the only one where $-2$ seems to have any chance of occurring. Teacher: And what about the value of $0.1$ in the manuscript? Student: Hmmm... that's a different story. All three histograms are pretty high above the ground at $0.1$. Teacher: OK, fair enough. But suppose I told you the value was somewhere near $0.1$, like between $0$ and $0.2$. Does that help you read some probabilities off of these graphs? Student: Sure, because I can use areas. I just have to estimate the areas underneath each curve between $0$ and $0.2$. But that looks pretty hard. Teacher: You don't need to go that far. Can you just tell which area is the largest? Student: The one beneath the tallest curve, of course. All three areas have the same base, so the taller the curve, the more area there is beneath it and the base. That means the tallest histogram--the one I drew, with the red dashes--is the likeliest one for a displacement of $0.1$. I think I see where you're going with this, but I'm a little concerned: don't I have to look at all the histograms for all the alternatives, not just the one or two shown here? How could I possibly do that? Teacher: You're good at picking up patterns, so tell me: as the measurement apparatus is made more and more precise, what happens to its histogram? Student: It gets narrower--oh, and it has to get taller, too, so its total area stays the same. That makes it pretty hard to compare the histograms. The alternative ones are all higher than the null right at $0$, that's obvious. But at other values sometimes the alternatives are higher and sometimes they are lower! For example, [pointing at a value near $3/4$], right here my red histogram is the lowest, the yellow histogram is the highest, and the original null histogram is between them. But over on the right the null is the highest. Teacher: In general, comparing histograms is a complicated business. To help us do it, I have asked the computer to make another plot: it has divided each of the alternative histogram heights (or "densities") by the null histogram height, creating values known as "likelihood ratios." As a result, a value greater than $1$ means the alternative is more likely, while a value less than $1$ means the alternative is less likely. It has drawn yet one more alternative: it's more spread out than the other two, but still less spread out than the original apparatus was. Teacher (continuing): Could you show me where the alternatives tend to be more likely than the null? Student (coloring): Here in the middle, obviously. And because these are not histograms anymore, I guess we should be looking at heights rather than areas, so I'm just marking a range of values on the horizontal axis. But how do I know how much of the middle to color in? Where do I stop coloring? Teacher: There's no firm rule. It all depends on how we plan to use our conclusions and how fierce the sceptics are. But sit back and think about what you have accomplished: you now realize that outcomes with large likelihood ratios are evidence for the alternative and outcomes with small likelihood ratios are evidence against the alternative. 
What I will ask you to do is to color in an area that, insofar as is possible, has a small chance of occurring under the null hypothesis and a relatively large chance of occurring under the alternatives. Going back to the first diagram you colored, way back at the start of our conversation, you colored in the two tails of the null because they were "extreme." Would they still do a good job? Student: I don't think so. Even though they were pretty extreme and rare under the null hypothesis, they are practically impossible for any of the alternatives. If my new measurement were, say $3.0$, I think I would side with the sceptic and deny that any improvement had occurred, even though $3.0$ was an unusual outcome in any case. I want to change that coloring. Here--let me have another crayon. Teacher: What does that represent? Student: We started out with you asking me to draw in just 10% of the area under the original histogram--the one describing the null. So now I drew in 10% of the area where the alternatives seem more likely to be occurring. I think that when a new measurement is in that area, it's telling us we ought to believe the alternative. Teacher: And how should the sceptic react to that? Student: A sceptic never has to admit he's wrong, does he? But I think his faith should be a little shaken. After all, we arranged it so that although a measurement could be inside the area I just drew, it only has a 10% chance of being there when the null is true. And it has a larger chance of being there when the alternative is true. I just can't tell you how much larger that chance is, because it would depend on how much the scientist improved the apparatus. I just know it's larger. So the evidence would be against the sceptic. Teacher: All right. Would you mind summarizing your understanding so that we're perfectly clear about what you have learned? Student: I learned that to compare alternative hypotheses to null hypotheses, we should compare their histograms. We divide the densities of the alternatives by the density of the null: that's what you called the "likelihood ratio." To make a good test, I should pick a small number like 10% or whatever might be enough to shake a sceptic. Then I should find values where the likelihood ratio is as high as possible and color them in until 10% (or whatever) has been colored. Teacher: And how would you use that coloring? Student: As you reminded me earlier, the coloring has to be between vertical lines. Values (on the horizontal axis) that lie under the coloring are evidence against the null hypothesis. Other values--well, it's hard to say what they might mean without taking a more detailed look at all the histograms involved. Teacher: Going back to the value of $0.1$ in the manuscript, what would you conclude? Student: That's within the area I last colored, so I think the scientist probably was right and the apparatus really was improved. Teacher: One last thing. Your conclusion was based on picking 10% as the criterion, or "size" of the test. Many people like to use 5% instead. Some prefer 1%. What could you tell them? Student: I couldn't do all those tests at once! Well, maybe I could in a way. I can see that no matter what size the test should be, I ought to start coloring from $0$, which is in this sense the "most extreme" value, and work outwards in both directions from there. If I were to stop right at $0.1$--the value actually observed--I think I would have colored in an area somewhere between $0.05$ and $0.1$, say $0.08$. 
The 5% and 1% people could tell right away that I colored too much: if they wanted to color just 5% or 1%, they could, but they wouldn't get as far out as $0.1$. They wouldn't come to the same conclusion I did: they would say there's not enough evidence that a change actually occurred. Teacher: You have just told me what all those quotations at the beginning really mean. It should be obvious from this example that they cannot possibly intend "more extreme" or "greater than or equal" or "at least as large" in the sense of having a bigger value or even having a value where the null density is small. They really mean these things in the sense of large likelihood ratios that you have described. By the way, the number around $0.08$ that you computed is called the "p-value." It can only properly be understood in the way you have described: with respect to an analysis of relative histogram heights--the likelihood ratios. Student: Thank you. I'm not confident I fully understand all of this yet, but you have given me a lot to think about. Teacher: If you would like to go further, take a look at the Neyman-Pearson Lemma. You are probably ready to understand it now. Synopsis Many tests that are based on a single statistic like the one in the dialog will call it "$z$" or "$t$". These are ways of hinting what the null histogram looks like, but they are only hints: what we name this number doesn't really matter. The construction summarized by the student, as illustrated here, shows how it is related to the p-value. The p-value is the smallest test size that would cause an observation of $t=0.1$ to lead to a rejection of the null hypothesis. In this figure, which is zoomed to show detail, the null hypothesis is plotted in solid blue and two typical alternatives are plotted with dashed lines. The region where those alternatives tend to be much larger than the null is shaded in. The shading starts where the relative likelihoods of the alternatives are greatest (at $0$). The shading stops when the observation $t=0.1$ is reached. The p-value is the area of the shaded region under the null histogram: it is the chance, assuming the null is true, of observing an outcome whose likelihood ratios tend to be large regardless of which alternative happens to be true. In particular, this construction depends intimately on the alternative hypothesis. It cannot be carried out without specifying the possible alternatives. For two practical examples of the test described here -- one published, the other hypothetical -- see https://stats.stackexchange.com/a/5408/919. A detailed application of these ideas to testing a median is presented in my post at https://stats.stackexchange.com/a/131489/919.
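For readers who want to reproduce the numbers in the synopsis, here is a small R sketch. The distributional choices are mine, made to mimic the figures: a standard normal null and zero-mean normal alternatives with smaller standard deviations.

# Likelihood ratios (alternative density / null density) are largest at 0 and
# shrink as |t| grows, so the shading works outward from 0.
t.obs   <- 0.1
null.sd <- 1
alt.sds <- c(1/3, 1/2, 2/3)
lr <- function(t, s) dnorm(t, sd = s) / dnorm(t, sd = null.sd)
sapply(alt.sds, function(s) lr(c(0, 0.5, 1, 2), s))   # ratios fall off with |t|

# Shade outward from 0 until the observation is reached; the p-value is the
# null probability of that shaded region.
p.value <- pnorm(t.obs, sd = null.sd) - pnorm(-t.obs, sd = null.sd)
p.value   # about 0.08, the number arrived at in the dialog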
What is the meaning of p values and t values in statistical tests?
A Dialog Between a Teacher and a Thoughtful Student Humbly submitted in the belief that not enough crayons have been used so far in this thread. A brief illustrated synopsis appears at the end. Stud
What is the meaning of p values and t values in statistical tests? A Dialog Between a Teacher and a Thoughtful Student Humbly submitted in the belief that not enough crayons have been used so far in this thread. A brief illustrated synopsis appears at the end. Student: What does a p-value mean? A lot of people seem to agree it's the chance we will "see a sample mean greater than or equal to" a statistic or it's "the probability of observing this outcome ... given the null hypothesis is true" or where "my sample's statistic fell on [a simulated] distribution" and even "the probability of observing a test statistic at least as large as the one calculated assuming the null hypothesis is true". Teacher: Properly understood, all those statements are correct in many circumstances. Student: I don't see how most of them are relevant. Didn't you teach us that we have to state a null hypothesis $H_0$ and an alternative hypothesis $H_A$? How are they involved in these ideas of "greater than or equal to" or "at least as large" or the very popular "more extreme"? Teacher: Because it can seem complicated in general, would it help for us to explore a concrete example? Student: Sure. But please make it a realistic but simple one if you can. Teacher: This theory of hypothesis testing historically began with the need of astronomers to analyze observational errors, so how about starting there. I was going through some old documents one day where a scientist described his efforts to reduce the measurement error in his apparatus. He had taken a lot of measurements of a star in a known position and recorded their displacements ahead of or behind that position. To visualize those displacements, he drew a histogram that--when smoothed a little--looked like this one. Student: I remember how histograms work: the vertical axis is labeled "Density" to remind me that the relative frequencies of the measurements are represented by area rather than height. Teacher: That's right. An "unusual" or "extreme" value would be located in a region with pretty small area. Here's a crayon. Do you think you could color in a region whose area is just one-tenth the total? Student: Sure; that's easy. [Colors in the figure.] Teacher: Very good! That looks like about 10% of the area to me. Remember, though, that the only areas in the histogram that matter are those between vertical lines: they represent the chance or probability that the displacement would be located between those lines on the horizontal axis. That means you needed to color all the way down to the bottom and that would be over half the area, wouldn't it? Student: Oh, I see. Let me try again. I'm going to want to color in where the curve is really low, won't I? It's lowest at the two ends. Do I have to color in just one area or would it be ok to break it into several parts? Teacher: Using several parts is a smart idea. Where would they be? Student (pointing): Here and here. Because this crayon isn't very sharp, I used a pen to show you the lines I'm using. Teacher: Very nice! Let me tell you the rest of the story. The scientist made some improvements to his device and then he took additional measurements. He wrote that the displacement of the first one was only $0.1$, which he thought was a good sign, but being a careful scientist he proceeded to take more measurements as a check. Unfortunately, those other measurements are lost--the manuscript breaks off at this point--and all we have is that single number, $0.1$. Student: That's too bad. 
But isn't that much better than the wide spread of displacements in your figure? Teacher: That's the question I would like you to answer. To start with, what should we posit as $H_0$? Student: Well, a sceptic would wonder whether the improvements made to the device had any effect at all. The burden of proof is on the scientist: he would want to show that the sceptic is wrong. That makes me think the null hypothesis is kind of bad for the scientist: it says that all the new measurements--including the value of $0.1$ we know about--ought to behave as described by the first histogram. Or maybe even worse than that: they might be even more spread out. Teacher: Go on, you're doing well. Student: And so the alternative is that the new measurements would be less spread out, right? Teacher: Very good! Could you draw me a picture of what a histogram with less spread would look like? Here's another copy of the first histogram; you can draw on top of it as a reference. Student (drawing): I'm using a pen to outline the new histogram and I'm coloring in the area beneath it. I have made it so most of the curve is close to zero on the horizontal axis and so most of its area is near a (horizontal) value of zero: that's what it means to be less spread out or more precise. Teacher: That's a good start. But remember that a histogram showing chances should have a total area of $1$. The total area of the first histogram therefore is $1$. How much area is inside your new histogram? Student: Less than half, I think. I see that's a problem, but I don't know how to fix it. What should I do? Teacher: The trick is to make the new histogram higher than the old so that its total area is $1$. Here, I'll show you a computer-generated version to illustrate. Student: I see: you stretched it out vertically so its shape didn't really change but now the red area and gray area (including the part under the red) are the same amounts. Teacher: Right. You are looking at a picture of the null hypothesis (in blue, spread out) and part of the alternative hypothesis (in red, with less spread). Student: What do you mean by "part" of the alternative? Isn't it just the alternative hypothesis? Teacher: Statisticians and grammar don't seem to mix. :-) Seriously, what they mean by a "hypothesis" usually is a whole big set of possibilities. Here, the alternative (as you stated so well before) is that the measurements are "less spread out" than before. But how much less? There are many possibilities. Here, let me show you another. I drew it with yellow dashes. It's in between the previous two. Student: I see: you can have different amounts of spread but you don't know in advance how much the spread will really be. But why did you make the funny shading in this picture? Teacher: I wanted to highlight where and how the histograms differ. I shaded them in gray where the alternative histograms are lower than the null and in red where the alternatives are higher. Student: Why would that matter? Teacher: Do you remember how you colored the first histogram in both the tails? [Looking through the papers.] Ah, here it is. Let's color this picture in the same way. Student: I remember: those are the extreme values. I found the places where the null density was as small as possible and colored in 10% of the area there. Teacher: Tell me about the alternatives in those extreme areas. Student: It's hard to see, because the crayon covered it up, but it looks like there's almost no chance for any alternative to be in the areas I colored. 
Their histograms are right down against value axis and there's no room for any area beneath them. Teacher: Let's continue that thought. If I told you, hypothetically, that a measurement had a displacement of $-2$, and asked you to pick which of these three histograms was the one it most likely came from, which would it be? Student: The first one--the blue one. It's the most spread out and it's the only one where $-2$ seems to have any chance of occurring. Teacher: And what about the value of $0.1$ in the manuscript? Student: Hmmm... that's a different story. All three histograms are pretty high above the ground at $0.1$. Teacher: OK, fair enough. But suppose I told you the value was somewhere near $0.1$, like between $0$ and $0.2$. Does that help you read some probabilities off of these graphs? Student: Sure, because I can use areas. I just have to estimate the areas underneath each curve between $0$ and $0.2$. But that looks pretty hard. Teacher: You don't need to go that far. Can you just tell which area is the largest? Student: The one beneath the tallest curve, of course. All three areas have the same base, so the taller the curve, the more area there is beneath it and the base. That means the tallest histogram--the one I drew, with the red dashes--is the likeliest one for a displacement of $0.1$. I think I see where you're going with this, but I'm a little concerned: don't I have to look at all the histograms for all the alternatives, not just the one or two shown here? How could I possibly do that? Teacher: You're good at picking up patterns, so tell me: as the measurement apparatus is made more and more precise, what happens to its histogram? Student: It gets narrower--oh, and it has to get taller, too, so its total area stays the same. That makes it pretty hard to compare the histograms. The alternative ones are all higher than the null right at $0$, that's obvious. But at other values sometimes the alternatives are higher and sometimes they are lower! For example, [pointing at a value near $3/4$], right here my red histogram is the lowest, the yellow histogram is the highest, and the original null histogram is between them. But over on the right the null is the highest. Teacher: In general, comparing histograms is a complicated business. To help us do it, I have asked the computer to make another plot: it has divided each of the alternative histogram heights (or "densities") by the null histogram height, creating values known as "likelihood ratios." As a result, a value greater than $1$ means the alternative is more likely, while a value less than $1$ means the alternative is less likely. It has drawn yet one more alternative: it's more spread out than the other two, but still less spread out than the original apparatus was. Teacher (continuing): Could you show me where the alternatives tend to be more likely than the null? Student (coloring): Here in the middle, obviously. And because these are not histograms anymore, I guess we should be looking at heights rather than areas, so I'm just marking a range of values on the horizontal axis. But how do I know how much of the middle to color in? Where do I stop coloring? Teacher: There's no firm rule. It all depends on how we plan to use our conclusions and how fierce the sceptics are. But sit back and think about what you have accomplished: you now realize that outcomes with large likelihood ratios are evidence for the alternative and outcomes with small likelihood ratios are evidence against the alternative. 
What I will ask you to do is to color in an area that, insofar as is possible, has a small chance of occurring under the null hypothesis and a relatively large chance of occurring under the alternatives. Going back to the first diagram you colored, way back at the start of our conversation, you colored in the two tails of the null because they were "extreme." Would they still do a good job? Student: I don't think so. Even though they were pretty extreme and rare under the null hypothesis, they are practically impossible for any of the alternatives. If my new measurement were, say $3.0$, I think I would side with the sceptic and deny that any improvement had occurred, even though $3.0$ was an unusual outcome in any case. I want to change that coloring. Here--let me have another crayon. Teacher: What does that represent? Student: We started out with you asking me to draw in just 10% of the area under the original histogram--the one describing the null. So now I drew in 10% of the area where the alternatives seem more likely to be occurring. I think that when a new measurement is in that area, it's telling us we ought to believe the alternative. Teacher: And how should the sceptic react to that? Student: A sceptic never has to admit he's wrong, does he? But I think his faith should be a little shaken. After all, we arranged it so that although a measurement could be inside the area I just drew, it only has a 10% chance of being there when the null is true. And it has a larger chance of being there when the alternative is true. I just can't tell you how much larger that chance is, because it would depend on how much the scientist improved the apparatus. I just know it's larger. So the evidence would be against the sceptic. Teacher: All right. Would you mind summarizing your understanding so that we're perfectly clear about what you have learned? Student: I learned that to compare alternative hypotheses to null hypotheses, we should compare their histograms. We divide the densities of the alternatives by the density of the null: that's what you called the "likelihood ratio." To make a good test, I should pick a small number like 10% or whatever might be enough to shake a sceptic. Then I should find values where the likelihood ratio is as high as possible and color them in until 10% (or whatever) has been colored. Teacher: And how would you use that coloring? Student: As you reminded me earlier, the coloring has to be between vertical lines. Values (on the horizontal axis) that lie under the coloring are evidence against the null hypothesis. Other values--well, it's hard to say what they might mean without taking a more detailed look at all the histograms involved. Teacher: Going back to the value of $0.1$ in the manuscript, what would you conclude? Student: That's within the area I last colored, so I think the scientist probably was right and the apparatus really was improved. Teacher: One last thing. Your conclusion was based on picking 10% as the criterion, or "size" of the test. Many people like to use 5% instead. Some prefer 1%. What could you tell them? Student: I couldn't do all those tests at once! Well, maybe I could in a way. I can see that no matter what size the test should be, I ought to start coloring from $0$, which is in this sense the "most extreme" value, and work outwards in both directions from there. If I were to stop right at $0.1$--the value actually observed--I think I would have colored in an area somewhere between $0.05$ and $0.1$, say $0.08$. 
The 5% and 1% people could tell right away that I colored too much: if they wanted to color just 5% or 1%, they could, but they wouldn't get as far out as $0.1$. They wouldn't come to the same conclusion I did: they would say there's not enough evidence that a change actually occurred. Teacher: You have just told me what all those quotations at the beginning really mean. It should be obvious from this example that they cannot possibly intend "more extreme" or "greater than or equal" or "at least as large" in the sense of having a bigger value or even having a value where the null density is small. They really mean these things in the sense of large likelihood ratios that you have described. By the way, the number around $0.08$ that you computed is called the "p-value." It can only properly be understood in the way you have described: with respect to an analysis of relative histogram heights--the likelihood ratios. Student: Thank you. I'm not confident I fully understand all of this yet, but you have given me a lot to think about. Teacher: If you would like to go further, take a look at the Neyman-Pearson Lemma. You are probably ready to understand it now. Synopsis Many tests that are based on a single statistic like the one in the dialog will call it "$z$" or "$t$". These are ways of hinting what the null histogram looks like, but they are only hints: what we name this number doesn't really matter. The construction summarized by the student, as illustrated here, shows how it is related to the p-value. The p-value is the smallest test size that would cause an observation of $t=0.1$ to lead to a rejection of the null hypothesis. In this figure, which is zoomed to show detail, the null hypothesis is plotted in solid blue and two typical alternatives are plotted with dashed lines. The region where those alternatives tend to be much larger than the null is shaded in. The shading starts where the relative likelihoods of the alternatives are greatest (at $0$). The shading stops when the observation $t=0.1$ is reached. The p-value is the area of the shaded region under the null histogram: it is the chance, assuming the null is true, of observing an outcome whose likelihood ratios tend to be large regardless of which alternative happens to be true. In particular, this construction depends intimately on the alternative hypothesis. It cannot be carried out without specifying the possible alternatives. For two practical examples of the test described here -- one published, the other hypothetical -- see https://stats.stackexchange.com/a/5408/919. A detailed application of these ideas to testing a median is presented in my post at https://stats.stackexchange.com/a/131489/919.
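A small R sketch may help make the synopsis concrete. It assumes, purely for illustration (the dialog never commits to specific distributions), that the null histogram is a standard normal density and that the alternatives are zero-mean normals with smaller spread; under that assumption the construction reproduces the "say $0.08$" the student arrived at.

t_obs  <- 0.1                      # the displacement reported in the manuscript
sd_alt <- c(1/3, 2/3)              # two hypothetical "improved" spreads

# Likelihood ratios: alternative density divided by null density.
x  <- seq(-3, 3, length.out = 601)
lr <- sapply(sd_alt, function(s) dnorm(x, sd = s) / dnorm(x, sd = 1))
matplot(x, lr, type = "l", ylab = "likelihood ratio")   # largest at 0, falling off symmetrically

# So the rejection region grows outward from 0 in both directions, and the
# p-value for t_obs is the null probability of the region colored from 0 out to |t_obs|:
2 * pnorm(t_obs) - 1               # about 0.0797, i.e. roughly 0.08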
356
What is the meaning of p values and t values in statistical tests?
Before touching this topic, I always make sure that students are happy moving between percentages, decimals, odds and fractions. If they are not completely happy with this then they can get confused very quickly. I like to explain hypothesis testing for the first time (and therefore p-values and test statistics) through Fisher's classic tea experiment. I have several reasons for this: (i) I think working through an experiment and defining the terms as we go along makes more sense than just defining all of these terms to begin with. (ii) You don't need to rely explicitly on probability distributions, areas under the curve, etc. to get over the key points of hypothesis testing. (iii) It explains this ridiculous notion of "as or more extreme than those observed" in a fairly sensible manner. (iv) I find students like to understand the history, origins and back story of what they are studying as it makes it more real than some abstract theories. (v) It doesn't matter what discipline or subject the students come from, they can relate to the example of tea. (N.B. Some international students have difficulty with this peculiarly British institution of tea with milk.) [Note: I originally got this idea from Dennis Lindley's wonderful article "The Analysis of Experimental Data: The Appreciation of Tea & Wine" in which he demonstrates why Bayesian methods are superior to classical methods.]

The back story is that Muriel Bristol visits Fisher one afternoon in the 1920's at Rothamsted Experimental Station for a cup of tea. When Fisher put the milk in last she complained, saying that she could tell whether the milk was poured first (or last) and that she preferred the former. To put this to the test he designed his classic tea experiment where Muriel is presented with a pair of tea cups and she must identify which one had the milk added first. This is repeated with six pairs of tea cups. Her choices are either Right (R) or Wrong (W) and her results are: RRRRRW.

Suppose that Muriel is actually just guessing and has no ability to discriminate whatsoever. This is called the Null Hypothesis. According to Fisher the purpose of the experiment is to discredit this null hypothesis. If Muriel is guessing she will identify the tea cup correctly with probability 0.5 on each turn, and as the turns are independent the observed result has probability 0.5$^6$ = 1/64 ≈ 0.016. Fisher then argues that either: (a) the null hypothesis (Muriel is guessing) is true and an event of small probability has occurred, or (b) the null hypothesis is false and Muriel has discriminatory powers. The p-value (or probability value) is the probability of observing this outcome (RRRRRW) given the null hypothesis is true - it's the small probability referred to in (a), above. In this instance it's 0.016. Since events with small probabilities only occur rarely (by definition), situation (b) might be a better explanation of what occurred than situation (a). When we reject the null hypothesis we're in fact accepting the opposite hypothesis, which we call the alternative hypothesis. In this example, "Muriel has discriminatory powers" is the alternative hypothesis.

An important consideration is: what do we class as a "small" probability? What's the cutoff point at which we're willing to say that an event is unlikely? The standard benchmark is 5% (0.05) and this is called the significance level. When the p-value is smaller than the significance level we reject the null hypothesis as being false and accept our alternative hypothesis.
It is common parlance to claim a result is "significant" when the p-value is smaller than the significance level, i.e. when the probability of what we observed occurring, given the null hypothesis is true, is smaller than our cutoff point. It is important to be clear that using 5% is completely subjective (as is using the other common significance levels of 1% and 10%).

Fisher realised that this reasoning doesn't quite work: the 1/64 is the probability of the particular sequence RRRRRW, yet every possible outcome with exactly one wrong pair is equally improbable and equally suggestive of discriminatory powers. The relevant probability for situation (a), above, is therefore 6(0.5)^6 = 6/64 ≈ 0.094, which now is not significant at a significance level of 5%. To overcome this Fisher argued that if 1 error in 6 is considered evidence of discriminatory powers then so are no errors, i.e. outcomes that more strongly indicate discriminatory powers than the one observed should be included when calculating the p-value. This resulted in the following amendment to the reasoning, either: (a) the null hypothesis (Muriel is guessing) is true and the probability of events as, or more, extreme than that observed is small, or (b) the null hypothesis is false and Muriel has discriminatory powers. Back to our tea experiment: we find that the p-value under this set-up is 7(0.5)^6 = 7/64 ≈ 0.109, which still is not significant at the 5% threshold.

I then get students to work with some other examples such as coin tossing to work out whether or not a coin is fair. This drills home the concepts of the null/alternative hypothesis, p-values and significance levels. We then move on to the case of a continuous variable and introduce the notion of a test statistic. As we have already covered the normal distribution, standard normal distribution and the z-transformation in depth, it's merely a matter of bolting together several concepts. As well as calculating test statistics and p-values and making a decision (significant/not significant), I get students to work through published papers in a fill-in-the-missing-blanks game.
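For readers who want to check the arithmetic, here is a minimal R sketch of the calculation above. It assumes the simple model described in this answer -- six independent pairs, each guessed correctly with probability 0.5 under the null -- rather than Fisher's actual fixed-margin (hypergeometric) design, so take it as an illustration of the reasoning rather than a reconstruction of Fisher's own test.

# Probability of the exact sequence RRRRRW under the null
0.5^6                                        # 1/64, about 0.016
# Probability of exactly one wrong pair (in any of the 6 positions)
dbinom(5, size = 6, prob = 0.5)              # 6/64, about 0.094
# p-value: 5 or 6 correct, i.e. results at least as suggestive as the one observed
sum(dbinom(5:6, size = 6, prob = 0.5))       # 7/64, about 0.109
# The same number from a one-sided exact binomial test
binom.test(5, 6, p = 0.5, alternative = "greater")$p.value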
357
What is the meaning of p values and t values in statistical tests?
No amount of verbal explanation or calculations really helped me to understand at a gut level what p-values were, but it really snapped into focus for me once I took a course that involved simulation. That gave me the ability to actually see data generated by the null hypothesis and to plot the means/etc. of simulated samples, then look at where my sample's statistic fell on that distribution. I think the key advantage to this is that it lets students forget about the math and the test statistic distributions for a minute and focus on the concepts at hand. Granted, it required that I learn how to simulate that stuff, which will cause problems for an entirely different set of students. But it worked for me, and I've used simulation countless times to help explain statistics to others with great success (e.g., "This is what your data looks like; this is what a Poisson distribution looks like overlaid. Are you SURE you want to do a Poisson regression?"). This doesn't exactly answer the questions you posed, but for me, at least, it made them trivial.
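In the same spirit, here is a minimal simulation sketch (a toy example of my own, not the course material the author describes): generate many datasets under the null, see where the observed statistic falls in that simulated null distribution, and read off the p-value as a tail proportion.

set.seed(42)
x    <- rnorm(30, mean = 0.4)                         # hypothetical observed sample
obs  <- mean(x)                                       # observed statistic
null <- replicate(10000, mean(rnorm(30, mean = 0)))   # sample means simulated under H0: mu = 0

hist(null, breaks = 50, main = "Sampling distribution of the mean under H0")
abline(v = obs, col = "red", lwd = 2)                 # where our sample's mean falls

mean(abs(null) >= abs(obs))                           # two-sided simulated p-value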
358
What is the meaning of p values and t values in statistical tests?
A nice definition of p-value is "the probability of observing a test statistic at least as large as the one calculated assuming the null hypothesis is true". The problem with that is that it requires an understanding of "test statistic" and "null hypothesis". But, that's easy to get across. If the null hypothesis is true, usually something like "parameter from population A is equal to parameter from population B", and you calculate statistics to estimate those parameters, what is the probability of seeing a test statistic that says, "they're this different"? E.g., If the coin is fair, what is the probability I'd see 60 heads out of 100 tosses? That's testing the null hypothesis, "the coin is fair", or "p = .5" where p is the probability of heads. The test statistic in that case would be the number of heads. Now, I assume that what you're calling "t-value" is a generic "test statistic", not a value from a "t distribution". They're not the same thing, and the term "t-value" isn't (necessarily) widely used and could be confusing. What you're calling "t-value" is probably what I'm calling "test statistic". In order to calculate a p-value (remember, it's just a probability) you need a distribution, and a value to plug into that distribution which will return a probability. Once you do that, the probability you return is your p-value. You can see that they are related because under the same distribution, different test-statistics are going to return different p-values. More extreme test-statistics will return lower p-values giving greater indication that the null hypothesis is false. I've ignored the issue of one-sided and two-sided p-values here.
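A quick R sketch of the coin example above (using only the numbers already given in the answer): under the null the number of heads in 100 tosses is Binomial(100, 0.5), and the p-value asks how surprising a count at least as far from 50 as the observed 60 would be.

sum(dbinom(60:100, size = 100, prob = 0.5))      # P(X >= 60), about 0.028
2 * sum(dbinom(60:100, size = 100, prob = 0.5))  # two-sided by symmetry, about 0.057
binom.test(60, 100, p = 0.5)$p.value             # exact two-sided test gives the same ~0.057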
359
What is the meaning of p values and t values in statistical tests?
Imagine you have a bag containing 900 black marbles and 100 white, i.e. 10% of the marbles are white. Now imagine you take 1 marble out, look at it and record its colour, take out another, record its colour, etc., and do this 100 times. At the end of this process you will have a number for white marbles which, ideally, we would expect to be 10, i.e. 10% of 100, but in actual fact may be 8, or 13 or whatever simply due to randomness. If you repeat this 100 marble withdrawal experiment many, many times and then plot a histogram of the number of white marbles drawn per experiment, you'll find you will have a Bell Curve centred about 10.

This represents your 10% hypothesis: with any bag containing 1000 marbles of which 10% are white, if you randomly take out 100 marbles you will find 10 white marbles in the selection, give or take 4 or so. The p-value is all about this "give or take 4 or so." Let's say by referring to the Bell Curve created earlier you can determine that less than 5% of the time would you get 5 or fewer white marbles, and another < 5% of the time accounts for 15 or more white marbles, i.e. > 90% of the time your 100 marble selection will contain between 6 and 14 white marbles inclusive.

Now assuming someone plonks down a bag of 1000 marbles with an unknown number of white marbles in it, we have the tools to answer these questions: i) Are there fewer than 100 white marbles? ii) Are there more than 100 white marbles? iii) Does the bag contain 100 white marbles? Simply take out 100 marbles from the bag and count how many of this sample are white. a) If there are 6 to 14 whites in the sample you cannot reject the hypothesis that there are 100 white marbles in the bag, and the corresponding p-values for 6 through 14 will be > 0.05. b) If there are 5 or fewer whites in the sample you can reject the hypothesis that there are 100 white marbles in the bag, and the corresponding p-values for 5 or fewer will be < 0.05. You would expect the bag to contain < 10% white marbles. c) If there are 15 or more whites in the sample you can reject the hypothesis that there are 100 white marbles in the bag, and the corresponding p-values for 15 or more will be < 0.05. You would expect the bag to contain > 10% white marbles.

In response to Baltimark's comment: given the example above, there is approximately a
4.8% chance of getting 5 white balls or fewer
1.85% chance of 4 or fewer
0.55% chance of 3 or fewer
0.1% chance of 2 or fewer
6.25% chance of 15 or more
3.25% chance of 16 or more
1.5% chance of 17 or more
0.65% chance of 18 or more
0.25% chance of 19 or more
0.1% chance of 20 or more
0.05% chance of 21 or more
These numbers were estimated from an empirical distribution generated by a simple Monte Carlo routine run in R and the resultant quantiles of the sampling distribution.

For the purposes of answering the original question, suppose you draw 5 white balls: there is only an approximate 4.8% chance that, if the 1000 marble bag really does contain 10% white balls, you would pull out only 5 whites in a sample of 100. This equates to a p value < 0.05. You now have to choose between i) There really are 10% white balls in the bag and I have just been "unlucky" to draw so few, or ii) I have drawn so few white balls that there can't really be 10% white balls (reject the hypothesis of 10% white balls).
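Here is a simple Monte Carlo sketch of the marble experiment in R (my own routine, not necessarily the one the author used; it draws without replacement, which the answer does not specify, so the tail chances it produces need not match the quoted figures exactly).

set.seed(1)
bag    <- c(rep(1, 100), rep(0, 900))                 # 1 = white, 0 = black
whites <- replicate(100000, sum(sample(bag, 100)))    # white count in each 100-marble draw

hist(whites, breaks = seq(-0.5, max(whites) + 0.5, 1),
     main = "White marbles per 100 drawn")
mean(whites <= 5)    # estimated chance of 5 or fewer whites when the bag really is 10% white
mean(whites >= 15)   # estimated chance of 15 or more whites

# Exact counterparts from the hypergeometric distribution:
phyper(5, m = 100, n = 900, k = 100)                  # P(X <= 5)
1 - phyper(14, m = 100, n = 900, k = 100)             # P(X >= 15)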
360
What is the meaning of p values and t values in statistical tests?
What the p-value doesn't tell you is how likely it is that the null hypothesis is true. Under the conventional (Fisher) significance testing framework we first compute the likelihood of observing the data assuming the null hypothesis is true; this is the p-value. It seems intuitively reasonable then to assume the null hypothesis is probably false if the data are sufficiently unlikely to be observed under the null hypothesis. This is entirely reasonable. Statisticians traditionally use a threshold and "reject the null hypothesis at the 5% significance level" if p < 0.05 (equivalently, if 1 - p > 0.95); however, this is just a convention that has proven reasonable in practice - it doesn't mean that there is less than a 5% probability that the null hypothesis is true (and therefore a 95% probability that the alternative hypothesis is true). One reason that we can't say this is that we have not looked at the alternative hypothesis yet.

Imagine a function f() that maps the p-value onto the probability that the alternative hypothesis is true. It would be reasonable to assert that this function is strictly decreasing (such that the more likely the observations under the null hypothesis, the less likely the alternative hypothesis is true), and that it gives values between 0 and 1 (as it gives an estimate of probability). However, that is all that we know about f(), so while there is a relationship between p and the probability that the alternative hypothesis is true, it is uncalibrated. This means we cannot use the p-value to make quantitative statements about the plausibility of the null and alternative hypotheses.

Caveat lector: it isn't really within the frequentist framework to speak of the probability that a hypothesis is true, as it isn't a random variable - it is either true or it isn't. So where I have talked of the probability of the truth of a hypothesis I have implicitly moved to a Bayesian interpretation. It is incorrect to mix Bayesian and frequentist reasoning; however, there is always a temptation to do so, as what we really want is a quantitative indication of the relative plausibility/probability of the hypotheses. But this is not what the p-value provides.
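A small simulation sketch of this "uncalibrated" point (my own toy setup, not something from the answer): the same p-value cutoff can correspond to very different chances that the null is actually true, depending on how often the null holds and how large the real effects are -- quantities the p-value itself knows nothing about.

set.seed(123)
sim <- function(prop_null, effect, reps = 10000, n = 20) {
  null_true <- runif(reps) < prop_null                 # is H0 true in this experiment?
  mu        <- ifelse(null_true, 0, effect)
  pvals     <- sapply(mu, function(m) t.test(rnorm(n, mean = m))$p.value)
  # Among "significant" results, how often was the null actually true?
  mean(null_true[pvals < 0.05])
}
sim(prop_null = 0.5, effect = 0.8)   # only a small fraction of the rejections are false
sim(prop_null = 0.9, effect = 0.2)   # a much larger fraction, at the same 0.05 threshold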
361
What is the meaning of p values and t values in statistical tests?
In statistics you can never say something is absolutely certain, so statisticians use another approach to gauge whether a hypothesis is true or not. They try to reject all the other hypotheses that are not supported by the data. To do this, statistical tests have a null hypothesis and an alternate hypothesis. The p-value reported from a statistical test is the probability of a result at least as extreme as the one observed, given that the null hypothesis is correct. That's why we want small p-values. The smaller they are, the less likely the result would be if the null hypothesis were correct. If the p-value is small enough (i.e., it is very unlikely for the result to have occurred if the null hypothesis were correct), then the null hypothesis is rejected. In this fashion, null hypotheses can be formulated and subsequently rejected. If the null hypothesis is rejected, you accept the alternate hypothesis as the best explanation. Just remember though that the alternate hypothesis is never certain, since the null hypothesis could have, by chance, generated the results.
362
What is the meaning of p values and t values in statistical tests?
I am a bit diffident to revive the old topic, but I jumped from here, so I post this as a response to the question in the link. The p-value is a concrete term; there should be no room for misunderstanding. But it is somehow mystical that colloquial translations of the definition of the p-value lead to many different misinterpretations. I think the root of the problem is in the use of the phrases "at least as adverse to the null hypothesis" or "at least as extreme as the one in your sample data", etc. For instance, Wikipedia says ...the p-value is the probability of obtaining the observed sample results (or a more extreme result) when the null hypothesis is actually true. The meaning of the $p$-value is blurred when people first stumble upon "(or a more extreme result)" and start thinking "more extreeeme?". I think it is better to leave the "more extreme result" to something like an indirect speech act. So, my take is: The p-value is the probability of seeing what you see in an "imaginary world" where the null hypothesis is true.

To make the idea concrete, suppose you have a sample x consisting of 10 observations and you hypothesize that the population mean is $\mu_0=20$. So, in your hypothesized world, the population distribution is $N(20,\sigma^2)$ for some unknown $\sigma$.

x
#[1] 20.82600 19.30229 18.74753 18.99071 20.14312 16.76647
#[7] 18.94962 17.99331 19.22598 18.68633

You compute the t-stat as $t_0=\sqrt{n}\frac{\bar{X}-\mu_0}{s}$, and find out that

sqrt(10) * (mean(x) - 20) / sd(x)
# -2.974405

So, what is the probability of observing a $|t_0|$ as large as 2.97 (this is where the "more extreme" comes in) in the imaginary world? In the imaginary world $t_0\sim t(9)$; thus, the p-value must be $$\text{p-value}=\Pr(|t_0|\geq 2.97)= 0.01559054.$$

2*(1 - pt(2.974405, 9))
#[1] 0.01559054

Since the p-value is small, it is very unlikely that the sample x would have been drawn in the hypothesized world. Therefore, we conclude that it is very unlikely that the hypothesized world was in fact the actual world.
363
What is the meaning of p values and t values in statistical tests?
I have also found simulations to be useful in teaching. Here is a simulation for the arguably most basic case, in which we sample $n$ times from $N(\mu,1)$ (hence, $\sigma^2=1$ is known for simplicity) and test $H_0:\mu=\mu_0$ against a left-sided alternative. Then, the $t$-statistic $\text{tstat}:=\sqrt{n}(\bar{X}-\mu_0)$ is $N(0,1)$ under $H_0$, such that the $p$-value is simply $\Phi(\text{tstat})$, or pnorm(tstat) in R. In the simulation, it is the fraction of times that data generated under the null $N(\mu_0,1)$ (here, $\mu_0=2$) yield sample means, stored in nullMeans, that are less (i.e., "more extreme" in this left-sided test) than the one calculated from the observed data.

# p value
set.seed(1)
reps <- 1000
n <- 100
mu <- 1.85    # true value
mu_0 <- 2     # null value
xaxis <- seq(-3, 3, length = 100)
X <- rnorm(n, mu)
nullMeans <- counter <- rep(NA, reps)
yvals <- jitter(rep(0, reps), 2)

for (i in 1:reps) {
  tstat <- sqrt(n) * (mean(X) - mu_0)   # test statistic, N(0,1) under the given assumptions
  par(mfrow = c(1, 3))
  plot(xaxis, dnorm(xaxis), ylab = "null distribution", xlab = "possible test statistics", type = "l")
  points(tstat, 0, cex = 2, col = "salmon", pch = 21, bg = "salmon")
  X_null <- rnorm(n, mu_0)              # generate data under H_0
  nullMeans[i] <- mean(X_null)
  plot(nullMeans[1:i], yvals[1:i], col = "blue", pch = 21,
       xlab = "actual means and those generated under the null", ylab = "",
       yaxt = 'n', ylim = c(-1, 1), xlim = c(1.5, 2.5))
  abline(v = mu_0, lty = 2)
  points(mean(X), 0, cex = 4, col = "salmon", pch = 21, bg = "salmon")
  # counts 1 if sample generated under H_0 is more extreme:
  counter[i] <- (nullMeans[i] < mean(X))  # i.e. we test against H_1: mu < mu_0
  barplot(table(counter[1:i]) / i, col = c("green", "red"),
          xlab = "more extreme mean under the null than the mean actually observed")
  if (i < 10) locator(1)
}
mean(counter)
pnorm(tstat)
364
What is the meaning of p values and t values in statistical tests?
I find it helpful to follow a sequence in which you explain concepts in the following order: (1) The z score and proportions above and below the z score assuming a normal curve. (2) The notion of a sampling distribution, and the z score for a given sample mean when the population standard deviation is known (and thence the one sample z test) (3) The one-sample t-test and the likelihood of a sample mean when the population standard deviation is unknown (replete with stories about the secret identity of a certain industrial statistician and why Guinness is Good For Statistics). (4) The two-sample t-test and the sampling distribution of mean differences. The ease with which introductory students grasp the t-test has much to do with the groundwork that is laid in preparation for this topic. /* instructor of terrified students mode off */
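For concreteness, here is a compact R sketch of steps (2)-(4) in that sequence, using made-up numbers of my own (IQ-like scores with $\sigma = 15$ treated as known for the z test).

set.seed(99)
x <- rnorm(25, mean = 103, sd = 15)     # hypothetical sample
y <- rnorm(25, mean = 97,  sd = 15)     # a second, independent sample

# (2) one-sample z test of H0: mu = 100, treating sigma = 15 as known
z <- (mean(x) - 100) / (15 / sqrt(length(x)))
2 * pnorm(-abs(z))                      # two-sided p-value from the normal distribution

# (3) one-sample t test: same idea, but sigma is estimated by sd(x)
t.test(x, mu = 100)

# (4) two-sample t test, built on the sampling distribution of mean differences
t.test(x, y)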
365
What is the meaning of p values and t values in statistical tests?
I have yet to prove the following argument, so it might contain errors, but I really want to throw in my two cents (hopefully, I'll update it with a rigorous proof soon). Another way of looking at the $p$-value: a $p$-value is a statistic $X$ such that $$\forall\, 0 \le c \le 1,\quad F_{X|H_0}(\inf\{x: F_{X|H_0}(x) \ge c\}) = c,$$ where $F_{X|H_0}$ is the distribution function of $X$ under $H_0$. Specifically, if $X$ has a continuous distribution and you're not using an approximation, then: every $p$-value is a statistic with a uniform distribution on $[0, 1]$, and every statistic with a uniform distribution on $[0, 1]$ is a $p$-value. You may consider this a generalized description of $p$-values.
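The first of these claims is easy to check by simulation. A quick R sketch (my own example, using a one-sample t test whose null is true): the resulting p-values look uniform on $[0, 1]$.

set.seed(7)
pvals <- replicate(10000, t.test(rnorm(25, mean = 0), mu = 0)$p.value)
hist(pvals, breaks = 20, main = "p-values when H0 is true")   # roughly flat
ks.test(pvals, "punif")                                       # typically shows no evidence against uniformity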
366
What is the meaning of p values and t values in statistical tests?
What does a "p-value" mean in relation to the hypothesis being tested? In an ontological sense (what is truth?), it means nothing. Any hypothesis testing is based on untested assumptions. This are normally part of the test itself, but are also part of whatever model you are using (e.g. in a regression model). Since we are merely assuming these, we cannot know if the reason why the p-value is below our threshold is because the null is false. It is a non sequitur to deduce unconditionally that because of a low p-value we must reject the null. For instance, something in the model could be wrong. In an epistemological sense (what can we learn?), it means something. You gain knowledge conditional on the untested premises being true. Since (at least until now) we cannot prove every edifice of reality, all our knowledge will be necessarily conditional. We will never get to the "truth".
367
What is the meaning of p values and t values in statistical tests?
I think that examples involving marbles or coins or height-measuring can be fine for practicing the math, but they aren't good for building intuition. College students like to question society, right? How about using a political example? Say a political candidate ran a campaign promising that some policy will help the economy. She was elected, she got the policy enacted, and 2 years later, the economy is booming. She's up for re-election, and claims that her policy is the reason for everyone's prosperity. Should you re-elect her? The thoughtful citizen should say "well, it's true that the economy is doing well, but can we really attribute that to your policy?" To truly answer this, we must consider the question "would the economy have done well in the last 2 years without it?" If the answer is yes (e.g. the economy is booming because of some new unrelated technological development) then we reject the politician's explanation of the data. That is, to examine one hypothesis (policy helped the economy), we must build a model of the world where that hypothesis is null (the policy was never enacted). We then make a prediction under that model. We call the probability of observing this data in that alternate world the p-value. If the p-value is too high, then we aren't convinced by the hypothesis--the policy made no difference. If the p-value is low then we trust the hypothesis--the policy was essential.
What is the meaning of p values and t values in statistical tests?
The p-value isn't as mysterious as most analysts make it out to be. It is a way of not having to calculate the confidence interval for a t-test, but simply determining the confidence level with which the null hypothesis can be rejected. ILLUSTRATION. You run a test. The p-value comes up as 0.1866 for the Q variable and 0.0023 for the R variable; read as percentages, these are 18.66% and 0.23%. Suppose you are testing at a 95% confidence level. For Q: 100 - 18.66 = 81.34%. For R: 100 - 0.23 = 99.77%. Q gives an 81.34% confidence to reject, which falls below 95% and is unacceptable, so we fail to reject the null. R gives a 99.77% confidence to reject the null, clearly above the desired 95%, so we reject the null. I have just illustrated reading the p-value in a 'reverse way': measuring it against the confidence level at which we would reject the null hypothesis.
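To make the arithmetic of this 'reverse reading' explicit, here is a short R sketch using the two p-values from the illustration; note that comparing 100*(1 - p) with 95 is just another way of comparing p with 0.05.
p <- c(Q = 0.1866, R = 0.0023)          # p-values from the illustration
confidence_to_reject <- 100 * (1 - p)   # the 'reverse' reading, in percent
confidence_to_reject >= 95              # same decision rule as p <= 0.05
# Q: FALSE (fail to reject the null), R: TRUE (reject the null)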
What does AUC stand for and what is it?
Abbreviations: AUC = Area Under the Curve; AUROC = Area Under the Receiver Operating Characteristic curve. AUC is used most of the time to mean AUROC, which is a bad practice since, as Marc Claesen pointed out, AUC is ambiguous (it could be any curve) while AUROC is not. Interpreting the AUROC. The AUROC has several equivalent interpretations: the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative; the expected proportion of positives ranked before a uniformly drawn random negative; the expected true positive rate if the ranking is split just before a uniformly drawn random negative; the expected proportion of negatives ranked after a uniformly drawn random positive; and the expected false positive rate if the ranking is split just after a uniformly drawn random positive. Going further: How to derive the probabilistic interpretation of the AUROC? Computing the AUROC. Assume we have a probabilistic, binary classifier such as logistic regression. Before presenting the ROC curve (= Receiver Operating Characteristic curve), the concept of a confusion matrix must be understood. When we make a binary prediction, there can be 4 types of outcomes: We predict 0 while the true class is actually 0: this is called a True Negative, i.e. we correctly predict that the class is negative (0). For example, an antivirus did not flag a harmless file as a virus. We predict 0 while the true class is actually 1: this is called a False Negative, i.e. we incorrectly predict that the class is negative (0). For example, an antivirus failed to detect a virus. We predict 1 while the true class is actually 0: this is called a False Positive, i.e. we incorrectly predict that the class is positive (1). For example, an antivirus considered a harmless file to be a virus. We predict 1 while the true class is actually 1: this is called a True Positive, i.e. we correctly predict that the class is positive (1). For example, an antivirus rightfully detected a virus. To get the confusion matrix, we go over all the predictions made by the model and count how many times each of those 4 types of outcomes occurs. In the example confusion matrix (figure not reproduced here), among the 50 data points that are classified, 45 are correctly classified and 5 are misclassified. Since, to compare two different models, it is often more convenient to have a single metric rather than several, we compute two metrics from the confusion matrix, which we will later combine into one: the true positive rate (TPR), also known as sensitivity, hit rate and recall, which is defined as $\frac{TP}{TP+FN}$. Intuitively this metric corresponds to the proportion of positive data points that are correctly considered positive, with respect to all positive data points; in other words, the higher the TPR, the fewer positive data points we will miss. And the false positive rate (FPR), also known as fall-out, which is defined as $\frac{FP}{FP+TN}$. Intuitively this metric corresponds to the proportion of negative data points that are mistakenly considered positive, with respect to all negative data points; in other words, the higher the FPR, the more negative data points will be misclassified. To combine the FPR and the TPR into one single metric, we first compute those two metrics at many different thresholds (for example $0.00, 0.01, 0.02, \dots, 1.00$) for the logistic regression, then plot them on a single graph, with the FPR values on the abscissa and the TPR values on the ordinate. 
The resulting curve is called the ROC curve, and the metric we consider is the AUC of this curve, which we call the AUROC. The following figure shows the AUROC graphically: the blue area corresponds to the Area Under the Curve of the Receiver Operating Characteristic (AUROC). The dashed diagonal line represents the ROC curve of a random predictor: it has an AUROC of 0.5. The random predictor is commonly used as a baseline to see whether the model is useful. If you want to get some first-hand experience: Python: http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html MATLAB: http://www.mathworks.com/help/stats/perfcurve.html
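If you prefer to see the mechanics without a library, here is a minimal R sketch (with made-up labels and a hypothetical score model) that sweeps a grid of thresholds, computes the TPR and FPR at each one, draws the ROC curve, and approximates the AUROC with the trapezoidal rule.
set.seed(42)
y <- rbinom(200, 1, 0.5)                         # true classes (simulated)
score <- plogis(2 * y - 1 + rnorm(200))          # predicted probabilities from a hypothetical model
thresholds <- seq(0, 1, by = 0.01)
tpr <- sapply(thresholds, function(t) mean(score[y == 1] > t))  # true positive rate at each threshold
fpr <- sapply(thresholds, function(t) mean(score[y == 0] > t))  # false positive rate at each threshold
plot(fpr, tpr, type = "l", xlab = "FPR", ylab = "TPR")           # the ROC curve
abline(0, 1, lty = 2)                                            # random-predictor baseline
ord <- order(fpr)                                                # sort points by FPR before integrating
auroc <- sum(diff(fpr[ord]) * (head(tpr[ord], -1) + tail(tpr[ord], -1)) / 2)  # trapezoidal rule
auroc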
What does AUC stand for and what is it?
Although I'm a bit late to the party, here are my 5 cents. @FranckDernoncourt (+1) already mentioned possible interpretations of AUC ROC, and my favorite one is the first on his list (I use different wording, but it's the same): the AUC of a classifier is equal to the probability that the classifier will rank a randomly chosen positive example higher than a randomly chosen negative example, i.e. $P\Big(\text{score}(x^+) > \text{score}(x^-)\Big)$. Consider this example (AUC = 0.68). Let's try to simulate it: draw random positive and negative examples and then calculate the proportion of cases in which the positive has a greater score than the negative.
# labels and scores from the example
cls = c('P', 'P', 'N', 'P', 'P', 'P', 'N', 'N', 'P', 'N', 'P', 'N', 'P', 'N', 'N', 'N', 'P', 'N', 'P', 'N')
score = c(0.9, 0.8, 0.7, 0.6, 0.55, 0.51, 0.49, 0.43, 0.42, 0.39, 0.33, 0.31, 0.23, 0.22, 0.19, 0.15, 0.12, 0.11, 0.04, 0.01)
pos = score[cls == 'P']
neg = score[cls == 'N']
set.seed(14)
# draw one positive and one negative at random, 50000 times
p = replicate(50000, sample(pos, size=1) > sample(neg, size=1))
mean(p)
And we get 0.67926. Quite close, isn't it? By the way, in R I typically use the ROCR package for drawing ROC curves and calculating the AUC.
library('ROCR')
pred = prediction(score, cls)
roc = performance(pred, "tpr", "fpr")
plot(roc, lwd=2, colorize=TRUE)
lines(x=c(0, 1), y=c(0, 1), col="black", lwd=1)   # random-predictor diagonal
auc = performance(pred, "auc")
auc = unlist(auc@y.values)
auc
What does AUC stand for and what is it?
Important considerations are not included in any of these discussions. The procedures discussed above invite inappropriate thresholding and utilize improper accuracy scoring rules (proportions) that are optimized by choosing the wrong features and giving them the wrong weights. Dichotomization of continuous predictions flies in the face of optimal decision theory. ROC curves provide no actionable insights. They have become obligatory without researchers examining their benefits, and they have a very large ink:information ratio. Optimum decisions don't consider "positives" and "negatives" but rather the estimated probability of the outcome. The utility/cost/loss function, which plays no role in ROC construction (hence the uselessness of ROCs), is used to translate the risk estimate into the optimal (e.g., lowest expected loss) decision. The goal of a statistical model is often to make a prediction, and the analyst should often stop there because the analyst may not know the loss function. Key components of the prediction to validate unbiasedly (e.g., using the bootstrap) are the predictive discrimination (one semi-good way to measure this is the concordance probability, which happens to equal the area under the ROC but can be understood more easily if you don't draw the ROC) and the calibration curve. Calibration validation is really, really necessary if you are using predictions on an absolute scale. See the Information Loss chapter in Biostatistics for Biomedical Research and other chapters for more information.
What does AUC stand for and what is it?
The answers in this forum are great and I come back here often for reference. However, one thing was always missing. From @Frank's answer, we see the interpretation of the AUC as the probability that a positive sample will have a higher score than a negative sample. At the same time, the way to calculate it is to plot the TPR and FPR as the threshold $\tau$ is changed and compute the area under that curve. But why is this area under the curve the same as this probability? @Alexy showed through simulation that they're close, but can we derive this relationship mathematically? Let's assume the following: $A$ is the distribution of scores the model produces for data points that are actually in the positive class. $B$ is the distribution of scores the model produces for data points that are actually in the negative class (we want this to be to the left of $A$). $\tau$ is the cutoff threshold: if a data point gets a score greater than this, it's predicted as belonging to the positive class; otherwise, it's predicted to be in the negative class. Note that the TPR (recall) is given by $P(A>\tau)$ and the FPR (fallout) is given by $P(B>\tau)$. Now, we plot the TPR on the y-axis and the FPR on the x-axis, draw the curve for various $\tau$, and calculate the area under this curve ($AUC$). We get $$AUC = \int_0^1 TPR(x)\,dx = \int_0^1 P(A>\tau(x))\,dx$$ where $x$ is the FPR. Now, one way to calculate this integral is to consider $x$ as coming from a uniform distribution; in that case the integral simply becomes the expectation of the $TPR$, since the PDF of the uniform is 1: $$AUC = E_x[P(A>\tau(x))] \tag{1}$$ if we consider $x \sim U[0,1)$. Now, $x$ here was just the $FPR$: $$x = FPR = P(B>\tau(x)).$$ Since we considered $x$ to be uniform, $$P(B>\tau(x)) \sim U \;\Rightarrow\; P(B<\tau(x)) \sim (1-U) \sim U \;\Rightarrow\; F_B(\tau(x)) \sim U. \tag{2}$$ But we know from the inverse transform law that for any random variable $X$ with a continuous, strictly increasing CDF, if $F_X(Y) \sim U$ then $Y \sim X$. This is the flip side of the probability integral transform (applying a random variable's own CDF to it yields a uniform variable): if $F_X(Y)\sim U[0,1]$, then for any $t$ $$P(Y\le t) = P\big(F_X(Y)\le F_X(t)\big) = F_X(t),$$ so $Y$ has CDF $F_X$. Using this fact in equation (2) gives us $$\tau(x) \sim B.$$ Substituting this into equation (1) we get $$AUC = E_x\big[P(A>\tau(x))\big] = E_B\big[P(A>B \mid B)\big] = P(A>B).$$ In other words, the area under the curve is the probability that a random positive sample will have a higher score than a random negative sample.
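As a quick numerical sanity check of this identity, one can compare the trapezoidal area under an empirical ROC curve with the directly estimated probability $P(A>B)$. The R sketch below uses simulated scores (Normal with shifted means for the two classes), so the two quantities agree only up to sampling and discretization error.
set.seed(1)
a <- rnorm(1000, mean = 1)             # scores of positives, distribution A
b <- rnorm(1000, mean = 0)             # scores of negatives, distribution B
taus <- sort(c(a, b), decreasing = TRUE)
tpr <- sapply(taus, function(t) mean(a > t))
fpr <- sapply(taus, function(t) mean(b > t))
area <- sum(diff(c(0, fpr)) * (tpr + c(0, head(tpr, -1))) / 2)   # trapezoidal rule over the ROC points
c(area_under_ROC = area, P_A_greater_B = mean(outer(a, b, ">"))) # the two should nearly agree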
What does AUC stand for and what is it?
AUC is an abbreviation for area under the curve. It is used in classification analysis to determine which of the models considered predicts the classes best. An example of its application is the ROC curve: here, the true positive rate is plotted against the false positive rate. An example is shown below. The closer the AUC of a model is to 1, the better the model, so models with higher AUCs are preferred over those with lower AUCs. Please note that there are also other curves and summaries besides ROC curves; they too are derived from the entries of the confusion matrix, e.g. precision-recall curves, the F1 score, or Lorenz curves.
What does AUC stand for and what is it?
Very late to respond, but after learning from multiple sources I've been able to form my own understanding of the AUC. This response is mainly heuristic in nature and not meant to be rigorous. Let's say we have $M$ positive samples and $N$ negative samples and some score function $s(x)$ that assigns a value to sample $x$. For a threshold $T$, if $s(x)>T$ the sample is called "positive", else it is called "negative". Let's select a negative sample $x_n$ randomly with equal probability $\frac{1}{N}$. If the threshold $T$ is placed at $s(x_n)$, then the true positive rate $TP(T)$ at threshold $T$ is the probability of ranking a randomly selected positive sample $x_p$ above $x_n$. In other words, $P(X_p>X_n \mid X_n=x_n)=TP(T)$ for $T=s(x_n)$. If both events occur ($X_n=x_n$ and $X_p>x_n$), then the probability of this joint occurrence is $P(X_p>X_n \mid X_n=x_n)\,P(X_n=x_n)=P(X_p>X_n \cap X_n=x_n)$. By the law of total probability, summing these values over all possible values of $x_n$ gives $P(X_p>X_n)$: $$P(X_p>X_n)=\sum_{i=1}^N{P(X_p>X_n\cap X_n=x_i)} = \sum_{i=1}^N{P(X_p>X_n\mid X_n=x_i)P(X_n=x_i)} =\sum_{i=1}^N{TP(s(x_i))\frac{1}{N}}.$$ In the ROC curve, each time the curve moves right it has "jumped over" a negative sample, and each time it moves up it has "jumped over" a positive sample, which is precisely what gives the curve its staircase shape. The sum above is therefore a sum of the true positive rate over the $N$ equal-width jumps in the false positive rate $FP(T)$, so in the limit as the number of samples becomes infinite it becomes $$\int_0^1{TP(FP^{-1}(x))\,dx}$$ over all possible values of the threshold $FP^{-1}(x)$, which by the law of total probability equals $$P(X_p>X_n),$$ and which is exactly the area under the ROC curve, the AUC.
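The discrete sum above is easy to check directly. The R sketch below (with simulated scores) computes the average of $TP(s(x_i))$ over the negative samples and compares it with the pairwise estimate of $P(X_p>X_n)$; by construction the two coincide (up to floating point), and both equal the AUC of these scores.
set.seed(2)
pos <- rnorm(500, mean = 1)                      # scores of the M positive samples
neg <- rnorm(400, mean = 0)                      # scores of the N negative samples
tp_at <- sapply(neg, function(t) mean(pos > t))  # TP(s(x_i)) with the threshold at each negative
c(sum_formula = mean(tp_at),                     # (1/N) * sum_i TP(s(x_i))
  pairwise    = mean(outer(pos, neg, ">")))      # direct estimate of P(X_p > X_n)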
What does AUC stand for and what is it?
I just wanted to share an animation about the construction of the ROC curve, and hence the AUC. One sees clearly that each point of the ROC curve comes from a different threshold used to classify the output of a binary classifier. The threshold defines which samples are predicted as 1 and which as 0; the true positive and false positive rates are then computed, and every threshold corresponds to a point on the ROC curve. Note: the animation may be most relevant for people working on time series, but it could also help in understanding the AUC construction in other cases, since a sample (the input of the binary classifier) could be anything: an image, a time-series window, a vector of input features, and so on.
Is there any reason to prefer the AIC or BIC over the other?
Your question implies that AIC and BIC try to answer the same question, which is not true. AIC tries to select the model that most adequately describes an unknown, high-dimensional reality. This means that reality is never in the set of candidate models being considered. In contrast, BIC tries to find the TRUE model among the set of candidates. I find the assumption that reality is instantiated in one of the models the researchers built along the way quite odd. This is a real issue for BIC. Nevertheless, there are a lot of researchers who say BIC is better than AIC, using model-recovery simulations as an argument. These simulations consist of generating data from models A and B and then fitting both datasets with the two models. Overfitting occurs when the wrong model fits the data better than the generating model. The point of these simulations is to see how well AIC and BIC correct these overfits. Usually, the results point to the fact that AIC is too liberal and still frequently prefers a more complex, wrong model over a simpler, true model. At first glance these simulations seem to be really good arguments, but the problem with them is that they are meaningless for AIC. As I said before, AIC does not consider any of the candidate models being tested to be actually true. According to AIC, all models are approximations to reality, and reality should never have a low dimensionality (at least not lower than some of the candidate models). My recommendation is to use both AIC and BIC. Most of the time they will agree on the preferred model; when they don't, just report it. If you are unhappy with both AIC and BIC and have free time to invest, look up Minimum Description Length (MDL), a totally different approach that overcomes the limitations of AIC and BIC. There are several measures stemming from MDL, like normalized maximum likelihood or the Fisher information approximation. The problem with MDL is that it is mathematically demanding and/or computationally intensive. Still, if you want to stick to simple solutions, a nice way of assessing model flexibility (especially when the numbers of parameters are equal, rendering AIC and BIC useless) is the parametric bootstrap, which is quite easy to implement. Here is a link to a paper on it. Some people here advocate the use of cross-validation. I personally have used it and don't have anything against it, but the issue is that the choice of the sample-cutting rule (leave-one-out, K-fold, etc.) is an unprincipled one.
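To make the model-recovery idea concrete, here is a toy R sketch of my own: data are generated from a simple "true" linear model with one relevant predictor, both the true model and a more complex model with a spurious extra predictor are fit, and we count how often each criterion picks the simpler, generating model. The exact recovery rates depend entirely on these assumed settings.
set.seed(123)
recover <- function(n) {
  x1 <- rnorm(n); x2 <- rnorm(n)
  y <- 1 + 2 * x1 + rnorm(n)               # generating model uses x1 only
  m_true    <- lm(y ~ x1)                   # the simpler, true model
  m_complex <- lm(y ~ x1 + x2)              # the overparameterized model
  c(aic_picks_true = AIC(m_true) < AIC(m_complex),
    bic_picks_true = BIC(m_true) < BIC(m_complex))
}
rowMeans(replicate(500, recover(n = 100)))  # recovery rates for AIC and BIC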
Is there any reason to prefer the AIC or BIC over the other?
Though AIC and BIC both arise from maximum likelihood estimation and penalize free parameters in an effort to combat overfitting, they do so in ways that result in significantly different behavior. Let's look at one commonly presented version of the methods (which results from stipulating normally distributed errors and other well-behaved assumptions): ${\bf AIC} = -2 \ln\left(\text{likelihood}\right) + 2k$ and ${\bf BIC} = -2\ln\left(\text{likelihood}\right) + k\ln(N)$, where $k$ = model degrees of freedom and $N$ = number of observations. In both cases, the best model in the group compared is the one that minimizes these scores. Clearly, AIC does not depend directly on sample size. Moreover, generally speaking, AIC presents the danger that it might overfit, whereas BIC presents the danger that it might underfit, simply in virtue of how they penalize free parameters ($2k$ in AIC; $k\ln(N)$ in BIC). Diachronically, as data are introduced and the scores are recalculated, at relatively low $N$ (7 and fewer) BIC is more tolerant of free parameters than AIC, but less tolerant at higher $N$ (as the natural log of $N$ overcomes 2). Additionally, AIC aims at finding the best approximating model to the unknown data-generating process (via minimizing the expected estimated K-L divergence). As such, it fails to converge in probability to the true model (assuming one is present in the group evaluated), whereas BIC does converge as $N$ tends to infinity. So, as with many methodological questions, which is to be preferred depends upon what you are trying to do, what other methods are available, and whether or not any of the features outlined (convergence, relative tolerance for free parameters, minimizing expected K-L divergence) speak to your goals.
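These formulas are easy to verify against R's built-in functions. A small sketch with simulated data computes both criteria by hand from the log-likelihood of a fitted linear model; note that R counts the residual variance as an extra estimated parameter, so $k$ here is the number of coefficients plus one.
set.seed(7)
x <- rnorm(50); y <- 1 + 2 * x + rnorm(50)
fit <- lm(y ~ x)
ll <- logLik(fit)
k <- attr(ll, "df")                      # coefficients + residual variance
n <- length(y)
c(AIC_by_hand = -2 * as.numeric(ll) + 2 * k,       AIC_builtin = AIC(fit),
  BIC_by_hand = -2 * as.numeric(ll) + k * log(n),  BIC_builtin = BIC(fit))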
Is there any reason to prefer the AIC or BIC over the other?
My quick explanation is that AIC is best for prediction, as it is asymptotically equivalent to cross-validation, and BIC is best for explanation, as it allows consistent estimation of the underlying data-generating process.
Is there any reason to prefer the AIC or BIC over the other?
In my experience, BIC results in serious underfitting and AIC typically performs well, when the goal is to maximize predictive discrimination.
Is there any reason to prefer the AIC or BIC over the other?
An informative and accessible "derivation" of AIC and BIC by Brian Ripley can be found here: http://www.stats.ox.ac.uk/~ripley/Nelder80.pdf Ripley provides some remarks on the assumptions behind the mathematical results. Contrary to what some of the other answers indicate, Ripley emphasizes that AIC is based on assuming that the model is true. If the model is not true, a general computation will reveal that the "number of parameters" has to be replaced by a more complicated quantity. Some references are given in Ripleys slides. Note, however, that for linear regression (strictly speaking with a known variance) the, in general, more complicated quantity simplifies to be equal to the number of parameters.
Is there any reason to prefer the AIC or BIC over the other?
Indeed, the only difference is that BIC is AIC extended to take the number of objects (samples) into account. I would say that while both are quite weak (in comparison to, for instance, cross-validation), it is better to use AIC, as more people will be familiar with the abbreviation -- indeed, I have never seen a paper or a program where BIC would be used (still, I admit that I'm biased towards problems where such criteria simply don't work). Edit: AIC and BIC are equivalent to cross-validation provided two important assumptions hold -- that the model is a maximum likelihood one, and that you are only interested in model performance on the training data. In the case of collapsing some data into some kind of consensus they are perfectly ok. In the case of making a prediction machine for some real-world problem, the first is false, since your training set represents only a scrap of information about the problem you are dealing with, so you just can't optimize your model; the second is false, because you expect your model to handle new data for which you can't even expect the training set to be representative. And to this end CV was invented: to simulate the behavior of the model when confronted with independent data. In the case of model selection, CV gives you not only an approximation of the quality but also the distribution of that approximation, so it has this great advantage that it can say "I don't know, whatever new data will come, either of them can be better."
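The connection between AIC and cross-validation mentioned in these answers can be eyeballed numerically. The R sketch below is my own illustration with simulated data: for a set of nested polynomial regressions it compares the ranking given by AIC with the ranking given by leave-one-out cross-validation, computed via the closed-form shortcut available for linear models; the two orderings usually match closely, but this is only a toy check, not a proof.
set.seed(99)
x <- runif(100, -2, 2)
y <- 1 + x - 0.5 * x^2 + rnorm(100, sd = 0.8)        # simulated data from a quadratic model
loocv <- function(fit) mean((residuals(fit) / (1 - hatvalues(fit)))^2)  # exact LOO MSE for lm
fits <- lapply(1:5, function(d) lm(y ~ poly(x, d)))   # candidate polynomial degrees 1..5
data.frame(degree = 1:5,
           AIC    = sapply(fits, AIC),
           LOOCV  = sapply(fits, loocv))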
Is there any reason to prefer the AIC or BIC over the other?
From what I can tell, there isn't much difference between AIC and BIC. They are both mathematically convenient approximations one can make in order to efficiently compare models. If they give you different "best" models, it probably means you have high model uncertainty, which is more important to worry about than whether you should use AIC or BIC. I personally like BIC better because it asks more (less) of a model if it has more (less) data to fit its parameters - kind of like a teacher asking for a higher (lower) standard of performance if their student has more (less) time to learn about the subject. To me this just seems like the intuitive thing to do. But then I am certain there also exist equally intuitive and compelling arguments for AIC, given its simple form. Now, any time you make an approximation, there will surely be some conditions under which that approximation is rubbish. This can certainly be seen for AIC, where there exist many "adjustments" (AICc) to account for certain conditions that make the original approximation bad. This is also present for BIC, because various other more exact (but still efficient) methods exist, such as fully Laplace approximations to mixtures of Zellner's g-priors (BIC is an approximation to the Laplace approximation method for integrals). One place where they are both crap is when you have substantial prior information about the parameters within any given model. AIC and BIC unnecessarily penalise models where parameters are partially known compared to models which require parameters to be estimated from the data. One thing I think is important to note is that BIC does not assume a "true" model a) exists, or b) is contained in the model set. BIC is simply an approximation to an integrated likelihood $P(D|M,A)$ (D = data, M = model, A = assumptions). Only by multiplying by a prior probability and then normalising can you get $P(M|D,A)$. BIC simply represents how likely the data were if the proposition implied by the symbol $M$ is true. So from a logical viewpoint, any propositions that would lead one to the same BIC as an approximation are equally supported by the data. So if I state $M$ and $A$ to be the propositions $$\begin{array}{l|l} M_{i}:\text{the ith model is the best description of the data} \\ A:\text{out of the set of K models being considered, one of them is the best} \end{array}$$ and then continue to assign the same probability models (same parameters, same data, same approximations, etc.), I will get the same set of BIC values. It is only by attaching some sort of unique meaning to the logical letter "M" that one gets drawn into irrelevant questions about "the true model" (echoes of "the true religion"). The only thing that "defines" $M$ is the mathematical equations which use it in their calculations - and this hardly ever singles out one and only one definition. I could equally put in a prediction proposition about $M$ ("the ith model will give the best predictions"). I personally can't see how this would change any of the likelihoods, and hence how good or bad BIC would be (the same goes for AIC, although AIC is based on a different derivation). And besides, what is wrong with the statement: if the true model is in the set I am considering, then there is a 57% probability that it is model B? 
Seems reasonable enough to me. Or you could go with the more "soft" version: there is a 57% probability that model B is the best out of the set being considered. One last comment: I think you will find about as many opinions about AIC/BIC as there are people who know about them.
Is there any reason to prefer the AIC or BIC over the other?
From what I can tell, there isn't much difference between AIC and BIC. They are both mathematically convenient approximations one can make in order to efficiently compare models. If they give you di
Is there any reason to prefer the AIC or BIC over the other?

From what I can tell, there isn't much difference between AIC and BIC. They are both mathematically convenient approximations one can make in order to efficiently compare models. If they give you different "best" models, it probably means you have high model uncertainty, which is more important to worry about than whether you should use AIC or BIC. I personally like BIC better because it asks more (less) of a model if it has more (less) data to fit its parameters - kind of like a teacher asking for a higher (lower) standard of performance if their student has more (less) time to learn about the subject. To me this just seems like the intuitive thing to do. But then I am certain equally intuitive and compelling arguments exist for AIC as well, given its simple form.

Now any time you make an approximation, there will surely be some conditions when those approximations are rubbish. This is certainly the case for AIC, where there exist many "adjustments" (AICc) to account for certain conditions which make the original approximation bad. This is also present for BIC, because various other more exact (but still efficient) methods exist, such as fully Laplace approximations to mixtures of Zellner's g-priors (BIC is an approximation to the Laplace approximation method for integrals). One place where they are both crap is when you have substantial prior information about the parameters within any given model. AIC and BIC unnecessarily penalise models where parameters are partially known compared to models which require parameters to be estimated from the data.

One thing I think is important to note is that BIC does not assume a "true" model a) exists, or b) is contained in the model set. BIC is simply an approximation to an integrated likelihood $P(D|M,A)$ ($D$ = data, $M$ = model, $A$ = assumptions). Only by multiplying by a prior probability and then normalising can you get $P(M|D,A)$. BIC simply represents how likely the data would be if the proposition implied by the symbol $M$ were true. So from a logical viewpoint, any propositions which would lead one to BIC as an approximation are equally supported by the data. So if I state $M$ and $A$ to be the propositions $$\begin{array}{l} M_{i}:\text{the ith model is the best description of the data} \\ A:\text{out of the set of K models being considered, one of them is the best} \end{array}$$ and then continue to assign the same probability models (same parameters, same data, same approximations, etc.), I will get the same set of BIC values. It is only by attaching some sort of unique meaning to the logical letter "M" that one gets drawn into irrelevant questions about "the true model" (echoes of "the true religion"). The only thing that "defines" $M$ is the mathematical equations which use it in their calculations - and this hardly ever singles out one and only one definition. I could equally put in a prediction proposition about $M$ ("the ith model will give the best predictions"). I personally can't see how this would change any of the likelihoods, and hence how good or bad BIC will be (the same goes for AIC, although AIC is based on a different derivation).

And besides, what is wrong with the statement "If the true model is in the set I am considering, then there is a 57% probability that it is model B"? Seems reasonable enough to me, or you could go with the more "soft" version, "there is a 57% probability that model B is the best out of the set being considered".

One last comment: I think you will find about as many opinions about AIC/BIC as there are people who know about them.
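To make the last point concrete, here is a minimal Python sketch (my own illustration, not part of the answer above) of how BIC values for a set of candidate models can be turned into approximate posterior model probabilities, assuming equal prior probability for every model in the set; the BIC values themselves are made-up numbers:

```python
import numpy as np

# Hypothetical BIC values for three candidate models (smaller = better).
bic = np.array([102.3, 100.6, 105.1])

# BIC approximates -2 * log P(D | M) up to a constant shared by all models,
# so with equal prior model probabilities,
#   P(M_i | D) ~ exp(-BIC_i / 2) / sum_j exp(-BIC_j / 2).
delta = bic - bic.min()            # work with differences for numerical stability
weights = np.exp(-0.5 * delta)
post_prob = weights / weights.sum()

for i, p in enumerate(post_prob):
    print(f"Model {chr(ord('A') + i)}: approximate posterior probability {p:.2f}")
```

Statements like "there is a 57% probability that model B is the best of the set" are exactly this kind of normalized, within-set comparison.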
383
Is there any reason to prefer the AIC or BIC over the other?
As you mentioned, AIC and BIC are methods to penalize models for having more regressor variables. A penalty function is used in these methods, which is a function of the number of parameters in the model. When applying AIC, the penalty function is $z(p) = 2p$. When applying BIC, the penalty function is $z(p) = p\ln(n)$, which is based on interpreting the penalty as deriving from prior information (hence the name Bayesian Information Criterion). When $n$ is large, the two criteria can produce quite different results: BIC then applies a much larger penalty for complex models, and hence will lead to simpler models than AIC. However, as stated in the Wikipedia article on BIC: "it should be noted that in many applications..., BIC simply reduces to maximum likelihood selection because the number of parameters is equal for the models of interest."
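To see how quickly the two penalties diverge, here is a small Python sketch (an illustration added here, not part of the original answer) comparing the per-parameter penalty of AIC and BIC across sample sizes; the sample sizes are arbitrary:

```python
import numpy as np

# AIC: z(p) = 2p      -> penalty of 2 per parameter, independent of n
# BIC: z(p) = p ln(n) -> penalty per parameter grows with the sample size
for n in [10, 100, 1000, 100000]:
    print(f"n = {n:>6}: AIC penalty/parameter = 2.00, "
          f"BIC penalty/parameter = {np.log(n):.2f}")

# BIC penalizes each extra parameter more than AIC as soon as ln(n) > 2,
# i.e. whenever n > e^2, roughly n > 7.4.
```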
384
Is there any reason to prefer the AIC or BIC over the other?
AIC should rarely be used, as it is really only valid asymptotically. It is almost always better to use AICc (AIC with a correction for finite sample size). AIC tends to overparameterize: that problem is greatly lessened with AICc. The main exception to using AICc is when the underlying distributions are heavily leptokurtic. For more on this, see the book Model Selection by Burnham & Anderson.
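For reference, the finite-sample correction the answer refers to is usually written $\text{AICc} = \text{AIC} + \frac{2k(k+1)}{n-k-1}$, where $k$ is the number of parameters and $n$ the sample size. A minimal Python sketch (the log-likelihood values are hypothetical):

```python
def aic(log_lik, k):
    """Standard AIC: 2k - 2 log L (smaller is better)."""
    return 2 * k - 2 * log_lik

def aicc(log_lik, k, n):
    """AIC with the usual finite-sample correction term."""
    return aic(log_lik, k) + 2 * k * (k + 1) / (n - k - 1)

# The correction term vanishes as n grows relative to k, so AICc -> AIC;
# for small n it adds a substantial extra penalty, which is what curbs
# AIC's tendency to overparameterize.
print(aicc(log_lik=-120.0, k=5, n=30))    # small sample: large correction
print(aicc(log_lik=-120.0, k=5, n=3000))  # large sample: nearly plain AIC
```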
385
Is there any reason to prefer the AIC or BIC over the other?
AIC and BIC are information criteria for comparing models. Each tries to balance model fit and parsimony, and each penalizes differently for the number of parameters. AIC is the Akaike Information Criterion; the formula is $$\text{AIC}= 2k - 2\ln(L)$$ where $k$ is the number of parameters and $L$ is the maximum likelihood; with this formula, smaller is better. (I recall that some programs output the opposite, $2\ln(L) - 2k$, but I don't remember the details.) BIC is the Bayesian Information Criterion; the formula is $$\text{BIC} = k \ln(n) - 2\ln(L)$$ and it favors more parsimonious models than AIC. I haven't heard of KIC.
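To tie the formulas to something runnable, here is a self-contained Python sketch (my own illustration; the simulated data, the candidate degrees, and the convention of counting the error variance as a parameter are assumptions of the sketch) that computes AIC and BIC exactly as written above for polynomial fits to Gaussian data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = np.linspace(0, 1, n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=n)   # data generated from a line

def gaussian_loglik(resid):
    """Maximized Gaussian log-likelihood, with sigma^2 estimated by MLE."""
    m = len(resid)
    sigma2 = np.mean(resid ** 2)
    return -0.5 * m * (np.log(2 * np.pi * sigma2) + 1)

for degree in (1, 2, 5):
    coefs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coefs, x)
    ll = gaussian_loglik(resid)
    k = (degree + 1) + 1                 # polynomial coefficients + error variance
    aic = 2 * k - 2 * ll
    bic = k * np.log(n) - 2 * ll
    print(f"degree {degree}: AIC = {aic:7.2f}, BIC = {bic:7.2f}")
```

With both criteria, smaller is better; the $\ln(n)$ factor typically makes BIC prefer the lower-degree fit more strongly than AIC does.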
386
Is there any reason to prefer the AIC or BIC over the other?
Very briefly: AIC approximately minimizes the prediction error and is asymptotically equivalent to leave-1-out cross-validation (LOOCV) (Stone 1977). It is not consistent though, which means that even with a very large amount of data ($n$ going to infinity) and if the true model is among the candidate models, the probability of selecting the true model based on the AIC criterion would not approach 1. Instead, it would retain too many features.

BIC is an approximation to the integrated marginal likelihood $P(D|M,A)$ ($D$ = data, $M$ = model, $A$ = assumptions), which under a flat prior is equivalent to seeking the model that maximizes $P(M|D,A)$. Its advantage is that it is consistent, which means that with a very large amount of data ($n$ going to infinity) and if the true model is among the candidate models, the probability of selecting the true model based on the BIC criterion would approach 1. This would come at a slight cost to prediction performance though if $n$ were small. BIC is also equivalent to leave-$k$-out cross-validation (LKOCV) where $k=n[1-1/(\log(n)-1)]$, with $n$ = sample size (Shao 1997).

There are many different versions of the BIC, though, which come down to making different approximations of the marginal likelihood or assuming different priors. E.g. instead of using a uniform prior over all possible models as in the original BIC, EBIC uses a uniform prior over models of fixed size (Chen & Chen 2008), whereas BICq uses a Bernoulli distribution specifying the prior probability for each parameter to be included.

Note that within the context of L0-penalized GLMs (where you penalize the log-likelihood of your model by $\lambda$ times the number of nonzero coefficients, i.e. the L0 norm of your model coefficients) you can optimize the AIC or BIC objective directly, using $\lambda = 2$ for AIC and $\lambda=\log(n)$ for BIC, which is what is done in the l0ara R package. To me this makes more sense than what is done e.g. in the case of LASSO or elastic net regression in glmnet, where optimizing one objective (LASSO or elastic net regression) is followed by the tuning of the regularization parameter(s) based on some other objective (which e.g. minimizes cross-validation prediction error, AIC or BIC).

Syed (2011) on page 10 notes: "We can also try to gain an intuitive understanding of the asymptotic equivalence by noting that the AIC minimizes the Kullback-Leibler divergence between the approximate model and the true model. The Kullback-Leibler divergence is not a distance measure between distributions, but really a measure of the information loss when the approximate model is used to model the ground reality. Leave-one-out cross validation uses a maximal amount of data for training to make a prediction for one observation. That is, $n-1$ observations as stand-ins for the approximate model relative to the single observation representing "reality". We can think of this as learning the maximal amount of information that can be gained from the data in estimating loss. Given independent and identically distributed observations, performing this over $n$ possible validation sets leads to an asymptotically unbiased estimate."

Note that the LOOCV error can also be calculated analytically from the residuals and the diagonal of the hat matrix, without having to actually carry out any cross validation (see the sketch after the references below). This would always be an alternative to the AIC as an asymptotic approximation of the LOOCV error.

References
Stone M. (1977) An asymptotic equivalence of choice of model by cross-validation and Akaike's criterion. Journal of the Royal Statistical Society, Series B 39, 44-47.
Shao J. (1997) An asymptotic theory for linear model selection. Statistica Sinica 7, 221-242.
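The analytic LOOCV calculation mentioned above rests on a standard identity for linear smoothers: the leave-one-out residual for observation $i$ equals $e_i/(1-h_{ii})$, where $e_i$ is the ordinary residual and $h_{ii}$ is the $i$-th diagonal element of the hat matrix. A minimal Python sketch with simulated data (the design matrix and coefficients are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 40, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # design with intercept
beta = np.array([1.0, 2.0, -1.0])
y = X @ beta + rng.normal(scale=0.5, size=n)

H = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix H = X (X'X)^{-1} X'
e = y - H @ y                           # ordinary residuals
loo_resid = e / (1 - np.diag(H))        # leave-one-out residuals, no refitting needed
loocv_mse = np.mean(loo_resid ** 2)     # equals PRESS / n
print(f"Analytic LOOCV mean squared error: {loocv_mse:.4f}")
```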
387
Is there any reason to prefer the AIC or BIC over the other?
AIC and BIC are both penalized-likelihood criteria. They are usually written in the form $-2\log L + kp$, where $L$ is the likelihood function, $p$ is the number of parameters in the model, and $k$ is 2 for AIC and $\log(n)$ for BIC. AIC is an estimate of a constant plus the relative distance between the unknown true likelihood function of the data and the fitted likelihood function of the model, so that a lower AIC means a model is considered to be closer to the truth. BIC is an estimate of a function of the posterior probability of a model being true, under a certain Bayesian setup, so that a lower BIC means that a model is considered to be more likely to be the true model. Both criteria are based on various assumptions and asymptotic approximations. AIC always has a chance of choosing too big a model, regardless of $n$. BIC has very little chance of choosing too big a model if $n$ is sufficient, but it has a larger chance than AIC, for any given $n$, of choosing too small a model.
References:
https://www.youtube.com/watch?v=75BOMuXBSPI
https://www.methodology.psu.edu/resources/AIC-vs-BIC/
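The claims in the last two sentences can be illustrated with a small simulation. The Python sketch below is my own illustration, not part of the answer: the data-generating model, candidate degrees, and sample sizes are arbitrary choices, and it simply counts how often each criterion prefers an overfitted polynomial over the true, simpler one:

```python
import numpy as np

rng = np.random.default_rng(2)

def loglik(resid):
    """Maximized Gaussian log-likelihood from residuals (MLE variance)."""
    m = len(resid)
    s2 = np.mean(resid ** 2)
    return -0.5 * m * (np.log(2 * np.pi * s2) + 1)

def overfit_rates(n, reps=2000):
    """Fraction of replications in which AIC / BIC pick the bigger model."""
    overfit_aic = overfit_bic = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        y = 1.0 + 2.0 * x + rng.normal(size=n)   # true model: straight line
        scores = {}
        for degree in (1, 3):                    # true model vs. overfitted model
            c = np.polyfit(x, y, degree)
            ll = loglik(y - np.polyval(c, x))
            k = (degree + 1) + 1                 # coefficients + error variance
            scores[degree] = (2 * k - 2 * ll, k * np.log(n) - 2 * ll)
        overfit_aic += scores[3][0] < scores[1][0]
        overfit_bic += scores[3][1] < scores[1][1]
    return overfit_aic / reps, overfit_bic / reps

for n in (50, 500):
    a, b = overfit_rates(n)
    print(f"n = {n}: AIC overfits in {a:.0%} of runs, BIC in {b:.0%}")
```

Typically the AIC overfitting rate stays roughly constant as $n$ grows, while the BIC rate shrinks, in line with BIC's consistency.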
388
Is there any reason to prefer the AIC or BIC over the other?
From Shmueli (2010, Statistical Science), on the difference between explanatory and predictive modelling: A popular predictive metric is the in-sample Akaike Information Criterion (AIC). Akaike derived the AIC from a predictive viewpoint, where the model is not intended to accurately infer the “true distribution”, but rather to predict future data as accurately as possible (see, e.g., Berk, 2008; Konishi and Kitagawa, 2007). Some researchers distinguish between AIC and the Bayesian information criterion (BIC) on this ground. Sober (2002) concluded that AIC measures predictive accuracy while BIC measures goodness of fit: "In a sense, the AIC and the BIC provide estimates of different things; yet, they almost always are thought to be in competition. If the question of which estimator is better is to make sense, we must decide whether the average likelihood of a family [=BIC] or its predictive accuracy [=AIC] is what we want to estimate."
Shmueli, Galit. 2010. “To Explain or to Predict?” Statistical Science 25 (3): 289–310. https://doi.org/10.1214/10-STS330.
389
Famous statistical quotations
All models are wrong, but some are useful. (George E. P. Box) Reference: Box & Draper (1987), Empirical model-building and response surfaces, Wiley, p. 424. Also: G.E.P. Box (1979), "Robustness in the Strategy of Scientific Model Building" in Robustness in Statistics (Launer & Wilkinson eds.), p. 202.
390
Famous statistical quotations
"An approximate answer to the right problem is worth a good deal more than an exact answer to an approximate problem." -- John Tukey
391
Famous statistical quotations
"To call in the statistician after the experiment is done may be no more than asking him to perform a post-mortem examination: he may be able to say what the experiment died of." -- Ronald Fisher (1938) The quotation can be read on page 17 of the article. R. A. Fisher. Presidential Address by Professor R. A. Fisher, Sc.D., F.R.S. Sankhyā: The Indian Journal of Statistics (1933-1960), Vol. 4, No. 1 (1938), pp. 14-17. http://www.jstor.org/stable/40383882
392
Famous statistical quotations
87% of statistics are made up on the spot. -- Unknown (Dilbert.com)
393
Famous statistical quotations
In God we trust. All others must bring data. (W. Edwards Deming)
394
Famous statistical quotations
Statisticians, like artists, have the bad habit of falling in love with their models. -- George Box
395
Famous statistical quotations
Statistics are like bikinis. What they reveal is suggestive, but what they conceal is vital. -- Aaron Levenstein
396
Famous statistical quotations
Prediction is very difficult, especially about the future. -- Niels Bohr
397
Famous statistical quotations
If you torture the data enough, nature will always confess. -- Ronald Coase (quoted from Coase, R. H. 1982. How should economists choose? American Enterprise Institute, Washington, D.C.). I think most who hear this quote misunderstand its profound message against data dredging.
398
Famous statistical quotations
All generalizations are false, including this one. -- Mark Twain
399
Famous statistical quotations
A big computer, a complex algorithm and a long time does not equal science. -- Robert Gentleman
400
Famous statistical quotations
Statistical thinking will one day be as necessary a qualification for efficient citizenship as the ability to read and write. --H.G. Wells