55,901
Why does the Null Hypothesis have to be "equals to" and not "greater than or equal to"?
The reason for using $H_0: \mu = 12$ is that, among the set of values that correspond to $\mu \geq 12$, $\mu = 12$ is the most conservative (also called least favorable) configuration. Let us be more precise about what conservative means here. Say we set a certain value of the observed statistic $\hat{\mu}$ at which we are willing to consider the null hypothesis false (also called the critical value $c$). $c$ should naturally be smaller than 12 to provide evidence against the null. Since $\hat{\mu}$ is just one of many possible realizations of the statistic, there is always some possibility of observing a value of $\hat{\mu}$ at or below $c$ even if $\mu \geq 12$. Luckily, if we know the distribution of the test statistic, we can compute the probability of observing a value of $\hat{\mu}$ that is smaller than or equal to $c$. This probability is called the probability of Type 1 error.

You can compute the probability of Type 1 error for all configurations that correspond to the hypothesis $\mu \geq 12$. In the figure I plot the distributions of the test statistic under two such configurations, $C_1: \mu = 12$ and $C_2: \mu = 13$. I also plot the probabilities of Type 1 error under the critical value $10.36$ for the two hypotheses (the shaded area under the respective curve). It is easy to see that the probability of Type 1 error is always bigger for the configuration $C_1: \mu = 12$ than for any other configuration, such as $C_2$, that also corresponds to the hypothesis $\mu \geq 12$. I assumed normality here, but this result holds for any distribution that the test statistic can take. To sum up, the common practice (which also makes a lot of sense!) is to choose, within the set of configurations that correspond to the null hypothesis, the one that gives you the highest probability of Type 1 error.
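A minimal numerical sketch of this comparison (my own illustration, not part of the original answer): assume the test statistic is normal with standard error 1, which makes $10.36$ roughly the 5% critical value under $\mu = 12$, and compute the Type 1 error under both configurations in R.

# Sketch (assumed setup): Type 1 error at the critical value c = 10.36,
# taking the test statistic to be normal with standard error 1.
c_crit <- 10.36
se     <- 1                      # assumed standard error of the test statistic
mus    <- c(C1 = 12, C2 = 13)    # two configurations consistent with mu >= 12
type1  <- setNames(pnorm(c_crit, mean = mus, sd = se), names(mus))
round(type1, 4)                  # about 0.05 under C1, about 0.004 under C2
# C1 (mu = 12) is least favorable: its Type 1 error is the largest, so controlling
# the error at mu = 12 controls it for every mu >= 12.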
55,902
"Proof?" of Bias/Variance trade-off
First write the statement mathematically: define $\mathcal{F}$ as a function space, $\hat{f}_{n,\mathcal{F}} = \arg\min_{\hat{f}\in\mathcal{F}}\sum_{i=1}^n (y_i - \hat{f}(x_i))^2$ as the optimal regression in $\mathcal{F}$, and $Bias^2(\hat{f}_{n,\mathcal{F}}(x_0)) = [E(\hat{f}_{n,\mathcal{F}}(x_0)) - f(x_0)]^2$ and $Variance(\hat{f}_{n,\mathcal{F}}(x_0)) = Var[\hat{f}_{n,\mathcal{F}}(x_0)]$ as you defined them, where the expectation and variance are taken over the training data. You asked whether a more complex model must have lower bias but greater variance, which can be written as the statement: if $\mathcal{F_1} \subset \mathcal{F_2}$, then $Bias^2(\hat{f}_{n, \mathcal{F}_1}(x_0)) \ge Bias^2(\hat{f}_{n, \mathcal{F}_2}(x_0))$ and $Variance(\hat{f}_{n, \mathcal{F}_1}(x_0)) \le Variance(\hat{f}_{n, \mathcal{F}_2}(x_0))$. Here is a counterexample: assume the true $f(x) = 1$ with $\sigma_\epsilon = 0$, and consider $\mathcal{F_1} = \{ax\}$, $\mathcal{F_2} = \{ax+b\}$, with $n = 2$ training points. It can be computed that $\hat{f}_{2, \mathcal{F}_1}(x_0) = \frac{x_1 + x_2}{x_1^2 + x_2^2}x_0$ and $\hat{f}_{2, \mathcal{F}_2}(x_0) = 1$. The second estimator has zero bias and zero variance, both lower than those of the first. This shows that a more complex model may have both lower bias and lower variance.
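A quick Monte Carlo check of this counterexample (my own sketch; the draw of $x_1, x_2$ from Uniform(1, 2) is an assumption the answer does not specify):

# Sketch: check the counterexample. True f(x) = 1, no noise, n = 2 training points.
set.seed(1)
x0   <- 1.5                       # prediction point
reps <- 1e4
pred_F1 <- replicate(reps, {      # F1 = {a*x}: least squares through the origin
  x <- runif(2, 1, 2); y <- c(1, 1)
  a <- sum(x * y) / sum(x^2)      # a = (x1 + x2) / (x1^2 + x2^2)
  a * x0
})
pred_F2 <- rep(1, reps)           # F2 = {a*x + b}: fits y = 1 exactly everywhere
c(bias2_F1 = (mean(pred_F1) - 1)^2, var_F1 = var(pred_F1),
  bias2_F2 = (mean(pred_F2) - 1)^2, var_F2 = var(pred_F2))
# The richer class F2 has zero bias and zero variance; F1 has both strictly positive.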
"Proof?" of Bias/Variance trade-off
First write the statement mathematically: define $\mathcal{F}$ as a function space, $\hat{f}_{n,\mathcal{F}} = \arg\min_{\hat{f}\in\mathcal{F}}\sum_{i=1}^n (y_i - \hat{f}(x_i))^2$ as the optimal regre
"Proof?" of Bias/Variance trade-off First write the statement mathematically: define $\mathcal{F}$ as a function space, $\hat{f}_{n,\mathcal{F}} = \arg\min_{\hat{f}\in\mathcal{F}}\sum_{i=1}^n (y_i - \hat{f}(x_i))^2$ as the optimal regression in $\mathcal{F}$, $Bias^2(\hat{f}_{n,\mathcal{F}}(x_0)) = [E(\hat{f}_{n,\mathcal{F}}(x_0)) - f(x_0)]^2$ and $Variance(\hat{f}_{n,\mathcal{F}}(x_0)) = Var[\hat{f}_{n,\mathcal{F}}(x_0)]$ as you defined, where the expectation and variance are taken on the training data. You asked that whether a more complex model must have lower bias but greater variance, which can be written as the statement: if $\mathcal{F_1} \subset \mathcal{F_2}$, $Bias^2(\hat{f}_{n, \mathcal{F}_1}(x_0)) \ge Bias^2(\hat{f}_{n, \mathcal{F}_2}(x_0))$ and $Variance(\hat{f}_{n, \mathcal{F}_1}(x_0)) \le Variance^2(\hat{f}_{n, \mathcal{F}_2}(x_0))$. I can find a counterexample as following: assume the true $f(x) = 1$ with $\sigma_\epsilon = 0$, and consider $\mathcal{F_1} = \{ax\}$, $\mathcal{F_2} = \{ax+b\}$, number of training data $n = 2$. It can be computed that $\hat{f}_{2, \mathcal{F}_1}(x_0) = \frac{x_1 + x_2}{x_1^2 + x_2^2}x_0$, $\hat{f}_{2, \mathcal{F}_2}(x_0) = 1$. The second has zero bias and variance, which are both lower than the first. It shows that a more complex model may have both lower bias and variance.
"Proof?" of Bias/Variance trade-off First write the statement mathematically: define $\mathcal{F}$ as a function space, $\hat{f}_{n,\mathcal{F}} = \arg\min_{\hat{f}\in\mathcal{F}}\sum_{i=1}^n (y_i - \hat{f}(x_i))^2$ as the optimal regre
55,903
"Proof?" of Bias/Variance trade-off
The reason is that there is sort of a proof. The initial important paper is Stein, C. (1956), "Inadmissibility of the usual estimator for the mean of a multivariate normal distribution", Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, 1, pp. 197–206. What is important about it is that it shows that you can construct a simple estimator that always stochastically dominates both the maximum likelihood estimator and the minimum variance unbiased estimator for the mean of a multivariate Gaussian. What is interesting is that this estimator can also always be dominated: it does not appear that there is an estimator that you can construct using this method that you cannot then dominate yet again. The point is that, in this case, the biased estimator constructed in this way stochastically dominates the usual estimator.

What is interesting about this method is that it maps to a Bayesian solution with an empirical prior distribution. There are an infinite number of priors, and if you played improper games you could construct dominating priors, although this would violate Bayesian theory. The prior should describe prior knowledge and not improve the value of the sample estimator. In Bayesian theory, the job of the prior is to normalize the data against prior information. Because Stein estimators map to a constrained Bayesian solution, this gives a hint as to what is really going on. Bayesian estimators condition on the information in a sample, whereas non-Bayesian estimators do not. An empirical prior uses a sample statistic that you are not interested in to inform you about a parameter you are interested in. The argument against this is that you are using the same data twice. In the case of a Stein estimator, you condition each individual mean on the grand mean. The argument for this is that you are gaining information about the overall system through the grand mean. The bias is information based and not just random bias. The variance declines because you are including more information rather than less. The danger in this is that a Stein estimator does not require that the means have any relationship, whereas the Bayesian method would require it under many axiom systems. Jaynes also discusses this in his book Probability Theory: The Logic of Science, in his discussion of the problems with using the word "unbiased." I don't think it is all sources of bias that reduce variability in the estimator; rather, it is those that take advantage of sample properties that would normally be missed by standard unbiased methods and that are the equivalent of Bayesian information extraction.

As to model selection, Bayesian model selection methods are intrinsically admissible, which would deal with your problem explicitly. An admissible model stochastically dominates an inadmissible model. The topic you are looking for is called admissibility, and there are two flavors. The Bayesian version is slightly different from the non-Bayesian version because the former operates in the parameter space and the latter in the sample space. Nonetheless, the reason it solves your problem is that it solves a more profound general problem. A statistic is any function of the data, so $\sum\sin(x_i)$ is a statistic. It is a useless statistic for almost any if not all problems, but how do you prove it? There are an infinite number of possible statistics, so how do you prove that almost every one is useless? Indeed, there are an infinite number of unbiased estimators for the mean, yet why use $\bar{x}$?

If there is one admissible estimator, for example, then all others with all other combinations of bias and variability are excluded. In fact you can create a partial ordering of all possible rules. Do note, though, that when there are multiple admissible estimators an admissible estimator is not intrinsically a good estimator, where "good" might be some common-sense understanding of good. Inadmissible estimators are bad, but not all admissible ones are good; admissibility merely reduces the set of estimators you have to evaluate. There are pointless admissible estimators that no one would use. The base paper for admissibility is at https://projecteuclid.org/euclid.aoms/1177730345; it is open access.
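To make the Stein phenomenon concrete, here is a small simulation sketch (mine, not part of the answer): the James-Stein shrinkage estimator for a 10-dimensional normal mean, compared with the usual estimator on total squared error.

# Sketch: James-Stein estimator dominating the usual estimator X for a
# p-dimensional normal mean (one observation per component, unit variances), p >= 3.
set.seed(42)
p     <- 10
theta <- rnorm(p, mean = 2)           # an arbitrary true mean vector (assumed)
reps  <- 1e4
sqerr <- replicate(reps, {
  x  <- rnorm(p, mean = theta)        # one N(theta_i, 1) observation per component
  js <- (1 - (p - 2) / sum(x^2)) * x  # James-Stein shrinkage toward 0
  c(usual = sum((x - theta)^2), stein = sum((js - theta)^2))
})
rowMeans(sqerr)   # the Stein estimator has a smaller average total squared error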
"Proof?" of Bias/Variance trade-off
The reason is that there is sort of a proof. The initial important paper is, "Stein, C. (1956). "Inadmissibility of the usual estimator for the mean of a multivariate distribution". Proceedings of th
"Proof?" of Bias/Variance trade-off The reason is that there is sort of a proof. The initial important paper is, "Stein, C. (1956). "Inadmissibility of the usual estimator for the mean of a multivariate distribution". Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability. 1. pp. 197–206." What is important about it is that it shows that you can construct a simple estimator that always stochastically dominates both the maximum likelihood estimator and minimum variance unbiased estimator for the mean of a multivariate Gaussian. What is interesting is that this estimator can also always be dominated. It does not appear that there is an estimator that you can construct using this method that you cannot then dominate yet again. This proves that the bias that is created for this one case stochastically dominates the usual estimator. What is interesting about this method is that it maps to a Bayesian solution with an empirical prior distribution. There are an infinite number of priors and if you played improper games, then you could construct dominating priors although this would violate Bayesian theory. The prior should describe prior knowledge and not improve the value of the sample estimator. In Bayesian theory, the job of the prior is to normalize the data against prior information. Because Stein estimators map to a constrained Bayesian solution, it gives a hint as to what is really going on. Bayesian estimators condition on the information in a sample, whereas non-Bayesian estimators do not. An empirical prior uses a sample statistic that you are not interested in to inform you about a parameter you are interested in. The argument against this is that you are using the same data twice. In the case of a Stein estimator, you condition each individual mean on the grand mean. The argument for this is that you are gaining information about the overall system through the grand mean. The bias is information based and not just random bias. The variance is declining because you are including more information rather than less. The danger in this is that a Stein estimator does not require the means have any relationship whereas the Bayesian method would require it under many axiom systems. Jaynes also discusses this in his book Probability: The Language of Science, in his discussion on the problems with using the word "unbiased." I don't think it is all sources of bias that reduce variability in the estimator, rather it is those take advantage of sample properties that would normally be missed in standard unbiased methods and which are the equivalent to Bayesian information extraction. As to model selection, Bayesian model selection methods are intrinsically admissible, which would deal with your problem explicitly. An admissible model stochastically dominates an inadmissible model. The topic you are looking for is called admissibility and there are two flavors. The Bayesian version is slightly different from the non-Bayesian version because the former operates in the parameter space and the latter in the sample space. Nonetheless, the reason it solves your problem is that it solves a more profound general problem. A statistic is any function of the data, so that $\sum\sin(x_i)$ is a statistic. It is a useless statistic for almost any if not all problems, but how do you prove it? There are an infinite number of statistics that are possible, but how do you prove that almost every one is useless. 
Indeed, there are an infinite number of unbiased estimators for the mean, yet why use $\bar{x}$? If there is one admissible estimator, for example, then all others with all other combinations of bias and variability are excluded. In fact you can create a partial ordering of all possible rules. Do note though that an admissible estimator when there are multiple admissible estimators is not intrinsically a good estimator, where "good" might be some common sense understanding of good. Inadmissible estimators are bad, but not all admissible ones are good. It merely reduces your set down that you have to evaluate. There are pointless admissible estimators that no one would use. The base paper for admissibility is at https://projecteuclid.org/euclid.aoms/1177730345 It is open source.
"Proof?" of Bias/Variance trade-off The reason is that there is sort of a proof. The initial important paper is, "Stein, C. (1956). "Inadmissibility of the usual estimator for the mean of a multivariate distribution". Proceedings of th
55,904
"Proof?" of Bias/Variance trade-off
One argument could be like this: in any setting where there is uncertainty about the true value of the parameter, error is inevitable. We don't have an algorithm or known way of deriving the value of the parameter exactly, right? This would automatically mean that the quadratic risk has to be greater than zero, which in turn means that it shouldn't be possible to reduce the bias and variance simultaneously to arbitrarily low values; hence the trade-off.
"Proof?" of Bias/Variance trade-off
One argument could be like this - In any setting where there is uncertainty about the true value of the parameter, error is inevitable. We dont have an algorithm or known way of deriving the value of
"Proof?" of Bias/Variance trade-off One argument could be like this - In any setting where there is uncertainty about the true value of the parameter, error is inevitable. We dont have an algorithm or known way of deriving the value of the parameter right? This would automatically mean that the Quadratic risk would have to be greater than zero. This means that it shouldn't be possible to simultaneously reduce the bias and variance to arbitrarily low values and hence the trade off.
"Proof?" of Bias/Variance trade-off One argument could be like this - In any setting where there is uncertainty about the true value of the parameter, error is inevitable. We dont have an algorithm or known way of deriving the value of
55,905
How to derive 2x2 cell counts from contingency table margins and the odds ratio
Write $\rho$ for the odds ratio, $\beta=\Pr(E)$, $\gamma=\Pr(O)$. Four independent equations are $$\cases{a+b=\beta \\ a+c=\gamma \\ a+b+c+d=1 \\ ad = \rho bc.}$$ Adding the first two shows $$b+c = \beta + \gamma - 2a. \tag{1}$$ Multiplying the first two equations and using $(1)$ yields $$bc = \beta\gamma - (\beta+\gamma)a + a^2.\tag{2}$$ Multiplying the third equation by $a$, using the fourth to re-express $ad$, and plugging in $(1)$ and $(2)$ gives $$a^2 + a(\beta+\gamma-2a) + \rho(\beta\gamma - (\beta+\gamma)a + a^2) = a.$$ In more standard form, $a$ is a zero of the quadratic $$(\rho-1)a^2 + [(\beta+\gamma)(1-\rho)-1]a + \rho\beta\gamma.\tag{3}$$ Provided $\beta, \gamma,\rho$ are consistent with some $2\times 2$ table, there will be at least one real zero of $(3)$, easily found with the quadratic formula. For either zero, solutions for the remaining entries are readily found from the first three of the original equations as $$\cases{b=\beta-a\\c=\gamma-a\\d = 1+a-\beta-\gamma.}$$ There will be at most one valid solution for $a$, determined by requiring all four cell values to be non-negative.

Here is an R implementation of the solution as the function f, along with a test using randomly generated tables. The test generates a random table, reconstructs it from its margins and odds ratio with f, and computes a measure of the difference between the two. By wrapping the test in replicate, I have run it 10,000 times. The final output gives the largest difference found: up to floating point error it equals zero, demonstrating the correctness of this approach.

f <- function(beta, gamma, rho, eps = 1e-15) {
  # Coefficients of the quadratic (3) in a
  a  <- rho - 1
  b  <- (beta + gamma) * (1 - rho) - 1
  c_ <- rho * beta * gamma
  if (abs(a) < eps) {                 # rho = 1: the quadratic degenerates to a linear equation
    z <- -c_ / b
  } else {
    d <- b^2 - 4 * a * c_
    if (d < eps * eps) s <- 0 else s <- c(-1, 1)
    z <- (-b + s * sqrt(max(0, d))) / (2 * a)
  }
  # Rebuild the candidate 2x2 tables from each root and keep the non-negative one(s)
  y <- vapply(z, function(a) zapsmall(matrix(c(a, gamma - a, beta - a, 1 + a - beta - gamma), 2, 2)),
              matrix(0.0, 2, 2))
  i <- apply(y, 3, function(u) all(u >= 0))
  return(y[, , i])
}

set.seed(17)
sim <- replicate(1e4, {
  # Generate a random positive 2x2 table of probabilities
  while (TRUE) {
    x <- matrix(round(rexp(4), 2), 2, 2)
    if (all(rowSums(x) > 0) && all(colSums(x) > 0) && x[1, 2] * x[2, 1] > 0) break
  }
  x <- x / sum(x)
  beta  <- rowSums(x)[1]
  gamma <- colSums(x)[1]
  rho   <- x[1, 1] * x[2, 2] / (x[1, 2] * x[2, 1])
  # Reconstruct the table from its margins and odds ratio and compare
  y <- f(beta, gamma, rho)
  delta <- try(zapsmall(c(1, sqrt(crossprod(as.vector(x - y)))))[2])
  if ("try-error" %in% class(delta)) cat("Error processing ", x, "\n")
  delta
})
max(sim)
55,906
Inequality involving joint cumulative and marginal distributions
Hint for the right hand side: $F_X(x) = P(X < x) \ge P(X < x \wedge Y < y) = F_{X, Y}(x, y)$. Hint for the left hand side: the following figure shows the $XY$ plane, with the horizontal and vertical lines intersecting at $(x, y)$. The region $C$ is the area where $X < x \wedge Y < y$; you need to integrate over it to obtain $F_{X, Y}(x, y)$. Over which regions do you need to integrate to obtain $F_X(x)$, $F_Y(y)$, and $1$?
55,907
Inequality involving joint cumulative and marginal distributions
It is just the elementary inequality $$P(A)+P(B)-1\le P(A\cap B)\le \sqrt{P(A)P(B)}$$ for the events $A=\{X\le x\}$ and $B=\{Y\le y\}$. There is no need to go into distributions. Let $I_A$ be the indicator of $A$, i.e. $I_A=1$ if $A$ occurs and $I_A=0$ if $A$ does not occur. Then by the Cauchy-Schwarz inequality, $$\left(E\left[I_AI_B\right]\right)^2 \le E\left[I_A^2\right]E\left[I_B^2\right] \implies (P(A\cap B))^2\le P(A)P(B).$$ And $P(A^c\cup B^c)\le P(A^c)+P(B^c)=1-P(A)+1-P(B)$ implies $$P(A\cap B)=1-P(A^c\cup B^c)\ge P(A)+P(B)-1.$$
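A quick numerical sanity check of these bounds (my own sketch; the bivariate normal with correlation 0.5 and the thresholds are arbitrary choices):

# Sketch: check P(A) + P(B) - 1 <= P(A and B) <= sqrt(P(A) P(B)) by simulation,
# with A = {X <= x}, B = {Y <= y} for correlated standard normals (rho = 0.5).
set.seed(7)
n  <- 1e5
x0 <- 0.3; y0 <- -0.2                                       # arbitrary thresholds
z1 <- rnorm(n)
z2 <- 0.5 * z1 + sqrt(1 - 0.5^2) * rnorm(n)                 # cor(z1, z2) = 0.5
pA  <- mean(z1 <= x0); pB <- mean(z2 <= y0); pAB <- mean(z1 <= x0 & z2 <= y0)
c(lower = pA + pB - 1, joint = pAB, upper = sqrt(pA * pB))  # lower <= joint <= upper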
55,908
Error bars in logarithmic scale
The error bar appears to be shorter because the same range requires less space higher up on the graph (where the divisions are closer together). Divisions on the logarithmic scale squeeze together at a rate of $\frac{1}{y}$ (the gradient of the logarithm), which shrinks the apparent length of the error bars (which in your case grow with $\sqrt{y}$). So the error doesn't get smaller, but it is represented by a shorter bar because everything is closer together high up in the log plot. The distance between 1 and 2 is the same as the distance between 100 and 200. That's just how log plots work.
55,909
Error bars in logarithmic scale
If you start with $x$ counts and you want to display $x \pm \sqrt{x}$, then your error bar is of length $2\sqrt{x}$. If instead your count is $kx$ and you want to display $kx \pm \sqrt{kx}$, then your error bar is of length $2\sqrt{kx}$. So your error bar on the larger count is longer than the original error bar by a factor of $\sqrt{k}$, even though it is a smaller proportion of the new larger count, again by a factor of $\sqrt{k}$. And it is that smaller proportion which is being shown on your log scale. Your original error bar would have length (proportional to) $\log_e( x+\sqrt{x})-\log_e( x-\sqrt{x}) \approx \frac{2}{\sqrt{x}}$ on the log scale, while for the larger count the length is (proportional to) about $\frac{2}{\sqrt{kx}}$, now actually shorter by a factor of about $\sqrt{k}$.
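The effect is easy to see by drawing it; here is a small base R sketch (mine, with made-up counts) of $\pm\sqrt{\text{count}}$ error bars on a log axis:

# Sketch: Poisson-style error bars (count +/- sqrt(count)) on a log10 y-axis.
# The absolute bars grow with the count, but on the log axis they shrink like 1/sqrt(count).
counts <- c(10, 100, 1000, 10000)
xpos   <- seq_along(counts)
plot(xpos, counts, log = "y", pch = 19, xlab = "bin", ylab = "count (log scale)",
     ylim = c(5, 2e4))
arrows(xpos, counts - sqrt(counts), xpos, counts + sqrt(counts),
       angle = 90, code = 3, length = 0.05)            # draw the error bars
# Length of each bar as drawn on the log axis, relative to the first bar:
bar_len <- log10((counts + sqrt(counts)) / (counts - sqrt(counts)))
round(bar_len / bar_len[1], 3)                         # roughly 1, 1/sqrt(10), 1/10, ...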
55,910
How do errors in variables affect the R2?
Yes, it's possible, but it requires a raft of simplifying assumptions that are not always likely to hold in practice. Let's assume the following model $$y=\alpha+\beta x^*+\epsilon$$ As usual, we assume $E[x^*\epsilon]=0$. Since you didn't mention measurement error for $y$, I won't include it. However, we do have measurement error on the predictor, i.e., we don't observe $x^*$ but $x$, which is related to $x^*$ by the model $$x=x^*+\eta$$ Now we assume that $x^*$ and $\eta$ are independent, that $E[\eta]=0$ and that $E[\eta^2]=\sigma^2_{\eta}$ is known. Note that all this is quite restrictive! For most measurement instruments, the RMSE isn't constant across the whole scale (i.e., for each value of the measured variable inside the non-saturated range), and even if it were, the RMSE wouldn't be known exactly. However, if the instrument has been calibrated multiple times by a certified laboratory, we may assume that $\sigma^2_{\eta}$ is reasonably well known. Also, there are instruments such that, when taking measurements of $x^*$ sufficiently far from the ends of the scale, the RMSE stays the same whatever the value of $x^*$ in this range. Thus, there are (a few) real situations where these hypotheses are reasonable. Finally, we assume that $\epsilon$ and $\eta$ are independent. Note also that this model leads to $$y=\alpha+\beta (x-\eta)+\epsilon=\alpha+\beta x+\epsilon-\beta\eta=\alpha+\beta x+u$$

Enough of the background: when we observe $\{(x_i,y_i)\}_{i=1,\dots,N}$, the estimate of $R^2$ is $$\hat{R}^2=1-\frac{\sum_{i=1}^N(\hat{\alpha}+\hat{\beta} x_i-y_i)^2}{\sum_{i=1}^N(y_i-\bar{y})^2}$$ Now we're in for a bit of calculus: $$\text{plim}\frac{N}{\sum_{i=1}^N(y_i-\bar{y})^2}=\frac{1}{\sigma^2_y}\tag{1}$$ $$\text{plim}\hat{\beta}=\text{plim}\frac{\sum_{i=1}^N(x_i-\bar{x})(y_i-\bar{y})}{\sum_{i=1}^N(x_i-\bar{x})^2}=\frac{\sigma_{xy}}{\sigma^2_x}=\frac{\text{Cov}[x^*+\eta,\ \alpha+\beta x^* +\epsilon]}{\sigma^2_x}=\beta\frac{\sigma^2_{x^*}}{\sigma^2_x}+\beta\frac{\text{Cov}[\eta, x^*]}{\sigma^2_x}+\frac{\text{Cov}[x^*,\epsilon]}{\sigma^2_x}+\frac{\text{Cov}[\eta,\epsilon]}{\sigma^2_x}=\beta\frac{\sigma^2_*}{\sigma^2_x}\tag{2}$$ where in the last step we used $E[x^*\epsilon]=0$, the fact that $\text{Cov}[\eta, x^*]=0$ (since $x^*$ and $\eta$ are independent and $E[\eta]=0$), and the hypothesis that $\eta$ and $\epsilon$ are independent with zero mean. We also used the short-hand notation $\sigma^2_*$ for $\sigma^2_{x^*}$. Finally, since $$\sigma^2_x=\text{Var}[x^*+\eta]=\sigma^2_*+2\text{Cov}[\eta, x^*]+\sigma^2_{\eta}=\sigma^2_*+\sigma^2_{\eta}$$ we get the well-known expression $$\text{plim}\hat{\beta}=\beta\frac{\sigma^2_*}{\sigma^2_*+\sigma^2_{\eta}}$$

Using the fact that $\hat{\alpha}=\bar{y}-\hat{\beta}\bar{x}$, we can simplify the numerator in the expression of $\hat{R}^2$: $$\sum_{i=1}^N(\hat{\alpha}+\hat{\beta} x_i-y_i)^2=\sum_{i=1}^N(\bar{y}-\hat{\beta}\bar{x}+\hat{\beta} x_i-y_i)^2=\sum_{i=1}^N(\hat{\beta}(x_i-\bar{x}) -(y_i-\bar{y}))^2$$ Now, we can use $(2)$ to compute the following probability limit: $$\text{plim}\frac{\sum_{i=1}^N(\hat{\beta}(x_i-\bar{x})-(y_i-\bar{y}))^2}{N}=\text{plim}\frac{\sum_{i=1}^N \hat{\beta}^2(x_i-\bar{x})^2+(y_i-\bar{y})^2-2\hat{\beta}(x_i-\bar{x})(y_i-\bar{y})}{N}=\beta^2\frac{\sigma^4_*}{\sigma^4_x}\sigma^2_x+\sigma^2_y-2\beta\frac{\sigma^2_*}{\sigma^2_x}\sigma_{xy}$$ In $(2)$ we've already seen that $\sigma_{xy}=\beta\sigma^2_*$, thus $$\beta^2\frac{\sigma^4_*}{\sigma^4_x}\sigma^2_x+\sigma^2_y-2\beta\frac{\sigma^2_*}{\sigma^2_x}\sigma_{xy}=\beta^2\frac{\sigma^4_*}{\sigma^2_x}+\sigma^2_y-2\beta^2\frac{\sigma^4_*}{\sigma^2_x}=\sigma^2_y-\beta^2\frac{\sigma^4_*}{\sigma^2_x}$$ Putting this all together: $$\text{plim}\hat{R}^2=1-\frac{\sigma^2_y}{\sigma^2_y}+\beta^2\frac{\sigma^4_*}{\sigma^2_x\sigma^2_y}=\beta^2\frac{\sigma^4_*\sigma^2_*}{\sigma^2_x\sigma^2_*\sigma^2_y}=R^2\frac{\sigma^2_*}{\sigma^2_x}$$ because, only for simple linear regression, $$R^2=r^2=\beta^2\frac{\sigma^4_*}{\sigma^2_*\sigma^2_y}$$ Thus we (finally!!) conclude that $$\text{plim}\hat{R}^2=R^2\frac{\sigma^2_x-\sigma^2_\eta}{\sigma^2_x}$$ which is the desired relationship between the R squared of our regression with errors in the predictor and the true $R^2$. Note that, as is well known, in the classic errors-in-variables model the OLS estimator of $R^2$ converges to a limit which is always smaller than the true $R^2$. Finally, if an estimate of $\sigma^2_{\eta}$ is known (from independent calibration of the instrument: you cannot get it from the same data used for the regression), then, since the data $\{(x_i,y_i)\}_{i=1,\dots,N}$ give estimates of ${\sigma^2_x}$ and ${\sigma^2_y}$, we can correct both $\hat{\beta}$ and $\hat{R}^2$ for the attenuation.
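A simulation sketch (mine, with arbitrarily chosen parameter values) illustrating the attenuation of both $\hat{\beta}$ and $\hat{R}^2$:

# Sketch: attenuation under measurement error in the predictor.
# True model y = 1 + 2 x* + eps; we observe x = x* + eta. All parameter values are arbitrary.
set.seed(123)
n <- 1e5
sigma_star <- 2; sigma_eta <- 1; sigma_eps <- 1
x_star <- rnorm(n, sd = sigma_star)
x      <- x_star + rnorm(n, sd = sigma_eta)
y      <- 1 + 2 * x_star + rnorm(n, sd = sigma_eps)

fit <- lm(y ~ x)
atten   <- sigma_star^2 / (sigma_star^2 + sigma_eta^2)              # sigma*^2 / sigma_x^2
true_R2 <- 2^2 * sigma_star^2 / (2^2 * sigma_star^2 + sigma_eps^2)  # R^2 without measurement error
c(beta_hat = unname(coef(fit)[2]), beta_times_atten = 2 * atten,
  R2_hat   = summary(fit)$r.squared, R2_times_atten = true_R2 * atten)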
55,911
Why do Deep Learning libraries force the cost function to output a scalar?
All machine learning is about minimizing the cost of some model. The most elementary thing needed when you try to find a minimum value is the ability to compare two values, and you can do that only with scalars. For example, given the two vectors [0,2] and [2,2], how would you compare those tuples? You have to define some norm function: Euclidean, max, Manhattan, or your own fancy one. Whatever you use, it must output a scalar value.
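For instance (a small sketch of my own), the three norms mentioned can be computed and compared directly:

# Sketch: comparing two vectors requires collapsing each one to a scalar via some norm.
v1 <- c(0, 2); v2 <- c(2, 2)
norms <- function(v) c(euclidean = sqrt(sum(v^2)),
                       max       = max(abs(v)),
                       manhattan = sum(abs(v)))
rbind(v1 = norms(v1), v2 = norms(v2))  # only after this reduction can we say which is smaller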
55,912
Why do Deep Learning libraries force the cost function to output a scalar?
"3 output neurons"
The loss function in most applications is chosen such that it calculates a combined loss for these three neurons (e.g. cross-entropy loss). This defines the trade-off between matching the target value of neuron 1 better at the expense of matching the target values of the other two neurons worse. You can of course define a different loss function, e.g. one that reflects that matching the target value is twice as important for neuron 1 as for the other two output neurons, etc.

"For example, T.mean(T.pow(T-Y, 2)). Why is this?"
This is the average error over your (training) sample. Usually all samples of the training set are treated equally, so matching the network outputs to the target values for sample 1 is as important as matching the target values for sample 2, etc. There are situations where some elements of the training set are treated as more important than others; this can be modeled by adding weight terms to the loss function.

"In my example, shouldn't Theano backprop M, not a scalar value?"
This would correspond to doing backpropagation M times for a single sample, where the network weights are adjusted M times, potentially in opposite directions at each of the M backpropagation operations. Plain stochastic gradient descent works like this, for example. What you usually want to minimize (i.e. the original loss function) is the average over the entire training set, so you would get one weight update after calculating the loss over the entire training set. It turns out that in practice you can get close to the optimum faster by using stochastic gradient descent. On the other hand, modern computing hardware (CPUs or GPUs) is vectorized (i.e. it supports running the same code on multiple values), so minibatching allows taking more than one sample into account per update at a similar execution speed.

"When I'm using minibatches, are these libraries just backpropping scalars?"
They backpropagate the derivatives with respect to the average loss over the entire minibatch (which is a scalar value). If you have a network with three output neurons contributing to a single loss function, each neuron gets its share of the correction according to how much it contributed to the loss, averaged over the M rows in the minibatch.
55,913
Why do Deep Learning libraries force the cost function to output a scalar?
You ask in a comment:

"Let's say we backprop the matrix. If we do that we will get 32 gradient updates for each parameter in the neural net. For each parameter, can't we just take the average of the 32 gradients and use that in gradient descent?"

Well, suppose you have 1,000,000 parameters. You're suggesting that we calculate 32,000,000 partial derivatives and then average them, 32 at a time, in order to get 1,000,000 partial derivatives that we can then apply to the parameters. If the cost is a scalar, on the other hand, then we only need to calculate 1,000,000 partial derivatives in the first place, and then we can apply them to the parameters immediately, without needing to do any averaging first. So, you're asking if we can't "just" do 32 times as much work. And the answer is yes, we can... but it's 32 times as much work.

"Isn't doing that different than just taking a mean of the output activations and backpropping the scalar?"

No, it's the same. Suppose you have several loss functions $L_1$, $L_2$, $L_3$, $L_4$ that you want to optimize by changing some parameter $p$. Then the average of the derivatives, $$\frac14 \left (\frac{\partial L_1}{\partial p} + \frac{\partial L_2}{\partial p} + \frac{\partial L_3}{\partial p} + \frac{\partial L_4}{\partial p} \right),$$ is simply the derivative of the average, $$\frac14 \cdot \frac{\partial}{\partial p} (L_1 + L_2 + L_3 + L_4).$$ So, taking the average first and then doing backpropagation is equivalent to first doing backpropagation and then taking the average. It's also much faster.
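A tiny numerical check of this identity (my own sketch, using squared losses of a single parameter $p$):

# Sketch: for L_i(p) = (p - t_i)^2, the average of the four gradients equals the
# gradient of the average loss, so both recipes give the same parameter update.
targets <- c(1, 2, 3, 5)                      # arbitrary per-sample target values
p       <- 0.5                                # current parameter value
grad_each    <- 2 * (p - targets)             # dL_i/dp for each sample
avg_of_grads <- mean(grad_each)
grad_of_avg  <- 2 * (p - mean(targets))       # d/dp of mean((p - t_i)^2)
c(avg_of_grads = avg_of_grads, grad_of_avg = grad_of_avg)   # identical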
55,914
Consistency of estimators in simple linear regression
We'll look at $\hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}$ first. The law of large numbers says that $\bar{y}$ converges to $\text{E}(y) = \beta_0 + \beta_1 \text{E}(x)$, and if $\hat{\beta}_1 \to \beta_1$ then $\hat{\beta}_1 \bar{x}$ converges to $\beta_1 \text{E}(x)$. This means $\hat{\beta}_0$ will be consistent if $\hat{\beta}_1$ is. Now looking at $\hat{\beta}_1$ and assuming all variances and covariances are finite and well-defined, we have \begin{align} \hat{\beta}_1 &= \frac{\sum_{i=1}^{n} (y_i - \bar{y})(x_i - \bar{x})}{\sum_{i=1}^{n}(x_i - \bar{x})^2} \\ &\to \frac{\text{Cov}(y, x)}{\text{Var}(x)} \\ &= \frac{\text{Cov}(\beta_0 + \beta_1 x + \epsilon, x)}{\text{Var}(x)} \\ &= \beta_1 + \frac{\text{Cov}(\epsilon, x)}{\text{Var}(x)} \end{align} which equals $\beta_1$ so long as $\text{Cov}(\epsilon, x) = 0$.

To prove the stronger claim that the estimators are consistent in mean square we can start with the variance-covariance matrix of $(\hat{\beta}_0, \hat{\beta}_1)$, which equals $\sigma^2 (X^T X)^{-1}$. Here $X$ is the data matrix, and for simple linear regression this is just $[1 ; x]$, where $1$ is a vector of ones and $x = (x_1, x_2, \ldots, x_n)$ is the vector of predictor values. If we go through the linear algebra we get $$ (X^T X)^{-1} = \begin{bmatrix} n^{-1} \sum_{i=1}^{n} x_i^2 & - \bar{x} \\ - \bar{x} & 1 \end{bmatrix} \frac{1}{\sum_{i=1}^{n} x_i^2 - n \bar{x}^2} $$ and the denominator $\sum_{i=1}^{n} x_i^2 - n \bar{x}^2$ is nothing but the sum of squares for $x$, that is, $\sum_{i=1}^{n} (x_i - \bar{x})^2$. This means that as long as $\sum_{i=1}^{n} (x_i - \bar{x})^2 \to \infty$ as $n \to \infty$, every element of this matrix goes to zero, including the diagonal elements, which are the variances of the two estimators.
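A simulation sketch (mine, with arbitrary true values $\beta_0 = 1$ and $\beta_1 = 2$) showing both estimates settling down as $n$ grows:

# Sketch: OLS estimates approaching the true values beta_0 = 1, beta_1 = 2 as n grows.
set.seed(2024)
estimate <- function(n) {
  x <- rnorm(n, mean = 3, sd = 2)    # any design with a growing sum of squares works
  y <- 1 + 2 * x + rnorm(n)          # errors independent of x, so Cov(eps, x) = 0
  coef(lm(y ~ x))
}
sapply(c(50, 500, 5000, 50000), estimate)   # each column is (intercept, slope); columns -> (1, 2)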
55,915
Consistency of estimators in simple linear regression
Consider the classic situation in which you assume that the true model is of the form $y = \beta_0 + \beta_1x + u$, with $E[u] = 0$. In this case, the OLS estimators will asymptotically converge to $\beta_0$ and $\beta_1$ provided that $E[xu] = 0$. If you mean consistency in the sense of convergence to the parameters of the best linear approximation to $E[y|x]$, then all you need is that the data you get, $(y_i,x_i)$, are iid.
55,916
How is a cohort study defined?
A cohort study is an observational study wherein each study participant is observed/measured on the dependent variable at two or more points in time. Any explanatory variable(s) may or may not also be observed/measured at each time of observation/measurement. The observation of the dependent variable across time allows measurement of its rate of change (over time) in individual participants, and estimation of the average rate of change in the target population. The analytic question of interest in a cohort study is: does the estimated rate of change differ across values of, or rates of change of, an explanatory variable?

Some points on nomenclature:

The dependent variable is typically referred to as the outcome in epidemiology. The explanatory variable is typically referred to as the exposure in epidemiology.

A defined cohort means that the participants share some value of the dependent variable, the explanatory variable, or both. For example, an exposure-defined cohort might be one where all participants have no exposure when first observed/measured; some of them will become exposed at different levels across the study. An outcome-defined cohort might be one where all participants do not have the outcome when first observed/measured. Cohort studies may be defined on both the dependent variable and the explanatory variable.

When the dependent variable takes only two values (0 or 1), and the cohort is defined on the dependent variable by all participants having 0 at the start of the study, the rate of change is called an incidence rate.

Observational study means that exposure and changes in exposure are not randomly assigned by the researchers in the sense that they are in randomized controlled trials and other experimental designs.
55,917
How is a cohort study defined?
Here is an example of a cohort in the context of a website-based business (but it generalizes to many other kinds). Imagine you sell a product online. You care about how many people convert (i.e. subscribe) to your website every day. Say in month 1, 100 people visited your website and 50 people subscribed. Then in month 2, 150 people visited your website and 100 people subscribed. Then in month 3, 125 people visited your website and 50 people subscribed. Great. Your overall conversion rate is $\frac{50+100+50}{100+150+125} \approx 53\%$. But that treats people who signed up in the first month the same as people who signed up in the third month. Does that make sense to you? Maybe people in the first month have since canceled their subscription. Maybe between month 1 and month 3 you've changed your website and that impacts your conversion rate. Wouldn't you like to know that? To answer these kinds of questions, we split up our users into cohorts. One natural way (in the online business) is to split by time of visit to the site. In this case, we'd have 3 cohorts: month 1 (conversion rate 50%), month 2 (about 67%), and month 3 (40%). If I track these groups of users separately, I can get a better sense of how changes in my product affect my users. For example, maybe people in month 1 all signed up because they saw a great advertisement. Then in month 2 I removed that ad, and my month 2 cohort had a poor conversion rate. Cohort analysis plays a key role in understanding your business.
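For concreteness, the same bookkeeping in R (the data frame and column names are mine; the counts are the ones from the example above):

visits <- data.frame(month      = 1:3,
                     visitors   = c(100, 150, 125),
                     subscribed = c(50, 100, 50))
sum(visits$subscribed) / sum(visits$visitors)      # pooled rate, about 0.53
visits$conversion <- visits$subscribed / visits$visitors
visits                                             # cohort rates: 0.50, 0.67, 0.40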
55,918
How is a cohort study defined?
Let's first look at the term cohort. Originally it meant a group of people that were born during a specific period, in a specific place, and identified by period of birth. This definition made it useful to determine things like death rates as people in the cohort aged over time. Cohort is understood much more broadly though, to mean any group of people that are defined by a specific time period and place. A cohort study is a type of longitudinal study, examining a cohort retrospectively or prospectively. Its defining features start by identifying a cohort of interest (usually lots of people), over a long period of time (usually years). A retrospective study design would examine a cohort from the past, for example, all of the female employees that worked at Chemical Company X during the 1970s. A prospective study would examine a cohort defined in the present and followed into the future, such as everyone born in Canada this year. At the time of cohort definition, the cohort is assessed for exposure to a risk factor (e.g., working with a mutagenic chemical) and as they are followed in time they are monitored for an outcome of interest (e.g., developing cancer). A comparison can then be made between the sub-groups that were exposed and non-exposed (e.g., cancer incidence rates).
55,919
Probability of at least $k$ Bernoulli successes with varying probabilities conditional on an event
The probability of mating at any given year is $\small \Pr(\text{mate})=m$, and the probability of offspring given a mate has been found is $\small \Pr( \text{single offspring} \vert \text{mate} ) =o$, and it does not change after the mate is found. The probabilities of getting $k$ offspring after $x$ years depend on the year at which the mate came into the picture. If we label the beginning of the "experiment" as year zero, $\small \text{Yr}=0$, the probability of having a single offspring at year $0$ is simply $\small \Pr(\text{mate } \cap \text{ offspring})= \Pr(\text{offspring}\vert \text{mate})\Pr(\text{mate})=o\times m.$ At year $1$ (one year later) the probability of a single offspring is simply going to be $\small\Pr(\text{offspring}\vert \text{mate})=o$: the participation of a mate is now guaranteed. So we just need to treat every year after the appearance of the mate as a binomial, focusing on the year the mate is "realized": the probability that year $1$ is the first year with a mate is calculated based on the probability of absence of a mate at year zero, $1-m$, as $\small \Pr_{\text{Yr=1}}(\text{mate})=(1-m)\,m$. At year two, $\small \Pr_{\text{Yr=2}}(\text{mate})=(1-m)^2\,m$; and in general, $\small \Pr_{\text{Yr=yr}}(\text{mate})=(1-m)^{\text{yr}}\,m.$ So we have that the probability of having $k$ offspring by year $x$ and that the mate appears at year $\small \text{Yr=yr}$ is: \begin{align} \small \Pr(\text{Off= }k\text{ by time } x \cap \text{mate @ Yr=yr})&=\small \Pr(\text{Off= }k\,\vert \,\text{mate @ Yr=yr})\,\Pr(\text{mate @ Yr=yr})\\ &=\small\left({x-\text{yr}+1\choose k}\,o^k\,(1-o)^{x-\text{yr}+1 -k}\right)\, m(1-m)^{\text{yr}}\tag{*} \end{align} Now, the probability of getting $k$ offspring by year $x$ can happen in any of these scenarios where the mate appears in some year or other, so: $$\small \Pr(\text{Off}=k\text{ by time } x)=\sum_{\text{Yr=0}}^{x-k+1} m(1-m)^{\text{yr}}{x-\text{yr}+1\choose k}\,o^k\,(1-o)^{x-\text{yr}+1 -k}$$ Notice that the mate cannot appear any later than $x -k + 1$ years if we need $k$ offspring, explaining the upper limit of the sum. Finally, your question is about $\text{at least }k$, inviting another summation, this time over the possible numbers of offspring $\text{K}$: $$\small\Pr(\text{Off}\geq k\text{ by time } x)= \sum_{\text{Yr=0}}^{x-k+1}\,\, \sum_{\text{K}=k}^{x-\text{yr}+1} \,\, m(1-m)^{\text{yr}}{x-\text{yr}+1\choose \text{K}}\,o^{\text{K}}\,(1-o)^{x-\text{yr}+1 -\text{K}}$$ This answers the first part of the question in the OP: Is there an analytical solution to this problem when f,g are constants? However, there was a second part, which I tried to resolve slop... quickly... leading to too much poetic license, so poorly tolerated in math circles. So after being called on it, that part is now erased, and I'm trying again. First the second part of this question: What is a suitable numerical method for solving this problem when f,g are arbitrary functions? Specifically (see initial comments) the functions Alexandre has in mind are linearly changing probabilities over time: $\color{blue}{m_t} = f(\text{yr})= \text{max} \{0, 1 - a\times\text{yr}\}$ and $\color{blue}{o_t} = g(\text{yr})= \text{max} \{0, 1 - b\times \text{yr}\}.$ The problem then becomes apparent in, for example, the expression above for $\small \Pr(\text{Off= }k\,\vert \,\text{mate @ Yr=yr})$: treated as a binomial, it prompts us to select any combination of $k$ successes (offspring), and treats each one with fixed probabilities of success and failure. Unfortunately, the summations over years that follow don't correct for this (my oversight).
So back to the drawing board... The hope is that generalizing $\text{Eq. } *$ will do the trick for the rest of the post. This equation is the multiplication of two probabilities. Starting with the least challenging term: $$\Pr(\text{mate @ Yr=yr})= m_{\small t=\text{yr}}\,\prod_{t=0}^{\text{yr}-1}(1-m_t)$$ This is probably clear, although technically, it could be labeled as a geometric distribution with varying probability values. The challenge, then, is in $\Pr(\text{Off= }k\,\vert \,\text{mate @ Yr=yr})$. This turns out to be a Poisson binomial distribution, and adapting the notation to our case would result in a beautiful expression: $$\Pr(\text{Off= }k\text{ by time } x\,\vert \,\text{mate @ Yr=yr})=\sum_{A\in F_k}\,\,\prod_{i\in A}\,o_{t\in i}\,\prod_{j\in A^c}(1-o_{t\in j})$$ As in the Wikipedia link, $F_k$ is the set of all subsets of $k$ integers selected from $\{\text{yr}, \text{yr}+1, \cdots, x\}.$ And just when I was searching for a link to The Scream imagining the process of actually inserting this thing into the other two equations, I realized that the actual challenge would be the numerical calculations, at which point @Zen came to save the day, together with @wolfies. So just for fun, the final equation formulating the probability of at least $k$ offspring by a given age ($x$, capped at 10 years in the code formulations below), summing over the year $\text{yr}$ at which the mate is found and over the possible numbers of offspring $\text{K}$, would now look something along the lines of: $$\Pr(\text{Off}\geq k\text{ by time } x)= \sum_{\text{Yr=0}}^{x-k+1}\,\, \sum_{\text{K}=k}^{x-\text{yr}+1} \,\,\small \left( m_{\small t=\text{yr}}\,\prod_{t=0}^{\text{yr}-1}(1-m_t)\right) \left(\sum_{A\in F_{\text{K}}}\,\,\prod_{i\in A}\,o_{t\in i}\,\prod_{j\in A^c}(1-o_{t\in j})\right)$$ Finally, what would this look like in R:

Yr = 1:11    # Years zero to 10. x corresponds to 10, which is Yr[11].
a = .055     # Arbitrarily chosen slope of the function for p(mating at time = yr)
(m_t = ifelse((1 - a * Yr) > 0, 1 - a * Yr, 0))  # ifelse to reject potential negative prob. values
# [1] 0.945 0.890 0.835 0.780 0.725 0.670 0.615 0.560 0.505 0.450 0.395
b = .09      # Doing the same for the slope to calculate P(offspring | mate)
(o_t = ifelse((1 - b * Yr) > 0, 1 - b * Yr, 0))  # Same trick to avoid negative values
# [1] 0.91 0.82 0.73 0.64 0.55 0.46 0.37 0.28 0.19 0.10 0.01

Prob_Off_k_and_mate_yr = function(k = 1:11, yr = 0:10){  # Probability Offspring = k AND mating at Yr = yr
  # k needs to be between yr and x
  if(k > (length(Yr) - yr + 1)){stop('Number of offspring selected is impossible')}else{
    # Probability to mate at year Yr = yr:
    P_mate_at_yr = ifelse(yr == 0, m_t[1], prod(1 - (m_t)[1:yr]) * m_t[yr + 1])
    # Probability of Offspring = k having mated at Yr = yr
    S = seq(yr, length(Yr) - 1)   # All the years remaining to choose from, including the mating year
    A = combn(S, k)               # All possible combinations of k years from S
    P_off_k_having_mated_yr = 0   # Starting an empty accumulator
    for (i in 1:ncol(A)) {        # For all subsets of k elements from the years "available"
      P_off_k_having_mated_yr = P_off_k_having_mated_yr +
        prod(o_t[A[, i] + 1], 1 - o_t[setdiff(S, A[, i]) + 1])  # Poisson binomial term
    }
    Prob_Off_k_and_mate_yr = P_mate_at_yr * P_off_k_having_mated_yr
    return(Prob_Off_k_and_mate_yr)
  }
}

# Trying the function for Offspring = 6 and mating at year 1:
k = 6
yr = 1
Prob_Off_k_and_mate_yr(k, yr)
# [1] 0.005674715

# What about the probability of Offspring = 6
# regardless of the mating year (summation over years):
Prob_Off_k = 0
for(i in 0:(length(Yr) - k)){
  Prob_Off_k = Prob_Off_k + Prob_Off_k_and_mate_yr(k, i)
}
Prob_Off_k
# [1] 0.2238927

# Finally, the actual question in the OP: AT LEAST 3 Offspring (for example):
k = 3
Prob_at_least_k = 0   # Starting empty accumulator
for(i in 0:(length(Yr) - k)){     # Loop over mating year, which can't go beyond len(Yr) - k
  Prob_Off_k = 0                  # Probability of k up to the max allowable K for mating year i
  for(j in k:(length(Yr) - i)){   # Index for K's
    Prob_Off_k = Prob_Off_k + Prob_Off_k_and_mate_yr(j, i)
  }
  Prob_at_least_k = Prob_at_least_k + Prob_Off_k
}
Prob_at_least_k
# [1] 0.9682951

A more elegant way of coding this process (thanks, so many thanks, to the wisdom of whuber, in this instance on this post) could be achieved using a convolution with the R function convolve(), which calculates it using a Fast Fourier Transform (FFT). This would be the modified Prob_Off_k_and_mate_yr function:

Prob_Off_k_and_mate_yr_convolution = function(k = 1:11, yr = 0:10){  # Probability Offspring = k AND mating at Yr = yr
  # k needs to be between yr and x
  if(k > (length(Yr) - yr + 1)){stop('Number of offspring selected is impossible')}else{
    # Probability to mate at year Yr = yr:
    P_mate_at_yr = ifelse(yr == 0, m_t[1], prod(1 - (m_t)[1:yr]) * m_t[yr + 1])
    # Probability of Offspring = k having mated at Yr = yr
    z = 1
    for (u in sort(o_t[(yr + 1):length(o_t)])) z <- convolve(z, c(u, 1 - u), type = "open")
    Prob_Offspring = z * P_mate_at_yr
    return(Prob_Offspring[k + 1])
  }
}

It is shorter in coding lines, more elegant mathematically, and so much faster:

library(microbenchmark)
microbenchmark(Prob_Off_k_and_mate_yr_convolution(k, yr), Prob_Off_k_and_mate_yr(k, yr))
Unit: microseconds
                                      expr      min        lq      mean    median       uq      max neval
 Prob_Off_k_and_mate_yr_convolution(k, yr)  281.220  288.7165  298.6452  294.5675  301.333  376.300   100
             Prob_Off_k_and_mate_yr(k, yr) 3959.012 4046.9615 4236.5416 4111.1405 4187.023 6195.602   100
55,920
How to obtain the functional derivative in variational inference?
Let's streamline the notation by fixing a function $f$ and considering a functional $$\mathcal{L}[q] = \int (q(z) f(z) - q(z) \log(q(z))) dz.$$ A variation $h$ is a function for which $q+h$ is still the same kind of function as $q$ (e.g., continuous or non-negative or whatever you need). The effect of changing $q$ to $q+h$ is found in the usual way: we compare the results of $\mathcal{L}$ by subtracting the original value from the new one: $$\eqalign{ \frac{\delta\mathcal{L}}{\delta q}[h] &= \mathcal{L}[q+h] - \mathcal{L}[q] \\ &= \int \left((q+h)(z) f(z) - (q+h)(z) \log((q+h)(z)) - \left(q(z) f(z) - q(z) \log(q(z))\right)\right) dz.}$$ Let's now restrict attention to variations $h$ that have infinitesimally small values so that we may be free to neglect second-order terms. Specifically, that means we plan to use a product-like differentiation rule to approximate the product difference $$(q+h)(z)\log((q+h)(z)) - q(z)\log(q(z)) \approx h(z)\log(q(z)) + q(z) \frac{h(z)}{q(z)}.\tag{1}$$ To see where this came from, compare it to taking the differential of the function (not functional) $$d(x \log(x)) = (dx) \log(x) + x \left(\frac{dx}{x}\right).$$ Plugging $(1)$ into $\frac{\delta\mathcal{L}}{\delta q}[h]$, simplifying, and factoring out the common factors of $h(z)$ yields $$\frac{\delta\mathcal{L}}{\delta q}[h] = \int\left(f(z) - \log(q(z)) - 1\right) h(z) dz.$$ If this is to vanish (to second order in $h$, anyway) for all infinitesimal $h$, then--assuming that at each $z$ in the domain of integration there exist some variations $h$ that are nonzero in some neighborhood of $z$--the right hand side can vanish only when the coefficient of $h$ itself vanishes for all $z$; that is, $$f(z) - \log(q(z)) - 1 = 0.$$ That's Equation 23 in the referenced notes. It means the functional $\mathcal{L}$ is stationary at $q$. That's a necessary (but not sufficient) condition for $\mathcal{L}$ to have a local extremum at $q$.
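A discretized sanity check of that stationarity condition is easy to run numerically; in the R sketch below the function $f$, the grid, and the perturbation are all arbitrary choices of mine, and the integral is approximated by a Riemann sum.

z  <- seq(-5, 5, length.out = 2001)
dz <- z[2] - z[1]
f  <- -z^2 / 2                                     # some smooth f(z), picked arbitrarily
L  <- function(q) sum((q * f - q * log(q)) * dz)   # discretized version of the functional
q_star <- exp(f - 1)                               # candidate solving f - log(q) - 1 = 0
h <- 0.01 * exp(-(z - 1)^2)                        # a small positive "variation"
c(L(q_star), L(q_star + h), L(q_star - h))         # L(q_star) is the largest of the three,
                                                   # consistent with a local extremum at q_star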
55,921
How to weight a Spearman rank correlation by statistical errors?
This paper might help you. Here is its abstract: This manuscript describes a number of easily implemented, Monte Carlo based methods to estimate the uncertainty on the Spearman's rank correlation coefficient, or more precisely to estimate its probability distribution. Basically, the idea is the following: Simulate many samples from the original data, using the "error bars" in your data ($X_{err}$) to introduce noise to the samples. Then, for each sample, take the Spearman correlation. You get as many Spearman "values" as the number of samples you drew in the first step. Use these many points to calculate a "distribution" of Spearman, rather than a point estimate. PS: @Cadnr has given the same advice in a previous answer.
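A minimal sketch of that recipe in R (the data, error columns, and number of replicates below are all made up; only the resampling logic matters):

set.seed(3)
n <- 30
x <- rnorm(n);  y <- x + rnorm(n)            # stand-ins for the observed data
x_err <- rep(0.5, n);  y_err <- rep(0.5, n)  # per-point measurement errors
B <- 2000
rho <- replicate(B, cor(x + rnorm(n, 0, x_err),
                        y + rnorm(n, 0, y_err),
                        method = "spearman"))
quantile(rho, c(0.025, 0.5, 0.975))          # a distribution for Spearman's rho
                                             # instead of a single point estimate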
55,922
How to weight a Spearman rank correlation by statistical errors?
I hate to be another brainless advocate of Monte Carlo methods, but one solution would be to build up a distribution of p values by taking a large number of samples of your data error distributions. For each data point, generate random errors in x and y (within the envelope defined by the measurement errors for that data point), and once that's been done for all the data points, generate the p value for the synthetic dataset. Hopefully, as you repeat this process many times, your distribution of synthetic p values will approach a well-defined functional form (such as a Gaussian), for which you can find the median and useful limits by doing a (Gaussian) fit or taking e.g. the median and +-67% percentiles. You'll then end up with a p value and +- errors, from which you'll be able to tell if the correlation is significant. I'm not aware of any off-the-shelf software to help you accomplish this, but it shouldn't be hard to code.
55,923
How to weight a Spearman rank correlation by statistical errors?
You can construct a Spearman-like correlation that takes into account weights. Let's say we have two rankings, $Q$ and $R$, and two sets of weights $W_q$ and $W_r$ (you can have one of these be all ones if you have only one set of weights). You would have to compute these from your errors. All of these have $n$ elements. Now we want to compute the weighted rank correlation in a Spearman-like manner, i.e. the correlation coefficient should be a function of $\sum_i^n w^Q_iw^R_iD_i^2=\sum_i^n w^Q_iw^R_i(R_i-Q_i)^2$. We assume that this function will have the form $A+B\sum_i^nw^Q_iw^R_iD_i^2$, just like the original Spearman function. Since we want it to be equal to one if the rank orders are identical, i.e. $D_i = 0$ for all $i$, it follows that $A=1$. If the rankings are reversed, we want the result to be $-1$. In this case the $D_i$ will be equal to $n-1, n-3,...,-(n-1)$, i.e. $D_i = n-2i+1$. This allows us to compute $B$: $$ B\sum_i^nw^Q_iw^R_i(n-2i+1)^2 = -2 \\ B = \frac{-2}{\sum_i^nw^Q_iw^R_i(n-2i+1)^2} $$ Our function is therefore: $$ r(R,Q,W_r,W_q) = 1 - \frac{2\sum_i^nw^Q_iw^R_i(R_i-Q_i)^2}{\sum_i^nw^Q_iw^R_i(n-2i+1)^2} $$ I'm not 100% sure about this, but I believe your weights must be nonnegative, because otherwise your function might (as I said, I'm not completely sure) leave $[-1,1]$. Also, of course, your weights must be on the same scale. Also, for the function to behave correctly, the weights need to be monotonic with increasing rank. EDIT: I just realized that this answer probably won't help in your case, since the type of weights you are looking for are something different than what I wrote my function for. Sorry. I'll leave this answer here if it helps anybody else though.
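The formula translates almost line for line into R; here is a quick sketch (function and argument names are mine), with the two sanity checks the derivation promises:

# Weighted Spearman-like correlation from the formula above.
# R_rank, Q_rank are rank vectors; w_r, w_q are nonnegative weights.
weighted_spearman <- function(R_rank, Q_rank, w_r, w_q) {
  n <- length(R_rank)
  i <- seq_len(n)
  num <- sum(w_q * w_r * (R_rank - Q_rank)^2)     # weighted sum of squared rank differences
  den <- sum(w_q * w_r * (n - 2 * i + 1)^2)       # normalizing constant coming from B
  1 - 2 * num / den
}
weighted_spearman(1:5, 1:5, rep(1, 5), rep(1, 5))  # identical rankings -> 1
weighted_spearman(1:5, 5:1, rep(1, 5), rep(1, 5))  # reversed rankings  -> -1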
55,924
Why do we trust the p-value when fitting a regression on a single sample?
I assume that you talk about the p-value on the estimated coefficient $\hat{\beta}_1$ (but the reasoning would be similar for $\hat{\beta}_0$). The theory on linear regression tells us that, if the necessary conditions are fulfilled, then we know the distribution of that estimator: it is normal, it has mean equal to the ''true'' (but unknown) $\beta_1$, and we can estimate the variance $\sigma_{\hat{\beta}_1}$. I.e. $\hat{\beta}_1 \sim N(\beta_1, \sigma_{\hat{\beta}_1})$. If you want to ''demonstrate'' (see What follows if we fail to reject the null hypothesis? for more detail) that the true $\beta_1$ is non-zero, then you assume the opposite is true, i.e. $H_0: \beta_1=0$. Then by the above, you know that, if $H_0$ is true, $\hat{\beta}_1 \sim N(\beta_1=0, \sigma_{\hat{\beta}_1})$. In your regression result you observe a value for $\hat{\beta_1}$ and you can compute its p-value. If that p-value is smaller than the significance level that you decide (e.g. 5%) then you reject $H_0$ and consider $H_1$ as ''proven''. In your case the ''true'' $\beta_1$ is $\beta_1=0.5$, so obviously $H_0$ is false, so you expect p-values to be below 0.05. However, if you look at the theory on hypothesis testing, it defines ''type-II'' errors, i.e. accepting $H_0$ when it is false. So in some cases you may accept $H_0$ even though it is false, so you may have p-values above 0.05 even though $H_0$ is false. Therefore, even if in your true model $\beta_1=0.5$, it can be that you accept $H_0: \beta_1=0$, i.e. that you make a type-II error. Of course you want to minimize the probability of making such type-II errors, where you accept that $H_0: \beta_1=0$ holds while in reality it holds that $\beta_1=0.5$. The size of the type-II error is linked to the power of your test. Minimizing the type-II error means maximising the power of the test. You can simulate the type-II error as in the R code below. Note that if you take $\beta_1$ further from the value under $H_0$ (zero) then the type II error decreases (execute the R code with e.g. beta_1=2), which means that the power increases. If you put beta_1 equal to the value under $H_0$ then you find $1-\alpha$. R code:

x <- rnorm(100, 5, 1)
beta_0 <- 2.5
beta_1 <- 0.5
nIter <- 10000
alpha <- 0.05
accept.h0 <- 0
for (i in 1:nIter) {
  e <- rnorm(100, 0, 3)
  y <- beta_0 + beta_1 * x + e
  m1 <- lm(y ~ x)
  p.value <- summary(m1)$coefficients["x", 4]
  if (p.value > alpha) accept.h0 <- accept.h0 + 1
}
cat(paste("type II error probability: ", accept.h0 / nIter))
55,925
Why do we trust the p-value when fitting a regression on a single sample?
"Trusting" the p-value may very well mean misunderstanding it. You make up a model with considerable error and sometimes the regression will detect the linear relation, some times not. The risk is determined by choosing the p-value-threshold alpha. In the case you have proposed. Each p-value under 0.05 is "right", and each above 0.05 lacks observations. Try larger samples then n=100 and with increasing numbers you will find decreasing occurence of p-values above 0.05. So your question is essentially about the power of the test. To find a significant correlation between x and y with a power of 90% there has to be a correlation of at least r=0.31 > library(pwr) > pwr.r.test(n=100, sig.level = 0.05, power=0.9) approximate correlation power calculation (arctangh transformation) n = 100 r = 0.3164205 sig.level = 0.05 power = 0.9 alternative = two.sided The correlation of your data is somewhere around 0.16. So the problem is not the trust in p-values but that your "study" is massively underpowered. Find a sample of n=500 to see "wrong" p-values about one in twenty: > pwr.r.test(r=0.16, power=.95) approximate correlation power calculation (arctangh transformation) n = 501.0081 r = 0.16 sig.level = 0.05 power = 0.95 alternative = two.sided Lesson learned: Never trust a not-significant p-value without a sound power analysis.
55,926
Logistic regression gets better but classification gets worse?
With 318 cases in each group you can examine about 20 predictors without too much risk of overfitting. Your second and third sets of variables combine for 23; a big problem is counting each of your neighborhoods in variable set 1 as a fixed effect, using up another 29 degrees of freedom. The simplest short-term solution might be to treat neighborhoods as random effects instead of as fixed effects in your logistic regression, using for example the glmer function in the R lme4 package. That takes into account the differences among neighborhoods, as you have been instructed, but only uses up 1 degree of freedom in the analysis as you are modeling the distribution of effects among neighborhoods rather than the individual neighborhood effects. That might allow a straightforward analysis of all the other variables in a single model without the dangers of stepwise selection. LASSO would certainly be a useful way to further select among the remaining predictors if necessary. You also, however, must be open to the possibility that the predictors you measured bear no relation to the choice of participation.
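In R, the random-intercept model suggested here would look roughly like the sketch below; the data frame and variable names (participation, x1-x3, neighborhood, survey_data) are placeholders, not the actual study variables.

library(lme4)
# participation: 0/1 outcome; x1-x3: a few of the candidate predictors;
# (1 | neighborhood): neighborhood differences modeled as a random intercept
# rather than ~29 separate fixed-effect dummies.
fit <- glmer(participation ~ x1 + x2 + x3 + (1 | neighborhood),
             data = survey_data, family = binomial)
summary(fit)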
55,927
interpretation of slope estimate of Poisson regression
No. There are two problems, one is an arithmetic-to-English translation, and one is philosophical. The phrase "decrease by 0.033 units" is to be interpreted as "subtract 0.033 units from y", which is incorrect. Better is either One unit increase in year corresponds to multiplication of y by 0.966. or One unit increase in year corresponds to a $3.3\%$ decrease in y. The percentage sign is very important; it carries the information that the decrease is multiplicative, not additive. The other is your use of the word causes. That is a heavy word, and you should not use it without serious consideration. Certainly a regression cannot demonstrate causation alone; it must be combined with either a randomly assigned experiment or some scientific reasoning to believe that causation exists (in which case the regression is but estimating its effect size).
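A small simulated example of the multiplicative reading (the intercept, slope, and sample size are invented; the point is only that exp() of the fitted coefficient is the yearly factor):

set.seed(4)
year   <- 0:29
counts <- rpois(length(year), lambda = exp(4 - 0.033 * year))  # built-in yearly decline
fit <- glm(counts ~ year, family = poisson)
coef(fit)["year"]        # close to -0.033 on the log scale
exp(coef(fit)["year"])   # multiplicative factor per additional year, about 0.97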
55,928
Why does the glm function not return an R^2 value?
The glm function uses a maximum likelihood estimator (or restricted maximum likelihood). Maximum likelihood does not minimize the squared error (that is called [ordinary] least squares). Sometimes both estimators give the same results (in the linear/ordinary case with normally distributed error terms, see here) but this does not hold in general. Since the coefficient of determination $R^2$ is calculated from ordinary least-squares regression and not from maximum likelihood, there is no reason to display this measure. PS: Also regard Nick Cox's very valid comment below: $R^2$ may also be well-defined and interesting for GLMs. My personal experience is that (as so often) some people like/accept it, while others do not.
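If a summary measure is still wanted, one common workaround is a deviance-based pseudo-R^2 computed from the fitted glm object; the toy data below are simulated, and this quantity is not interchangeable with the OLS R^2.

set.seed(5)
x <- rnorm(200)
y <- rbinom(200, 1, plogis(-0.5 + x))
fit <- glm(y ~ x, family = binomial)
1 - fit$deviance / fit$null.deviance   # McFadden-style pseudo-R^2 from the deviances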
55,929
Can we calculate the probability that a null hypothesis is true, in general?
The term "null hypothesis" is usually used in a frequentist setting, where characteristics of the population, such as its mean, are regarded as fixed, not random. There, it makes no sense to talk about the probability of the null hypothesis. In a Bayesian setting, these characteristics are regarded as random and we can talk about things like the probability of a population mean equalling 0. However, a typical Bayesian would give a prior probability of 0 to many common frequentist null hypotheses, such as the hypothesis that the mean of a normal distribution exactly equals a prespecified value.
Can we calculate the probability that a null hypothesis is true, in general?
The term "null hypothesis" is usually used in a frequentist setting, where characteristics of the population, such as its mean, are regarded as fixed, not random. There, it makes no sense to talk abou
Can we calculate the probability that a null hypothesis is true, in general? The term "null hypothesis" is usually used in a frequentist setting, where characteristics of the population, such as its mean, are regarded as fixed, not random. There, it makes no sense to talk about the probability of the null hypothesis. In a Bayesian setting, these characteristics are regarded as random and we can talk about things like the probability of a population mean equalling 0. However, a typical Bayesian would give a prior probability of 0 to many common frequentist null hypotheses, such as the hypothesis that the mean of a normal distribution exactly equals a prespecified value.
Can we calculate the probability that a null hypothesis is true, in general? The term "null hypothesis" is usually used in a frequentist setting, where characteristics of the population, such as its mean, are regarded as fixed, not random. There, it makes no sense to talk abou
55,930
How to calculate bias when we have an estimation using simple linear regression?
Bias is the difference between the value of the (population) parameter and the expected value of the estimator of that parameter. As @matthew-drury points out, unless one knows the population, we cannot calculate the bias. Unless your data come from a complete census of the population or from simulation (when the data are simulated, one sets the parameter for the simulation), the parameters will not be known. The expected value of the estimator itself requires some understanding of the sampling distribution of the estimator and the associated parameters. Having said that, you can possibly estimate the bias via a bootstrap approach. See for example: When is the bootstrap estimate of bias valid?
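A small simulated sketch of the bootstrap bias estimate for a regression slope; the data-generating values are arbitrary illustrations:
set.seed(1)
n <- 50
x <- runif(n)
y <- 2 + 3 * x + rnorm(n)
beta_hat <- coef(lm(y ~ x))[2]
boot_betas <- replicate(2000, {
  i <- sample(n, replace = TRUE)        # resample rows with replacement
  coef(lm(y[i] ~ x[i]))[2]
})
mean(boot_betas) - beta_hat             # bootstrap estimate of the bias of the slope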
How to calculate bias when we have an estimation using simple linear regression?
Bias is the difference between the value of the (population) parameter and the expected value of the estimate of that parameter. As @matthew-drury points out, unless one knows the population, we canno
How to calculate bias when we have an estimation using simple linear regression? Bias is the difference between the value of the (population) parameter and the expected value of the estimate of that parameter. As @matthew-drury points out, unless one knows the population, we cannot calculate the bias. Unless your data is from a complete census of the population or from simulation (when the data is simulated, one sets the parameter for the simulation), the parameters will not be known. Expected value of the estimator itself will require some understanding of the sampling distribution of the estimator and the associated parameters. Having said that, you can estimate the bias possibly via a bootstrap approach. See for example: When is the bootstrap estimate of bias valid?
How to calculate bias when we have an estimation using simple linear regression? Bias is the difference between the value of the (population) parameter and the expected value of the estimate of that parameter. As @matthew-drury points out, unless one knows the population, we canno
55,931
Minimizing the median absolute deviation or median absolute error
The shortest half is the shortest interval containing half the distribution or data (when dealing with populations or samples respectively). [Some authors call this interval of the shortest half the shorth, though the term seems to have been coined by Andrews et al. (1972), who used it to refer to the mean of the observations in the shortest half, so it would more properly refer to that. Probably best to just explicitly say shortest half and mean of the shortest half to avoid that potential confusion.] The midpoint of the shortest half should minimize the median of the absolute deviations; you sometimes see it called "the midpoint of the shortest half", but it has another name (see below). This is a one-dimensional version of a minimum volume estimator. Because quantiles are equivariant to monotonic-increasing transformation, in one dimension we can see that minimizing the median of the absolute deviations is equivalent to minimizing the median of the squared deviations [or any other monotonic increasing function of them -- at least if we keep our definition of medians as interval-valued when they don't fall exactly at observations; otherwise they'll differ slightly but always lie between the same observations]. So the literature on least median of squares (LMS) estimation will probably be of some use to you here; see, e.g., Rousseeuw & Leroy (1987) [1]. There's often explicit code for LMS estimators (especially for regression, but if you only fit an intercept ... you should get the original thing you asked about) and sometimes code for producing estimators based on the shortest half (Nick Cox, for example, seems to have written one for Stata). So the alternative name I referred to earlier would be the "least median of squares estimate of location". Sorry both terms seem to be such a mouthful; off the top of my head I don't know any reasonably unambiguous names that are shorter. [1] Rousseeuw, P.J. and Leroy, A.M. (1987), Robust Regression and Outlier Detection, Wiley, New York.
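A small R sketch of the midpoint of the shortest half; one common convention takes a "half" to be floor(n/2) + 1 consecutive order statistics, and this is an illustration rather than a reference implementation:
shorth_midpoint <- function(x) {
  x <- sort(x)
  n <- length(x)
  h <- floor(n / 2) + 1                  # number of points in a "half"
  widths <- x[h:n] - x[1:(n - h + 1)]    # widths of all contiguous halves
  i <- which.min(widths)                 # index where the shortest half starts
  (x[i] + x[i + h - 1]) / 2              # its midpoint
}
shorth_midpoint(c(rnorm(95), rnorm(5, 10)))   # stays near 0 despite the 5 outliers around 10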
Minimizing the median absolute deviation or median absolute error
The shortest half is the shortest interval containing half the distribution or data (when dealing with populations or samples respectively). [Some authors call this interval of the shortest half the s
Minimizing the median absolute deviation or median absolute error The shortest half is the shortest interval containing half the distribution or data (when dealing with populations or samples respectively). [Some authors call this interval of the shortest half the shorth, though the term seems to have been coined by Andrews et al (1972) who used it to refer to the mean of the observations in the shortest half, so it would more properly refer to that. Probably best to just explicitly say shortest half and mean of the shortest half to avoid that potential confusion] The midpoint of the shortest half should minimize the median of the absolute deviations; you sometimes see it called "the midpoint of the shortest half", but it has another name (see below). This is a one-dimensional version of a minimum volume estimator. Because quantiles are equivariant to monotonic-increasing transformation, in one dimension we can see that minimizing the median of the absolute deviations is equivalent to minimizing the median of the squared deviations [or any other monotonic increasing function of them -- at least if we keep our definition of medians as interval-valued when they don't fall exactly at observations, otherwise they'll differ slightly but always lie between the same observations]. So the literature on least median of squares (LMS) estimation will probably be of some use to you here. e.g. see Rousseeuw & Leroy, 1987 [1], for example There's often explicit code for LMS estimators (especially for regression, but if you only fit an intercept ... you should get the original thing you asked about) and sometimes code for producing estimators based on the shortest half (e.g. Nick Cox seems to have written one for Stata, for example) So the alternative name I referred to earlier would be the "least median of squares estimate of location". Sorry both terms seem to be such a mouthful; off the top of my head I don't know any reasonably unambiguous names that are shorter. [1] Rousseeuw, P.J. and Leroy, A.M. (1987), Robust Regression and Outlier Detection, Wiley, New York.
Minimizing the median absolute deviation or median absolute error The shortest half is the shortest interval containing half the distribution or data (when dealing with populations or samples respectively). [Some authors call this interval of the shortest half the s
55,932
Poisson Distribution: Estimating rate parameter and the interval length
Let $t=T_F$. Conditional on the number of occurrences $N=n$, the arrival times $t_1,t_2,\dots,t_N$ are known to have the same distribution as the order statistics of $n$ iid unif$(0,t)$ random variables. Hence, the likelihood becomes \begin{align} L(\lambda,t) &= P(N=n) f(t_1,t_2,\dots,t_N|N=n) \\ &= \frac{e^{-\lambda t}(\lambda t)^n}{n!}\frac{n!}{t^n} \\ &= e^{-\lambda t}\lambda^n \end{align} for $t\ge t_n$ and zero elsewhere. This is maximised for $\hat t=t_n$ and $\hat\lambda=n/t_n$. These MLEs don't exist if there are no occurrences ($N=0$), however. Conditional on $N=n$, again using the fact that $t_n$ can be viewed as an order statistic (the maximum) of $n$ iid unif$(0,t)$ random variables, $E(t_N|N=n)=\frac n{n+1} t$. Hence, the estimator $t^*=\frac {n+1}n t_n$ is unbiased for $t$ conditional on $N=n$ and hence also conditional on $N\ge 1$. A reasonable frequentist estimator of $\lambda$ might be $\lambda^* = n/t^* = \frac{n^2}{(n+1)t_n}$, but this does not have finite expectation when $N=1$, so assessing its bias is even more troublesome. Bayesian inference using independent, non-informative scale priors on $\lambda$ and $t$, on the other hand, leads to a posterior $$ f(\lambda,t|t_1,\dots,t_N) \propto e^{-\lambda t}\lambda^{n-1}t^{-1} $$ for $t>t_n,\lambda>0$. Integrating out $\lambda$, the marginal posterior of $t$ becomes $$ f(t|t_1,\dots,t_N) = \frac{n t_n^n}{t^{n+1}}, \quad t>t_n, $$ and the posterior mean is $E(t|t_1,\dots,t_N)=\frac n{n-1} t_n$ (provided $n\ge 2$). A $(1-\alpha)$-credible interval for $t$ is given by $\left(\frac{t_n}{(1-\alpha/2)^{1/n}}, \frac{t_n}{(\alpha/2)^{1/n}}\right)$. The marginal posterior of $\lambda$ is \begin{align} f(\lambda|t_1,\dots,t_N) &\propto \int_{t_n}^\infty e^{-\lambda t}\lambda^{n-1}t^{-1} dt \\ &= \lambda^{n-1}\Gamma(0,\lambda t_n), \end{align} where $\Gamma$ is the upper incomplete gamma function.
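A quick simulation check of the estimators above (illustrative parameter values; results are conditional on at least one occurrence):
set.seed(42)
lambda <- 2; t_true <- 10
sim_once <- function() {
  n <- rpois(1, lambda * t_true)
  if (n == 0) return(c(t_star = NA, lambda_hat = NA))
  t_n <- max(runif(n, 0, t_true))          # arrival times behave like Unif(0, t) order statistics
  c(t_star = (n + 1) / n * t_n, lambda_hat = n / t_n)
}
res <- replicate(5000, sim_once())
rowMeans(res, na.rm = TRUE)                # t_star averages close to t_true = 10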
Poisson Distribution: Estimating rate parameter and the interval length
Let $t=T_F$. Conditional on the number of occurences $N=n$, the arrival times $t_1,t_2,\dots,t_N$ are known to have the same distribution as the order statstics of $n$ iid unif$(0,t)$ random variable
Poisson Distribution: Estimating rate parameter and the interval length Let $t=T_F$. Conditional on the number of occurences $N=n$, the arrival times $t_1,t_2,\dots,t_N$ are known to have the same distribution as the order statstics of $n$ iid unif$(0,t)$ random variables. Hence, the likelihood becomes \begin{align} L(\lambda,t) &= P(N=n) f(t_1,t_2,\dots,t_N|N=n) \\ &= \frac{e^{-\lambda t}(\lambda t)^n}{n!}\frac{n!}{t^n} \\ &= e^{-\lambda t}\lambda^n. \end{align} for $t\ge t_n$ and zero elsewhere. This is maximised for $\hat t=t_n$ and $\hat\lambda=n/t_n$. These MLEs don't exist if there are no occurrences $N=0$, however. Conditional on $N=n$, again using the fact that $t_n$ can be viewed as an order statistic (the maximum) of $n$ iid unif$(0,t)$ random variables, $E(t_N|N=n)=\frac n{n+1} t$. Hence, the estimator $t^*=\frac {n+1}n t_n$ is unbiased for $t$ conditional on $N=n$ and hence also conditional on $N\ge 1$. A reasonable frequentist estimator of $\lambda$ might be $\lambda^* = n/t^* = \frac{n^2}{(n+1)t_n}$ but this does not have finite expectation when $N=1$ so assessing its bias is even more troublesome. Bayesian inference using independent, non-informative scale priors on $\lambda$ and $t$ on the other hand leads to a posterior $$ f(\lambda,t|t_1,\dots,t_N) \propto e^{-\lambda t}\lambda^{n-1}t^{-1}. $$ for $t>t_n,\lambda>0$. Integrating out $\lambda$, the marginal posterior of $t$ becomes $$ f(t|t_1,\dots,t_N) = \frac{n t_n^n}{t^{n+1}}, t>t_n, $$ and the posterior mean $E(t|t_1,\dots,t_N)=\frac n{n-1} t_n$. A $(1-\alpha)$-credible interval for $t$ is given by $\left(\frac{t_n}{(1-\alpha/2)^{1/n}}, \frac{t_n}{(\alpha/2)^{1/n}}\right)$. The marginal posterior of $\lambda$, \begin{align} f(\lambda|t_1,\dots,t_N) &\propto \int_{t_\text{max}}^\infty e^{-\lambda t}\lambda^{n-1}t^{-1} dt \\ &= \lambda^{n-1}\Gamma(0,\lambda t_n) \end{align} where $\Gamma$ is the incomplete gamma function.
Poisson Distribution: Estimating rate parameter and the interval length Let $t=T_F$. Conditional on the number of occurences $N=n$, the arrival times $t_1,t_2,\dots,t_N$ are known to have the same distribution as the order statstics of $n$ iid unif$(0,t)$ random variable
55,933
Neural network not i.i.d
There are several ways independence assumptions enter neural nets. One is that all your samples are independent, i.e. if you have a database of 10'000 cat pictures, you assume they have all been taken independently of each other. Another is if you want to regress on several values. Say you want to regress from the cat picture onto its size and its weight. If you just minimize the sum of squares of your predictions for both values, you introduce the assumption that weight and size are independent, which is not the case. The first case can be seen mathematically from the fact that your objective functions typically are sums of terms, one for each sample, $$ \mathcal{L}(\theta) = \sum_i \ell_i. $$ The sum is nothing but a product in the log domain, after observing that $\log (a \cdot b) = \log a + \log b$: $$ \mathcal{L}(\theta) = \log \prod_i \exp(\ell_i), $$ which implies that the data samples factorize, i.e. they are independent. Recall the definition of independence: $p(a)p(b) = p(a, b) \Leftrightarrow a \text{ is independent of } b$. For the second case, note that a sum of squares is (up to constants) the negative log density of a Normal. If you have a large sum of such terms, this also implies a product over many Normally distributed random variables, and hence implies an independence assumption.
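A tiny numerical check of the sum-of-squares/Gaussian correspondence mentioned above, with arbitrary illustrative values:
set.seed(1)
y <- rnorm(5); yhat <- rnorm(5)
half_sse <- sum((y - yhat)^2) / 2
nll <- -sum(dnorm(y, mean = yhat, sd = 1, log = TRUE))   # NLL of 5 independent unit-variance Gaussians
nll - half_sse                                           # equals the constant 5 * log(2 * pi) / 2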
Neural network not i.i.d
There are several ways to have independency assumptions in neural nets. One is that all your samples are independent, i.e. if you have a data base of 10'000 cat pictures, you assume they have all be t
Neural network not i.i.d There are several ways to have independency assumptions in neural nets. One is that all your samples are independent, i.e. if you have a data base of 10'000 cat pictures, you assume they have all be taken independently of each other. Another is if you want to regress on certain values. Say you want to regress from the cat picture on its size and its weight. If you just minimize the sum of squares of your predictions for both values, you introduce the assumption that weight and size are independent–which is not the case. The first case can be mathematically seen that your objective functions typically are sums of terms, one for each sample. $$ \mathcal{L}(\theta) = \sum_i \ell_i $$ The sum is nothing but a product in the log domain after observing that $\log (a \cdot b) = \log a + \log b$. $$ \mathcal{L}(\theta) = \log \prod_i \exp(\ell_i), $$ which implies that the data samples factorize–they are independent. The definition of independence: $p(a)p(b) = p(a, b) \Leftrightarrow \text{a is independend of b}$. For the second case, note that a sum of squares is the log density of a Normal. If you have a large sum over such, this also implies a product over many Normally distributed random variables, and hence implies an independency assumption.
Neural network not i.i.d There are several ways to have independency assumptions in neural nets. One is that all your samples are independent, i.e. if you have a data base of 10'000 cat pictures, you assume they have all be t
55,934
Neural network not i.i.d
Without aiming for the math: using non-independent and/or non-identically distributed variables is possible with ANNs. You might trigger some side effects by doing so, like an overly long training phase with unequally distributed variables, getting stuck in (other) local optima (though this is less of an issue in practice), or not obtaining the mathematically optimal error during minimization. But technically, it is definitely not a hard requirement for the input variables of an ANN to fulfil those properties - ANNs work fine with such data too. Think e.g. of all the deep learning approaches, where people tend to (more or less) throw in whatever information they get hold of, mostly without sophisticated preprocessing.
Neural network not i.i.d
Without aiming for the Math: using non independent and/or unequally distributed variables is possible with ANN. You might trigger some side effects by doing so, like with unequally distributed variabl
Neural network not i.i.d Without aiming for the Math: using non independent and/or unequally distributed variables is possible with ANN. You might trigger some side effects by doing so, like with unequally distributed variables having an overly long training phase, getting stuck in (other) local optima (though this is less of an issue in the application case), or not obtaining the optimal error during minimization from a Math point of view. But technically, it is definitely not a hard requirement for input variables of ANN to fulfil those properties - ANNs work fine with such too. Think e.g. of all the deep learning approaches, where people tend to (more or less) throw in whatever information they get hold of, mostly without sophisticated preprocessing.
Neural network not i.i.d Without aiming for the Math: using non independent and/or unequally distributed variables is possible with ANN. You might trigger some side effects by doing so, like with unequally distributed variabl
55,935
Creating clusters for binary data
Latent class modeling would be one approach to finding underlying, "hidden" partitions or groupings of diseases. LC is a very flexible method with two broad approaches: replications based on repeated measures across subjects vs replications based on cross-classifying a set of categorical variables with no repeated measures. Your data would fit the second type. All LC models have 2 stages: in stage 1, a dependent or target variable is identified and a regression model is built. In stage 2, the residual (a single "latent" vector) from the stage 1 model is analyzed and partitions are created capturing the variability (or heterogeneity) in that vector -- these are the "latent classes." Freeware is out there for downloading that would probably work pretty well for you. One of these is an R package called poLCA available here. Note that this approach works with categorical data, including binary data such as yours: http://www.jstatsoft.org/article/view/v042i10 If you have about $1,000 to spend on a commercial product, Latent Gold is available from www.statisticalinnovations.com Having used Latent Gold for years, I'm a big fan of that product for its analytic power and range of solutions. For instance, poLCA is only useful for LC models with categorical information whereas LG works for true mixtures...plus, their developers are always adding new modules. The most recent addition builds LC models using hidden Markov chains. Bear in mind that LG is not an "end-to-end" data platform, i.e., it is not good for heavy data manipulation or lifting. Mplus is another commercially available product for this class of models with pricing similar to LG.
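A hedged R sketch of a latent class fit with the poLCA package; the disease columns and the number of classes are hypothetical, and poLCA expects categories coded 1, 2, ..., so 0/1 indicators become 1/2:
library(poLCA)
set.seed(1)
dat <- data.frame(diseaseA = sample(1:2, 200, replace = TRUE),
                  diseaseB = sample(1:2, 200, replace = TRUE),
                  diseaseC = sample(1:2, 200, replace = TRUE))
f <- cbind(diseaseA, diseaseB, diseaseC) ~ 1      # no covariates
fit2 <- poLCA(f, data = dat, nclass = 2)          # compare BIC across different nclass values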
Creating clusters for binary data
Latent class modeling would be one approach to finding underlying, "hidden" partitions or groupings of diseases. LC is a very flexible method with two broad approaches: replications based on repeated
Creating clusters for binary data Latent class modeling would be one approach to finding underlying, "hidden" partitions or groupings of diseases. LC is a very flexible method with two broad approaches: replications based on repeated measures across subjects vs replications based on cross-classifying a set of categorical variables with no repeated measures. Your data would fit the second type. All LC models have 2 stages: in stage 1, a dependent or target variable is identified and a regression model is built. In stage 2, the residual (a single "latent" vector) from the stage 1 model is analyzed and partitions are created capturing the variability (or heterogeneity) in that vector -- these are the "latent classes." Freeware is out there for downloading that would probably work pretty well for you. One of these is an R module called polCA available here. Note that this approach is to be used only with binary data such as yours: http://www.jstatsoft.org/article/view/v042i10 If you have about $1,000 to spend on a commercial product, Latent Gold is available from www.statisticalinnovations.com Having used on Latent Gold for years, I'm a big fan of that product for its analytic power and range of solutions. For instance, polCA is only useful for LC models with categorical information whereas LG works for true mixtures...plus, their developers are always adding new modules. The most recent addition builds LC models using hidden Markov chains. Bear in mind that LG is not an "end-to-end" data platform, i.e., it is not good for heavy data manipulation or lifting. Mplus is another commercially available product for this class of models with pricing similar to LG.
Creating clusters for binary data Latent class modeling would be one approach to finding underlying, "hidden" partitions or groupings of diseases. LC is a very flexible method with two broad approaches: replications based on repeated
55,936
Creating clusters for binary data
Many forms of clustering could work. Since you asked about constructing a dendrogram, it sounds like you want hierarchical clustering. Hierarchical agglomerative clustering is a popular class of methods. You'll have to choose the linkage function, which determines how clusters are merged. UPGMA (aka average linkage) is one example. A good discussion on this topic is available here. For distance/dissimilarity-based clustering (including hierarchical clustering), you would need a distance measure that works for binary data. The Hamming distance is one example. The Hamming distance between two binary vectors is the number of elements that are not equal. In your example, the Hamming distance between two diseases would be the number of patients that are positive for one disease but not the other.
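A small R sketch of this approach on simulated presence/absence data; the disease names are placeholders:
set.seed(1)
X <- matrix(rbinom(200, 1, 0.3), nrow = 20,
            dimnames = list(NULL, paste0("disease", 1:10)))  # 20 patients x 10 diseases
d <- dist(t(X), method = "manhattan")   # on 0/1 data this is exactly the Hamming distance
hc <- hclust(d, method = "average")     # UPGMA / average linkage
plot(hc)                                # dendrogram of the diseases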
Creating clusters for binary data
Many forms of clustering could work. Since you asked about constructing a dendrogram, it sounds like you want hierarchical clustering. Hierarchical agglomerative clustering is a popular class of metho
Creating clusters for binary data Many forms of clustering could work. Since you asked about constructing a dendrogram, it sounds like you want hierarchical clustering. Hierarchical agglomerative clustering is a popular class of methods. You'll have to choose the linkage function, which determines how clusters are merged. UPGMA (aka average linkage) is one example. A good discussion on this topic is available here. For distance/dissimilarity-based clustering (including hierarchical clustering), you would need a distance measure that works for binary data. The Hamming distance is one example. The Hamming distance between two binary vectors is the number of elements that are not equal. In your example, the Hamming distance between two diseases would be the number of patients that are positive for one disease but not the other.
Creating clusters for binary data Many forms of clustering could work. Since you asked about constructing a dendrogram, it sounds like you want hierarchical clustering. Hierarchical agglomerative clustering is a popular class of metho
55,937
GLMM- relationship between AICc weight and random effects?
I would strongly advise you to avoid automated model selection procedures such as dredge() (even the function name makes me shiver). There may be some merit in these when you are primarily concerned about prediction for future data, but even in this case it is strongly recommended to use some form of cross-validation, where you can build your model on a training dataset and then assess its predictive capability on another dataset. If you build a model based on AIC with your whole dataset, while it may predict your current dataset well, there is a good chance it will perform poorly on new data. When your goal is mainly inference, the best way forward is to use theory and common sense to build your model. Unless you have a huge number of variables, I think that theory and common sense are also a better approach for prediction. A good starting point is to draw a path diagram to hypothesize the associations, and directions of causality, according to theory. This can allow you to build a model avoiding common problems such as over-adjustment for confounding, and including variables that should not be present in the model (for example, if they lie on the causal path between an exposure and the outcome). Although the current theory may not be well-developed (developing the theory is presumably one of your goals), a path diagram may help you rule out some possible models. DAGitty is a very user-friendly web-based graphical tool that can assist with this. You have measured 14 variables, presumably choosing these for some good reasons. Excluding some solely on the basis of high bivariate correlation without understanding the relations between them is dangerous. If they are essentially measuring the same thing, or one is derived implicitly (or explicitly) from another, this may be valid, but if one is a cause of the other, then you need to think carefully about which one to exclude. A path diagram will be very useful for this. You have repeated measures on individual subjects, therefore a priori it is a good idea to account for this by using random intercepts for subject, because measurements on the same individual are likely to be more similar to each other than to measurements on another individual. I would strongly urge you not to rely on p-values in general, but even more so in this case. If you really want to test the significance of the random intercept then a bootstrap estimate is perhaps the most robust way to proceed. As for your question about low weights, I don't know what these weights represent exactly, but I assume that they must all add up to 1, so if there are a lot of models to choose from then obviously there will be many models with small weights, and if the "best" ones are very similar to each other then by necessity, the "best" ones will have small weights. Note that the difference in AICc between the "best" and the worst (well, the 6th best, since there are only 6 shown) is just 1.69, which is telling you that there is very little difference between any of these models.
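If you do want a bootstrap-based check of the random intercept, here is a hedged sketch with simulated stand-in data (your own variable names and model will differ; the number of simulations is kept small for speed):
library(lme4)
set.seed(1)
dat <- data.frame(subject = factor(rep(1:30, each = 10)), x = rnorm(300))
dat$y <- rbinom(300, 1, plogis(0.5 * dat$x + rep(rnorm(30), each = 10)))
m1 <- glmer(y ~ x + (1 | subject), data = dat, family = binomial)
m0 <- glm(y ~ x, data = dat, family = binomial)
obs <- as.numeric(2 * (logLik(m1) - logLik(m0)))      # observed likelihood-ratio statistic
boot <- replicate(200, {
  d <- dat
  d$y <- simulate(m0)[[1]]                            # simulate data under the null (no random effect)
  f1 <- glmer(y ~ x + (1 | subject), data = d, family = binomial)
  f0 <- glm(y ~ x, data = d, family = binomial)
  as.numeric(2 * (logLik(f1) - logLik(f0)))
})
mean(boot >= obs)                                     # parametric-bootstrap p-value for the random intercept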
GLMM- relationship between AICc weight and random effects?
I would strongly advise you to avoid automated model selection procedures such as dredge() (even the function name makes me shiver). There may be some merit in these when you are primarily concerned a
GLMM- relationship between AICc weight and random effects? I would strongly advise you to avoid automated model selection procedures such as dredge() (even the function name makes me shiver). There may be some merit in these when you are primarily concerned about prediction for future data, but even in this case it is strongly recommended to use some form of cross-validation, where you can build your model on a training dataset and then assess it's predictive capability on another dataset. If you build a model based on AIC with your whole dataset, while it may predict your current dataset well, there is a good chance it will perform poorly on new data. When your goal is mainly inference, the best way forward is to use theory and common sense to build your model. Unless you have a huge number of variables, then I think that theory and common sense is also a better method for prediction too. A good starting point is to draw a path diagram to hypothesize the associations, and directions of causality, according to theory. This can allow you to build a model avoiding common problems such as over-adjustment for confounding, and including variables that should not be present in the model (for example, if they lie on the causal path between an exposure and the outcome). Although the current theory may not be not be well-developed (developing the theory is presumably one of your goals) a path diagram may help you rule out some possible models. DAGitty is a very user-friendly web-based graphical tool that can assist with this. You have measured 14 variables, presumably choosing these for some good reasons. Excluding some solely on the basis of high bivariate correlation without understanding the relations between them is dangerous. If they are essentially measuring the same thing, or one is derived implicitly (or explicitly) from another, this may be valid, but if one is a cause of the other, then you need to think carefully about which one to exclude. A path diagram will be very useful for this. You have repeated measures on individual subjects, therefore a priori it is a good idea to account for this by using random intercepts for subject because measurements on the same individual are likely to be more similar to those on another individual. I would strongly urge you not to rely on p-values in general, but even more so in this case. If you really want to test the significance of the random intercept then a bootstrap estimate is perhaps the most robust way to proceed. As for your question about low weights, I don't know what these weights represent exactly, but I assume that they must all add up to 1 so if there are a lot of models to choose from then obviously there will be many models with small weights, and if the "best" ones are very similar to each other then by necessity, the "best" ones will have small weights. Note that the difference in AICc between the "best" and the worst (well, the 6th best, since there are only 6 shown) is just 1.69, which is telling you that there is very little difference between any of these models.
GLMM- relationship between AICc weight and random effects? I would strongly advise you to avoid automated model selection procedures such as dredge() (even the function name makes me shiver). There may be some merit in these when you are primarily concerned a
55,938
GLMM- relationship between AICc weight and random effects?
Akaike weights only provide information about the set of models from which they are calculated, so in your example, you can't really learn anything from comparing weights for the glmm set of models to weights for the glm set of models (i.e. Akaike weights won't tell you whether the random effect is appropriate). There's a good discussion on testing random effects here. It does specifically state "do not compare lmer models with the corresponding lm fits, or glmer/glm; the log-likelihoods are not commensurate". That said, there is a worked glmer example by Ben Bolker here that does explicitly compare log-likelihoods and AICc values between glm and glmer models. If you do go this route, it will be the AICc values that you want to compare between glm and glmer, not the delta or weight values (which are only meaningful within a set). The fact that the variance of your random effect is high (and sd relatively low) suggests to me that you should retain the random effect. Likewise, comparing the conditional and marginal R2 values (0.8 vs. 0.01) suggests that the random effect is explaining a lot of the variation in your response. Update It doesn't make sense to ask whether the glmer weight values are "too low". An Akaike weight is the probability that a model is the 'best', given the data and the set of models under consideration. If the top models in a given set all have low and similar Akaike weights, it just means that no one model in that set stands out as being much better than the other models in that set... it doesn't mean that the top models in your set are bad models in an absolute sense. It doesn't tell you either way. Akaike weights are only useful for relative comparison. To assess goodness of fit in an absolute sense, you could look at R2 values. Your R2c of 80% suggests to me that that particular model is very good, though that's mostly driven by the random effect (i.e. individuals varied a lot in habitat use). Since your aim is to get fixed effect coefficients for use in a RSF, and since no single model stands out as the 'best', model averaging would be a good approach. If mod is your glmer object, you can get model-averaged coefficients to use in your RSF as follows: mod_dredge <- dredge(mod) mod_avg <- model.avg(mod_dredge) coefTable(mod_avg) Since your fixed effects have relatively low explanatory power (R2m = 1%), an RSF based on those coefficients may not have much predictive power. As for what you can do to improve this model, it's tough to say... it could just be a biological reality that your study animals vary a lot in habitat use (thus the important random effect), but overall habitat use by these animals is simply not strongly related to the set of specific fixed effects you have examined.
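To make the "weights only sum to 1 within a set" point concrete, Akaike weights can be computed by hand from AICc values; the numbers below are illustrative (and note that the dredge/model.avg/coefTable calls above come from the MuMIn package):
aicc <- c(100.0, 100.4, 100.9, 101.7)            # AICc of the candidate models
delta <- aicc - min(aicc)
w <- exp(-delta / 2) / sum(exp(-delta / 2))      # Akaike weights
round(w, 3); sum(w)                              # the weights always sum to 1 within the set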
GLMM- relationship between AICc weight and random effects?
Akaike weights only provide information about the set of models from which they are calculated, so in your example, you can't really learn anything from comparing weights for the glmm set of models to
GLMM- relationship between AICc weight and random effects? Akaike weights only provide information about the set of models from which they are calculated, so in your example, you can't really learn anything from comparing weights for the glmm set of models to weights for the glm set of models (i.e. Akaike weights won't tell you whether the random effect is appropriate). There's a good discussion on testing random effects here. It does specifically state "do not compare lmer models with the corresponding lm fits, or glmer/glm; the log-likelihoods are not commensurate". That said, there is a worked glmer example by Ben Bolker here that does explicitly compare log-likelihoods and AICc values between glm and glmer models. If you do go this route, it will be the AICc values that you want to compare between glm and glmer, not the delta or weight values (which are only meaningful within a set). The fact that the variance of your random effect is high (and sd relatively low) suggests to me that you should retain the random effect. Likewise, comparing the conditional and marginal R2 values (0.8 vs. 0.01) suggests that the random effect is explaining a lot of the variation in your response. Update It doesn't make sense to ask whether the glmer weight values are "too low". An Akaike weight is the probability that a model is the 'best', given the data and the set of models under consideration. If the top models in a given set all have low and similar Akaike weights, it just means that no one model in that set stands out as being much better than the other models in that set... it doesn't mean that the top models in your set are bad models in an absolute sense. It doesn't tell you either way. Akaike weights are only useful for relative comparison. To assess goodness of fit in an absolute sense, you could look at R2 values. Your R2c of 80% suggests to me that that particular model is very good, though that's mostly driven by the random effect (i.e. individuals varied a lot in habitat use). Since your aim is to get fixed effect coefficients for use in a RSF, and since no single model stands out as the 'best', model averaging would be a good approach. If mod is your glmer object, you can get model-averaged coefficients to use in your RSF as follows: mod_dredge <- dredge(mod) mod_avg <- model.avg(mod_dredge) coefTable(mod_avg) Since your fixed effects have relatively low explanatory power (R2m = 1%), an RSF based on those coefficients may not have much predictive power. As for what you can do to improve this model, it's tough to say... it could just be a biological reality that your study animals vary a lot in habitat use (thus the important random effect), but overall habitat use by these animals is simply not strongly related to the set of specific fixed effects you have examined.
GLMM- relationship between AICc weight and random effects? Akaike weights only provide information about the set of models from which they are calculated, so in your example, you can't really learn anything from comparing weights for the glmm set of models to
55,939
In PCA, do the principal components beyond the first optimize any expression?
The first $k$ principal components minimize the squared reconstruction error. That is, we project the data onto the first $k$ principal components, then back into the original space to obtain a 'reconstruction' of the data. The first $k$ principal components are the vectors that minimize the sum of squared distances between each point and its reconstruction (the paper below mentions this point, among many other sources). Among all sets of $k$ vectors, the first principal components do not maximize the sum of the variance of the data projected onto each vector. For example, in many cases we could increase the variance by making all vectors point near the direction of the first principal component. But, if we constrain the vectors to be orthogonal (as PCA does), then the first principal components do indeed have this property (e.g. see here). Another interpretation is that the first $k$ principal components maximize the likelihood of a particular Gaussian latent variable model. See the following paper: Tipping & Bishop (1999). Probabilistic principal component analysis.
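A small numerical check of the reconstruction-error property in R, on arbitrary simulated data:
set.seed(1)
X <- scale(matrix(rnorm(200 * 5), 200, 5) %*% matrix(rnorm(25), 5, 5), scale = FALSE)  # centered data
k <- 2
pca <- prcomp(X, center = FALSE)
V <- pca$rotation[, 1:k]                         # first k principal directions
X_hat <- X %*% V %*% t(V)                        # project onto them, then reconstruct
sum((X - X_hat)^2)                               # squared reconstruction error ...
sum(pca$sdev[-(1:k)]^2) * (nrow(X) - 1)          # ... equals the variation in the discarded components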
In PCA, do the principal components beyond the first optimize any expression?
The first $k$ principal components minimize the squared reconstruction error. That is, we project the data onto the first $k$ principal components, then back into the original space to obtain a 'recon
In PCA, do the principal components beyond the first optimize any expression? The first $k$ principal components minimize the squared reconstruction error. That is, we project the data onto the first $k$ principal components, then back into the original space to obtain a 'reconstruction' of the data. The first $k$ principal components are the vectors that minimize the sum of squared distances between each point and its reconstruction (the paper below mentions this point, among many other sources). Among all sets of $k$ vectors, the first principal components do not maximize the sum of the variance of the data projected onto each vector. For example, in many cases we could increase the variance by making all vectors point near the direction of the first principal component. But, if we constrain the vectors to be orthogonal (as PCA does), then the first principal components do indeed have this property (e.g. see here). Another interpretation is that the first $k$ principal components maximize the likelihood of a particular Gaussian latent variable model. See the following paper: Tipping & Bishop (1999). Probabilistic principal component analysis.
In PCA, do the principal components beyond the first optimize any expression? The first $k$ principal components minimize the squared reconstruction error. That is, we project the data onto the first $k$ principal components, then back into the original space to obtain a 'recon
55,940
Linear regression polynomial slope constraint in R
First of all, let's write the unconstrained model so that the coefficients are consistently ordered (in this case, from lowest to highest degree):
unconstrained_model <- lm(y ~ x + I(x^2) + I(x^3))
Secondly, constrained least squares regression always has higher RMSE on the data sample than unconstrained least squares, unless the latter satisfies the constraints, in which case the two solutions are the same. In practice, if the data don't respect the constraints very well, the constrained fit can be quite bad. In your case the data are decreasing, which is good. But you also have 4 data points where the ordinate is nearly constant (with respect to the range of y, $I_y =[\min(y),\max(y)]$), even if the corresponding variation in x is considerable, with respect to $I_x = [\min(x),\max(x)]$. In other words, you have a large plateau in the center of your sample:
df <- data.frame(x, y)
library(ggplot2)
(p <- ggplot(data = df, aes(x = x, y = y)) + geom_line() + geom_point() + geom_smooth())
This is not a pattern which can be easily fit by a monotone decreasing third degree polynomial. Having said that, let's solve the constrained regression problem. First of all let's get explicit equations for the constraints in terms of the model coefficients. You already noted that the third degree polynomial is decreasing in $I=[0,1]$ if and only if $$3ax^2+2bx+c <0 \quad \forall x \in I $$ Note that $d$ doesn't appear in this inequality, and the reason is obvious: whether our model (the third degree polynomial) is increasing or not doesn't depend on the value of the intercept. Since $3ax^2+2bx+c$ must be negative at 0, we get one of the constraints as $c<0$. Now, to get the other constraint inequalities, we just need to make the substitutions $$ t_1=x,\quad t_2=x^2$$ and note that $$x\in[0,1]\Rightarrow(t_1,t_2)\in [0,1]\times [0,1]$$ We are then led to the simpler problem of imposing a negativity constraint on a linear (degree one) polynomial in two variables: $$f(t_1,t_2)=3at_2+2bt_1+c <0 \quad \forall (t_1,t_2) \in [0,1]\times[0,1] $$ $f$ is linear and its domain, the unit square $[0,1]^2$, is convex. Thus, if $f$ is negative at the four corners $(0,0)$, $(0,1)$, $(1,0)$, $(1,1)$, it is negative everywhere in $[0,1]^2$. Actually, the only possible values for $(t_1,t_2)$ are those in the lower-right half of the square (the triangle with corners $(0,0)$, $(1,0)$, $(1,1)$), because $$x \in [0,1] \Rightarrow x^2 \le x \Leftrightarrow t_2 \le t_1 $$ Thus, we only need to impose that $f$ is negative at the three corners $(0,0)$, $(1,0)$, $(1,1)$. The negativity condition at $(0,0)$ gives again $c<0$. Imposing it also at $(1,0)$ and $(1,1)$, we finally obtain the three conditions $$c<0,\quad 2b+c <0, \quad 3a+2b+c <0$$ Perfect! Now we have three linear inequality constraints which guarantee that the derivative of our model is negative on $I$, i.e. that the fitted polynomial is decreasing there (they are sufficient conditions, slightly stronger than strictly necessary). To solve a linear least squares problem with equality and/or inequality constraints, R offers a package with a simple and intuitive interface, limSolve.
library(limSolve)
A <- cbind(rep(1, length(x)), x, x^2, x^3)   # columns correspond to the coefficients d, c, b, a
b <- y
G <- matrix(nrow = 3, ncol = 4, byrow = TRUE,
            data = c(0, -1, -2, -3,          # 3a + 2b + c <= 0
                     0, -1, -2,  0,          # 2b + c <= 0
                     0, -1,  0,  0))         # c <= 0
h <- rep(0, 3)
constrained_model <- lsei(A = A, B = b, G = G, H = h, type = 2)
Great, now let's get predictions for both models. lm objects have a predict method, but no equivalent method is available for the results of lsei.
Thus we'll use a different approach and define a my_predict function:
my_predict <- function(x, coefficients) {
  X <- cbind(rep(1, length(x)), x, x^2, x^3)
  predictions <- X %*% coefficients
}
Then
# compute predictions
xpred <- seq(0, 1, len = 100)
predictions_constrained <- my_predict(xpred, constrained_model$X)
predictions_unconstrained <- my_predict(xpred, unconstrained_model$coefficients)
df2 <- data.frame(xpred, predictions_unconstrained, predictions_constrained)
# plot results
p <- ggplot(data = df, aes(x = x, y = y, color = "data")) + geom_point() +
  geom_line(data = df2, aes(x = xpred, y = predictions_unconstrained, color = "unconstrained fit")) +
  geom_line(data = df2, aes(x = xpred, y = predictions_constrained, color = "constrained fit"))
p
As expected, the constrained fit is considerably worse than the unconstrained one. We can also compute the $R^2$, and get the same picture:
SS <- sum((y - mean(y))^2)
RSS_unconstrained <- sum((unconstrained_model$residuals)^2)
RSS_constrained <- sum((A %*% constrained_model$X - y)^2)
R2_constrained <- 1 - RSS_constrained / SS
R2_unconstrained <- 1 - RSS_unconstrained / SS
The constrained model explains only about 70% of the total variation, while the unconstrained model explains about 83% of the total variation. As a final note, the fact that the fit is worse on the sample data doesn't necessarily mean that the constrained model is worse than the unconstrained one in terms of predictive accuracy. If, for example, the ideal model obeys the constraints, it may be that the constrained model has lower test error on unseen data. Concerning the new set of data:
x <- c(0.01041667, 0.30208333, 0.61458333, 0.65625000, 0.83333333)
y <- c(772, 607, 576, 567, 550)
Repeating exactly the same calculations leads to a constrained model having the highest-degree term equal to zero:
> constrained_model
$X
                   x
759.8756 -449.1166  224.5583    0.0000
$residualNorm
[1] 0
$solutionNorm
[1] 1854.049
$IsError
[1] FALSE
$type
[1] "lsei"
This polynomial is nonincreasing on $I=[0,1]$, as required, but it has a minimum at $x=1$, so it would start increasing for $x>1$. As a matter of fact, the derivative is $2bx+c$ (since $a=0$), which for $x=1$ becomes
> 2 * constrained_model$X[3] + constrained_model$X[2]
-6.82121e-13
which, given the accuracy of the computations involved, is basically 0. The fact that $x=1$ is a minimum is also evident from the plot.
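As a quick final check, one can verify the monotonicity of the constrained fit numerically, continuing with the objects defined above (this relies on my_predict and constrained_model as built earlier):
grid <- seq(0, 1, len = 1000)
fitted_grid <- my_predict(grid, constrained_model$X)
all(diff(fitted_grid) <= 1e-8)     # TRUE: the constrained fit is non-increasing on [0, 1]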
Linear regression polynomial slope constraint in R
First of all, let's write the unconstrained model so that the coefficients are consistently ordered (in this case, from lowest to highest degree): unconstrained_model <- lm(y ~ x + I(x^2) +I(x^3)) Se
Linear regression polynomial slope constraint in R First of all, let's write the unconstrained model so that the coefficients are consistently ordered (in this case, from lowest to highest degree): unconstrained_model <- lm(y ~ x + I(x^2) +I(x^3)) Secondly, constrained least square regression always has higher RMSE on the data sample than unconstrained least square, unless the latter satisfies the constraints, in which case the two solutions are the same. In practice, if the data don't respect the constraints very well, the constrained fit can be quite bad. In your case the data are decreasing, which is good. But you also have 4 data points where the ordinate is nearly constant (with respect to the range of y, $I_y =[\min(y),\max(y)]$), even if the corresponding variation in x is considerable, with respect to $I_x = [\min(x),\max(x)]$.In other words, you have a large plateau in the center of your sample: df <- data.frame(x,y) library(ggplot2) (p <- ggplot(data=df,aes(x=x,y=y)) + geom_line()+geom_point()+geom_smooth()) This is not a pattern which can be easily fit by a monotone decreasing third degree polynomial. Having said that, let's solve the constrained regression problem. First of all let's get explicit equations for the constraints in terms of of the model coefficients. You already noted that the third degree polynomial is decreasing in $I=[0,1]$ if and only if $$3ax^2+2bx+c <0 \quad \forall x \in I $$ Note that $d$ doesn't appear in this inequality, and the reason is obvious -whether our model (the third degree polynomial) is increasing or not, doesn't depend on the value of the intercept. Since $3ax^2+2bx+c$ must be negative in 0, we get one of the constraints as $c<0$. Now, to get the other constraint inequalities, we just need to make the substitutions $$ t_1=x,\quad t_2=x^2$$ and note that $$x\in[0,1]\Rightarrow(t_1,t_2)\in [0,1]\times [0,1]$$ We are then led to the simpler problem of imposing a negativity constraint on a linear (degree one) polynomial in two variables: $$f(t_1,t_2)=3at_2+2bt_1+c <0 \quad \forall (t_1,t_2) \in [0,1]\times[0,1] $$ $f$ is linear and its domain, the unit square $[0,1]^2$, is convex. Then if $f$ is positive in the four corners $(0,0)$, $(0,1)$, $(1,0)$, $(1,1)$, it is positive everywhere in $[0,1]^2$. Actually, the only possible values for $(t_1,t_2)$ are those in the lower-right half of the square (the triangle with corners $(0,0)$, $(0,1)$, $(1,1)$), because $$x \in [0,1] \Rightarrow x^2 \le x \Leftrightarrow t_2 \le t_1 $$ Thus, we only need to impose that $f$ is negative in the three corners $(0,0)$, $(0,1)$, $(1,1)$. The negativity condition in $(0,0)$ gives again $c<0$. Imposing it also in $(1,0)$ and $(1,1)$, we finally obtain the three conditions $$c<0,\quad3a+c <0, \quad 3a+2b+c <0$$ Perfect! Now we have three linear inequality constraints which express the condition that our model is negative in $I$. To solve a linear least square problem with equality and/or inequality constraints, R offers a package with a simple and intuitive interface, limSolve. library(limSolve) A <- cbind(rep(1,length(x)),x,x^2,x^3) b <- y G <- matrix(nrow=3,ncol=4,byrow = TRUE,data = c(0, -1,-2,-3,0,-1,-2,0,0,-1,0,0)) h <- rep(0,3) constrained_model <- lsei(A = A, B = b, G = G, H = h, type=2) Great, now let's get predictions for both models. lm objects have a predict method, but no equivalent method is available for the results of lsei. 
Thus we'll use a different approach and define a my_predict function my_predict <- function(x,coefficients){ X <- cbind(rep(1,length(x)),x,x^2,x^3) predictions <- X%*%coefficients } Then # compute predictions xpred <- seq(0,1,len=100) predictions_constrained <- my_predict(xpred,constrained_model$X) predictions_unconstrained <- my_predict(xpred,unconstrained_model$coefficients) df2 <- data.frame(xpred,predictions_unconstrained,predictions_constrained) # plot results p <- ggplot(data = df,aes(x = x, y = y,color = "data")) + geom_point() + geom_line(data = df2, aes(x = xpred, y = predictions_unconstrained, color = "unconstrained fit")) + geom_line(data = df2, aes(x = xpred, y = predictions_constrained, color = "constrained fit")) p As expected, the constrained fit is considerably worse than the unconstrained one. We can also compute the $R^2$, and get the same picture: SS <-sum((y-mean(y))^2) RSS_unconstrained <- sum((unconstrained_model$residuals)^2) RSS_constrained <- sum((A%*%constrained_model$X-y)^2) R2_constrained <- 1-RSS_constrained/SS R2_unconstrained <- 1-RSS_unconstrained/SS The constrained model explains only about 70% of the total variation, while the unconstrained model explains about 83% of the total variation. As a final note, the fact that the fit is worse on the sample data doesn't necessarily mean that the constrained model is worse than the unconstrained one, in terms of predictive accuracy. If, for example, the ideal model obeys the constraints, it may be that the constrained model has lower test error on unseen data. Concerning the new set of data: x <- c(0.01041667, 0.30208333, 0.61458333, 0.65625000, 0.83333333) y <- c(772, 607, 576, 567, 550) Repeating exactly the same calculations leads to a constrained model having the highest degree term equal to zero: > constrained_model $X x 759.8756 -449.1166 224.5583 0.0000 $residualNorm [1] 0 $solutionNorm [1] 1854.049 $IsError [1] FALSE $type [1] "lsei" This polynomial is nonincreasing on $I=[0,1]$, as required, but it has a minimum in $x=1$, thus it would start increasing for $x>1$. As a matter of fact, the derivative is $2bx+c$ (since $a=0$), which for $x=1$ becomes > 2*constrained_model$X[3]+constrained_model$X[2] -6.82121e-13 which, given the accuracy of the computations involved, is basically 0. The fact that $x=1$ is a minimum is also evident from the plot: This
Linear regression polynomial slope constraint in R First of all, let's write the unconstrained model so that the coefficients are consistently ordered (in this case, from lowest to highest degree): unconstrained_model <- lm(y ~ x + I(x^2) +I(x^3)) Se
55,941
Working with percentages of positive variables
As mentioned in the OP and comments, the sample mean is an unbiased estimator of the population mean, so we should not fear any "positive bias" when using it to obtain a point estimate of the forecasted impact ($\mu$). That said, if the percentages tend to be highly variable, which may happen if you work with small denominators, you have to factor that in either: by properly reflecting this natural variability in the uncertainty of your estimate of $E(\mu)$ (e.g. via interval estimates or the posterior, which can inform final decisions via e.g. ROPE), or by making sure that the observed result (e.g. $\bar{x}$) is significant enough before embracing it (e.g. NHST). In the former, you would factor that in through the likelihood or the prior; alternatively it would come directly from the estimator. If using the latter, the fear of easily embracing a false positive when assessing $\bar{x}$ (because it has naturally high variation) is unfounded, since the distribution under the null $H_0$ should have a proper (as in "long enough") positive tail. Important note: As discussed in the comments, when working with expectations over time, where the change is applied repeatedly, the sample mean is biased as it does not account for compounding. However, when we are considering single-time changes, the sample mean is unbiased. In the latter case, we ask ourselves "What would happen if I apply the same change I tried on my experimental units to other units (once)?". In other words, with no compounding the sample mean is unbiased. With compounding it is of course biased, and "log returns" or the "geometric mean" are better suited.
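A three-line R illustration of the compounding point; the numbers are arbitrary:
r <- c(0.50, -0.40)                   # +50% one period, -40% the next
mean(r)                               # arithmetic mean: +5% per period
prod(1 + r)^(1 / length(r)) - 1       # geometric mean: about -5.1% per period (wealth actually shrank)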
Working with percentages of positive variables
As mentioned in the OP and comments, the sample mean is an unbiased estimator of the population mean so we should not fear any "positive bias" when using it to obtain a point estimate of the forecaste
Working with percentages of positive variables As mentioned in the OP and comments, the sample mean is an unbiased estimator of the population mean so we should not fear any "positive bias" when using it to obtain a point estimate of the forecasted impact ($\mu$). That said, if the percentages tend to be highly variable, which may happen if you work with small denominators, you have to factor that in either: by properly reflecting this natural variability in your uncertainty of your estimation of $E(\mu)$ (e.g. via interval estimates or the posterior, which can inform final decisions via e.g. ROPE). or by making sure that the observed result (e.g. $\bar{x}$) is significant enough before embracing it (e.g. NHST). In the former, you would factor that in through the likelihood or the prior. Alternatively it would come directly from the estimator. If using the latter, the fear of easily embracing a false positive when assessing $\bar{x}$ (because is has a natural high variation), is unfounded since the distribution under the null $H_0$ should have a proper (as in "long enough") positive tail. Important note: As discussed in the comments, when working with expectations over time, where the change is applied repeatedly, the sample mean is biased as it does not account for compounding. However, when we are considering single-time changes, the sample mean is unbiased. In the latter, we ask ourselves "What would happen if I apply the same change I tried my experimental units, to other units (once)?". In other words, with no compounding, the sample mean is unbiased. With compounding it is of course biased and "log returns" or the "geometric mean" are better suited.
Working with percentages of positive variables As mentioned in the OP and comments, the sample mean is an unbiased estimator of the population mean so we should not fear any "positive bias" when using it to obtain a point estimate of the forecaste
55,942
Working with percentages of positive variables
I think it depends on what you want to do. Looking at your example from finance, it seems to me as if you want to estimate your total return (in euro or dollar) for a portfolio, using an estimated percentage return on a sample. Let's say you have a sample of securities with values $v_i$, and returns $r_i$, $i=1, 2, \dots n$, so the rates of return are $rr_i=\frac{r_i}{v_i}$. This is a sample of your whole portfolio of size $N$. The total value of your portfolio is $V=\sum_{i=1}^N v_i$. The total return of the portfolio (in euro or dollar) is $R=\sum_{i=1}^N r_i$. Your goal is to estimate the rate of return of the whole portfolio $\frac{R}{V}$, using only information from the sample. In this case you can estimate the rate of return $\widehat{rr}$ from the sample in two different ways: Mean of the returns in the sample : $\widehat{rr}^{(1)}=\frac{1}{n} \sum_{i=1}^n rr_i$ (this is what you do in the example) the second option is to use $\widehat{rr}^{(2)}=\frac{\bar{r}}{\bar{v}} $, (where $\bar{r}=\frac{1}{n} \sum_{i=1}^n r_i$ and $\bar{v}=\frac{1}{n} \sum_{i=1}^n v_i$) or the ratio of the average return (in euro) in the sample and the average value in the sample. It can be shown that both estimators are biased estimators for $\frac{R}{V}$, but that the bias of the second one is smaller (see this link) So the less biased estimate of the return of your portfolio is $\hat{R}=\frac{\bar{r}}{\bar{v}} V$. The one you talk about is the first one, so the one with the highest bias. There is a whole theory on this, you should google for ''ratio estimator'' or ''ratio of means versus mean of ratios''. I found an example at this link EDIT because of your comment below: first of all, you asked for a reference, follow this link the sample average $\frac{1}{n} \sum_{i=1}^n rr_i$ is an unbiased estimator of the population mean ratio $\frac{1}{N} \sum_{i=1}^N rr_i$ (note the $n$ for the sample and $N$ for the population). However, and as I said at the beginning of my answer, it depends on what you want to do. (a) if you want to estimate the mean population ratio, then you can use the sample average, (b) but you seem to look for a rate of return that you can estimate from your sample and that you want to use to estimate the rate of return for the whole portfolio. If you know the whole portfolio then its total value is $V=\sum_{i=1}^N v_i$ and its total (euro-) return $R=\sum_{i=1}^N r_i$. Note that I use $N$ so it is about the population. This means also that the population mean value is $\mu_v = \frac{1}{N} \sum_{i=1}^N v_i$ and the population mean return (in euro) is $\mu_r = \frac{1}{N} \sum_{i=1}^N r_i$. On the other hand, the rate of return for the population is $\frac{R}{V}=\frac{\sum_{i=1}^N r_i}{\sum_{i=1}^N v_i}$ which is obviously equal to $\frac{\frac{1}{N}\sum_{i=1}^N r_i}{\frac{1}{N}\sum_{i=1}^N v_i}$ which is equal to $\frac{\mu_r}{\mu_v}$, or the population rate of return is equal to the population mean return in euro divided by the population mean value. Hence the idea to use $\frac{\bar{r}}{\bar{v}}$ as an estimator for the population rate of return. So if you want to estimate the rate of return of the whole portfolio then it is better to use the ratio of the averages. If you want to estimate the mean of the individual securities' rate of return then it is better to use the average of the returns.
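A hedged simulation sketch contrasting the two estimators of the portfolio rate of return $R/V$; the value/return mechanism is invented so that rates correlate with values, which is exactly when the two estimands differ:
set.seed(1)
N <- 1000
v <- rlnorm(N, 5, 1)                           # security values
rate <- 0.02 + 0.08 * v / max(v)               # larger positions earn higher rates in this toy setup
r <- v * rate                                  # euro returns
true_RV <- sum(r) / sum(v)                     # population rate of return R/V
est <- replicate(5000, {
  i <- sample(N, 50)                           # a sample of 50 securities
  c(mean_of_ratios = mean(r[i] / v[i]),
    ratio_of_means = mean(r[i]) / mean(v[i]))
})
rowMeans(est) - true_RV                        # the mean of ratios is clearly off target for R/V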
Working with percentages of positive variables
I think it depends on what you want to do. Looking at your example from finance, it seems to me as if you want to estimate your total return (in euro or dollar) for a portfolio, using an estimated pe
Working with percentages of positive variables I think it depends on what you want to do. Looking at your example from finance, it seems to me as if you want to estimate your total return (in euro or dollar) for a portfolio, using an estimated percentage return on a sample. Let's say you have a sample of securities with values $v_i$, and returns $r_i$, $i=1, 2, \dots n$, so the rates of return are $rr_i=\frac{r_i}{v_i}$. This is a sample of your whole portfolio of size $N$. The total value of your portfolio is $V=\sum_{i=1}^N v_i$. The total return of the portfolio (in euro or dollar) is $R=\sum_{i=1}^N r_i$. Your goal is to estimate the rate of return of the whole portfolio $\frac{R}{V}$, using only information from the sample. In this case you can estimate the rate of return $\widehat{rr}$ from the sample in two different ways: Mean of the returns in the sample : $\widehat{rr}^{(1)}=\frac{1}{n} \sum_{i=1}^n rr_i$ (this is what you do in the example) the second option is to use $\widehat{rr}^{(2)}=\frac{\bar{r}}{\bar{v}} $, (where $\bar{r}=\frac{1}{n} \sum_{i=1}^n r_i$ and $\bar{v}=\frac{1}{n} \sum_{i=1}^n v_i$) or the ratio of the average return (in euro) in the sample and the average value in the sample. It can be shown that both estimators are biased estimators for $\frac{R}{V}$, but that the bias of the second one is smaller (see this link) So the less biased estimate of the return of your portfolio is $\hat{R}=\frac{\bar{r}}{\bar{v}} V$. The one you talk about is the first one, so the one with the highest bias. There is a whole theory on this, you should google for ''ratio estimator'' or ''ratio of means versus mean of ratios''. I found an example at this link EDIT because of your comment below: first of all, you asked for a reference, follow this link the sample average $\frac{1}{n} \sum_{i=1}^n rr_i$ is an unbiased estimator of the population mean ratio $\frac{1}{N} \sum_{i=1}^N rr_i$ (note the $n$ for the sample and $N$ for the population). However, and as I said at the beginning of my answer, it depends on what you want to do. (a) if you want to estimate the mean population ratio, then you can use the sample average, (b) but you seem to look for a rate of return that you can estimate from your sample and that you want to use to estimate the rate of return for the whole portfolio. If you know the whole portfolio then its total value is $V=\sum_{i=1}^N v_i$ and its total (euro-) return $R=\sum_{i=1}^N r_i$. Note that I use $N$ so it is about the population. This means also that the population mean value is $\mu_v = \frac{1}{N} \sum_{i=1}^N v_i$ and the population mean return (in euro) is $\mu_r = \frac{1}{N} \sum_{i=1}^N r_i$. On the other hand, the rate of return for the population is $\frac{R}{V}=\frac{\sum_{i=1}^N r_i}{\sum_{i=1}^N v_i}$ which is obviously equal to $\frac{\frac{1}{N}\sum_{i=1}^N r_i}{\frac{1}{N}\sum_{i=1}^N v_i}$ which is equal to $\frac{\mu_r}{\mu_v}$, or the population rate of return is equal to the population mean return in euro divided by the population mean value. Hence the idea to use $\frac{\bar{r}}{\bar{v}}$ as an estimator for the population rate of return. So if you want to estimate the rate of return of the whole portfolio then it is better to use the ratio of the averages. If you want to estimate the mean of the individual securities' rate of return then it is better to use the average of the returns.
Working with percentages of positive variables I think it depends on what you want to do. Looking at your example from finance, it seems to me as if you want to estimate your total return (in euro or dollar) for a portfolio, using an estimated pe
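A minimal simulation sketch of the point in the answer above, in base R with entirely hypothetical numbers: it compares the mean-of-ratios and ratio-of-means estimators against the population rate of return R/V for a portfolio in which the rate of return depends on position size.
# Hypothetical portfolio: larger positions have lower rates of return,
# so the value-weighted rate R/V differs from the simple mean of the ratios.
set.seed(1)
N <- 1000
v <- runif(N, 10, 1000)                    # security values
r <- v * (0.10 - 0.00005 * v) + rnorm(N)   # euro returns
true_rr <- sum(r) / sum(v)                 # population rate of return R/V

est <- replicate(5000, {
  s <- sample(N, 50)                       # sample of n = 50 securities
  c(mean_of_ratios = mean(r[s] / v[s]),
    ratio_of_means = mean(r[s]) / mean(v[s]))
})
rowMeans(est) - true_rr                    # bias of each estimator w.r.t. R/V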
55,943
Working with percentages of positive variables
Asymmetry of percentage/proportion changes: The issue you have raised in your question is not actually a statistical problem. Rather, it is a mathematical problem about the appropriate way to measure percentage changes in a non-negative quantity. If you want to aggregate positive and negative percentage changes for a non-negative variable, you need to measure these positive and negative changes in such a way that they are considered to be of "equal magnitude" when they would cancel out with each other in aggregate. We know that this does not happen if we use the percentage/proportion change as the measure, since this measure is "asymmetric". To see this, suppose we have changes measured by: $$r_{t+1} \equiv \frac{a_{t+1} - a_t}{a_t}.$$ Over a sequence of time periods $t=1,...,T$ the final amount we obtain can be written in terms of the initial amount, and these proportionate-change measures, as: $$a_T = a_0 \prod_{t=1}^{T} (1+r_t).$$ It is easy to see that if we take an initial quantity, and impose an increase and a decrease of the same "magnitude" (i.e., $r_1 = r$ and $r_2 = -r$) then we don't get back to the value we started from. Instead we have the lesser amount: $$a_2 = a_0 (1+r_1)(1+r_2) = a_0 (1+r)(1-r) = a_0 (1-r^2).$$ Hence, we can see here that positive and negative percentage/proportion changes are asymmetric, in the sense that the negative proportionate change is of greater "real" magnitude than a positive proportionate change using the same value. You are right to be concerned that negative changes are of greater force than positive changes with the same headline percentage value. How to measure percentage/proportion changes: Because of the above phenomenon, when we are dealing with data showing percentage/proportion changes in a quantity, we should measure the magnitude of the changes in a way that ensures that positive and negative changes of the same "magnitude" cancel each other out. This is accomplished by measuring change on a logarithmic scale, using the measure: $$\delta_{t+1} \equiv \ln a_{t+1} - \ln a_{t}.$$ (It is worth noting that this measure is related to the proportionate change measure by the equation $\exp (\delta_{t+1}) = 1 + r_{t+1}$.) In finance, this quantity is called the force-of-interest.$^\dagger$ Over a sequence of time periods $t=1,...,T$ the final amount we obtain can be written in terms of the initial amount, and these force-of-interest measures, as: $$a_T = a_0 \exp \Big( \sum_{t=1}^T \delta_t \Big).$$ Unlike percentage changes on the raw scale, on this scale, equal positive and negative values of the force-of-interest cancel out, leaving you with the value you started with. (This is easily seen from the fact that these values enter the above equation through their sum.) How to avoid "bias" in your data analysis: From the above exposition, we can see that all you need to do to deal with this issue is to measure changes on the logarithmic scale, using the force-of-interest. Convert your percentage changes to measures of the force-of-interest and then you will be using a measure that is "symmetric", in the sense that positive and negative changes of the same value cancel out with one another. As other commentators have pointed out, this issue is entirely distinct from the issue of statistical bias in your estimators. You will still need to choose an appropriate statistical analysis to model your data, and you should choose appropriate statistical estimators. 
The advice here will give you a symmetric measure for changes in your variable. If you combine this with an unbiased estimator of the true mean force-of-interest then you will have a reasonable measure of the tendency of your experiment to positively or negatively impact your variable of interest. $^\dagger$ It is worth noting that the force-of-interest is usually defined on a continuous scale, but here we are looking at the analogy over a discrete time scale. For a non-negative accumulation function $a$, taken over continuous time, the force-of-interest is given by $\delta(t) = \frac{d}{dt} \ln a(t)$.
Working with percentages of positive variables
Asymmetry of percentage/proportion changes: The issue you have raised in your question is not actually a statistical problem. Rather, it is a mathematical problem about the appropriate way to measure
Working with percentages of positive variables Asymmetry of percentage/proportion changes: The issue you have raised in your question is not actually a statistical problem. Rather, it is a mathematical problem about the appropriate way to measure percentage changes in a non-negative quantity. If you want to aggregate positive and negative percentage changes for a non-negative variable, you need to measure these positive and negative changes in such a way that they are considered to be of "equal magnitude" when they would cancel out with each other in aggregate. We know that this does not happen if we use the percentage/proportion change as the measure, since this measure is "asymmetric". To see this, suppose we have changes measured by: $$r_{t+1} \equiv \frac{a_{t+1} - a_t}{a_t}.$$ Over a sequence of time periods $t=1,...,T$ the final amount we obtain can be written in terms of the initial amount, and these proportionate-change measures, as: $$a_T = a_0 \prod_{t=1}^{T} (1+r_t).$$ It is easy to see that if we take an initial quantity, and impose an increase and a decrease of the same "magnitude" (i.e., $r_1 = r$ and $r_2 = -r$) then we don't get back to the value we started from. Instead we have the lesser amount: $$a_2 = a_0 (1+r_1)(1+r_2) = a_0 (1+r)(1-r) = a_0 (1-r^2).$$ Hence, we can see here that positive and negative percentage/proportion changes are asymmetric, in the sense that the negative proportionate change is of greater "real" magnitude than a positive proportionate change using the same value. You are right to be concerned that negative changes are of greater force than positive changes with the same headline percentage value. How to measure percentage/proportion changes: Because of the above phenomenon, when we are dealing with data showing percentage/proportion changes in a quantity, we should measure the magnitude of the changes in a way that ensures that positive and negative changes of the same "magnitude" cancel each other out. This is accomplished by measuring change on a logarithmic scale, using the measure: $$\delta_{t+1} \equiv \ln a_{t+1} - \ln a_{t}.$$ (It is worth noting that this measure is related to the proportionate change measure by the equation $\exp (\delta_{t+1}) = 1 + r_{t+1}$.) In finance, this quantity is called the force-of-interest.$^\dagger$ Over a sequence of time periods $t=1,...,T$ the final amount we obtain can be written in terms of the initial amount, and these force-of-interest measures, as: $$a_T = a_0 \exp \Big( \sum_{t=1}^T \delta_t \Big).$$ Unlike percentage changes on the raw scale, on this scale, equal positive and negative values of the force-of-interest cancel out, leaving you with the value you started with. (This is easily seen from the fact that these values enter the above equation through their sum.) How to avoid "bias" in your data analysis: From the above exposition, we can see that all you need to do to deal with this issue is to measure changes on the logarithmic scale, using the force-of-interest. Convert your percentage changes to measures of the force-of-interest and then you will be using a measure that is "symmetric", in the sense that positive and negative changes of the same value cancel out with one another. As other commentators have pointed out, this issue is entirely distinct from the issue of statistical bias in your estimators. You will still need to choose an appropriate statistical analysis to model your data, and you should choose appropriate statistical estimators. 
The advice here will give you a symmetric measure for changes in your variable. If you combine this with an unbiased estimator of the true mean force-of-interest then you will have a reasonable measure of the tendency of your experiment to positively or negatively impact your variable of interest. $^\dagger$ It is worth noting that the force-of-interest is usually defined on a continuous scale, but here we are looking at the analogy over a discrete time scale. For a non-negative accumulation function $a$, taken over continuous time, the force-of-interest is given by $\delta(t) = \frac{d}{dt} \ln a(t)$.
Working with percentages of positive variables Asymmetry of percentage/proportion changes: The issue you have raised in your question is not actually a statistical problem. Rather, it is a mathematical problem about the appropriate way to measure
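A short numerical illustration of the asymmetry, and of the log-scale fix described above, using hypothetical numbers in base R:
# A +10% change followed by a -10% change does not return to the starting value,
# but equal and opposite log changes (force of interest) do.
a0 <- 100
a0 * (1 + 0.10) * (1 - 0.10)        # 99, not 100

delta <- log(1.10)                  # force of interest of a +10% move
a0 * exp(delta) * exp(-delta)       # back to 100 exactly

# Converting observed percentage changes to the log scale before aggregating:
r <- c(0.10, -0.10, 0.05)           # hypothetical percentage changes
delta <- log(1 + r)                 # delta_t = ln a_t - ln a_{t-1}
a0 * exp(sum(delta))                # same as a0 * prod(1 + r)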
55,944
Can PCA allow to identify redundant variables that can be removed before doing cluster analysis?
Also consider sparse principal component analysis, and redundancy analysis. The latter is implemented in the R Hmisc package redun function and involves attempting to predict each predictor from all the other predictors. It handles the "wings" issue discussed above.
Can PCA allow to identify redundant variables that can be removed before doing cluster analysis?
Also consider sparse principal component analysis, and redundancy analysis. The latter is implemented in the R Hmisc package redun function and involves attempting to predict each predictor from all
Can PCA allow to identify redundant variables that can be removed before doing cluster analysis? Also consider sparse principal component analysis, and redundancy analysis. The latter is implemented in the R Hmisc package redun function and involves attempting to predict each predictor from all the other predictors. It handles the "wings" issue discussed above.
Can PCA allow to identify redundant variables that can be removed before doing cluster analysis? Also consider sparse principal component analysis, and redundancy analysis. The latter is implemented in the R Hmisc package redun function and involves attempting to predict each predictor from all
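A sketch of how the Hmisc redun call mentioned above might look; the data frame d and its columns are made up for illustration, and the r2 threshold is an arbitrary choice.
# install.packages("Hmisc") if needed
library(Hmisc)
set.seed(10)
d <- data.frame(wings    = rbinom(100, 1, 0.4),
                feathers = rbinom(100, 1, 0.4),
                legs6    = rbinom(100, 1, 0.3),
                size     = rnorm(100))
# Flags variables that can be predicted from the remaining ones with R^2 >= 0.9
redun(~ wings + feathers + legs6 + size, data = d, r2 = 0.9)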
55,945
Can PCA allow to identify redundant variables that can be removed before doing cluster analysis?
I'll first remark that conventional PCA is not so well adapted to categorical features (such as whether or not an organism has wings). The reason is that the principal components are generally nontrivial linear combinations of the input features, and it's not always clear what that should mean. For instance the first principal component could be composed of something like $45\%$ "has wings", $30\%$ "feathered", and $25\%$ "6 legs" (these percentages can be computed from the first eigenvector). It would be difficult to use this to eliminate a feature because secretly the "right" features are "has wings and feathers" (birds) and "has wings and six legs" (insects). With enough data some sort of PCA can surface this insight, but it's not always obvious in practice. That said, there is some work in this direction; a good buzzword is Multiple correspondence analysis. The more general problem that you're considering is feature selection, and there are many good approaches for eliminating redundancy in features subject to certain constraints; given that you're not working with an astronomical number of observations, mRMR might be good for your problem.
Can PCA allow to identify redundant variables that can be removed before doing cluster analysis?
I'll first remark that conventional PCA is not so well adapted to categorical features (such as whether or not an organism has wings). The reason is that the principal components are generally nontri
Can PCA allow to identify redundant variables that can be removed before doing cluster analysis? I'll first remark that conventional PCA is not so well adapted to categorical features (such as whether or not an organism has wings). The reason is that the principal components are generally nontrivial linear combinations of the input features, and it's not always clear what that should mean. For instance the first principal component could be composed of something like $45\%$ "has wings", $30\%$ "feathered", and $25\%$ "6 legs" (these percentages can be computed from the first eigenvector). It would be difficult to use this to eliminate a feature because secretly the "right" features are "has wings and feathers" (birds) and "has wings and six legs" (insects). With enough data some sort of PCA can surface this insight, but it's not always obvious in practice. That said, there is some work in this direction; a good buzzword is Multiple correspondence analysis. The more general problem that you're considering is feature selection, and there are many good approaches for eliminating redundancy in features subject to certain constraints; given that you're not working with an astronomical number of observations, mRMR might be good for your problem.
Can PCA allow to identify redundant variables that can be removed before doing cluster analysis? I'll first remark that conventional PCA is not so well adapted to categorical features (such as whether or not an organism has wings). The reason is that the principal components are generally nontri
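A tiny made-up example of the point about loadings: even on simple 0/1 traits, each principal component is a mixture of several features, so there is no single column that obviously can be dropped.
traits <- data.frame(wings    = c(1, 1, 1, 1, 0, 0, 0, 0),
                     feathers = c(1, 1, 0, 0, 0, 0, 0, 0),
                     legs6    = c(0, 0, 1, 1, 1, 1, 0, 0))
pc <- prcomp(traits, scale. = TRUE)
round(pc$rotation, 2)   # loadings: every PC combines several traits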
55,946
Can PCA allow to identify redundant variables that can be removed before doing cluster analysis?
Having done the two step exercise of PCA followed by clustering more than a few times, I have developed a strong POV. First, there are lots of good reasons for smoothing your inputs with PCA -- most importantly, redundancy is removed. Next and as @hssay notes, the resulting PCA is a linear combination of all of the inputs. Identifying a subset of features that load maximally on them and retaining only that subset for the cluster solution would destroy variance. Given that, my recommendation is that you use the complete set of components as input to your clustering algorithm. Then there's your question of interpretation. It is a fact in the applied world that people (teams) can, will and do spend enormous amounts of time on interpreting the components. To me, this is a waste of time since they are merely a means to an end...the end being a "good" cluster solution. Once you have generated a partitioning of your information that has good statistical properties, create an output file containing the cluster assignments for each object in your data. Next, generate a back-end interpretation based on the original features that ignores the components (since you don't care what they mean). This back-end analysis can consist of a spreadsheet based on the new output file that has columns for the clusters and the rows are an appropriate measure of central tendency, e.g., mean, median, mode, whatever. To facilitate the interpretation, add an index column for each cluster that represents the ratio of the cluster value to the overall (or grand) value -- include the values for the overall data in your sheet in a separate column. By multiplying that ratio by 100 (and rounding), you create a new heuristic that is kind of like an informal t-test or an IQ score. Indexes between 80 and 120 would be considered "normal" behavior, 120+ is a feature or behavior that is distinctively true for a cluster while indexes 80 and less are features or behaviors that are not representative of that cluster. The more extreme the index, the more that cluster deviates from normative behavior. Just use caution interpreting small values in the denominator as the indexes can get quite large. Another problem with this "quick and dirty" approach to interpretation occurs when some of your values are negative. Negative indexes need more careful consideration. The fact is that people typically don't care and don't want to know how you got to a solution. They just want the answer. Sometimes they're willing to work with you on getting to a final answer, other times they just leave the whole thing up to you. Of course, this discussion ignores the question of what a "good" cluster solution is. That's another story.
Can PCA allow to identify redundant variables that can be removed before doing cluster analysis?
Having done the two step exercise of PCA followed by clustering more than a few times, I have developed a strong POV. First, there are lots of good reasons for smoothing your inputs with PCA -- most i
Can PCA allow to identify redundant variables that can be removed before doing cluster analysis? Having done the two step exercise of PCA followed by clustering more than a few times, I have developed a strong POV. First, there are lots of good reasons for smoothing your inputs with PCA -- most importantly, redundancy is removed. Next and as @hssay notes, the resulting PCA is a linear combination of all of the inputs. Identifying a subset of features that load maximally on them and retaining only that subset for the cluster solution would destroy variance. Given that, my recommendation is that you use the complete set of components as input to your clustering algorithm. Then there's your question of interpretation. It is a fact in the applied world that people (teams) can, will and do spend enormous amounts of time on interpreting the components. To me, this is a waste of time since they are merely a means to an end...the end being a "good" cluster solution. Once you have generated a partitioning of your information that has good statistical properties, create an output file containing the cluster assignments for each object in your data. Next, generate a back-end interpretation based on the original features that ignores the components (since you don't care what they mean). This back-end analysis can consist of a spreadsheet based on the new output file that has columns for the clusters and the rows are an appropriate measure of central tendency, e.g., mean, median, mode, whatever. To facilitate the interpretation, add an index column for each cluster that represents the ratio of the cluster value to the overall (or grand) value -- include the values for the overall data in your sheet in a separate column. By multiplying that ratio by 100 (and rounding), you create a new heuristic that is kind of like an informal t-test or an IQ score. Indexes between 80 and 120 would be considered "normal" behavior, 120+ is a feature or behavior that is distinctively true for a cluster while indexes 80 and less are features or behaviors that are not representative of that cluster. The more extreme the index, the more that cluster deviates from normative behavior. Just use caution interpreting small values in the denominator as the indexes can get quite large. Another problem with this "quick and dirty" approach to interpretation occurs when some of your values are negative. Negative indexes need more careful consideration. The fact is that people typically don't care and don't want to know how you got to a solution. They just want the answer. Sometimes they're willing to work with you on getting to a final answer, other times they just leave the whole thing up to you. Of course, this discussion ignores the question of what a "good" cluster solution is. That's another story.
Can PCA allow to identify redundant variables that can be removed before doing cluster analysis? Having done the two step exercise of PCA followed by clustering more than a few times, I have developed a strong POV. First, there are lots of good reasons for smoothing your inputs with PCA -- most i
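A rough base-R sketch of the back-end "index" table described above, on made-up data: cluster on the components, then profile the clusters on the original features.
set.seed(3)
X  <- data.frame(spend  = rgamma(200, 5, 0.1),
                 visits = rpois(200, 8),
                 tenure = rgamma(200, 3, 0.5))
pc <- prcomp(X, scale. = TRUE)
cl <- kmeans(pc$x, centers = 3, nstart = 25)$cluster   # cluster on all components

grand   <- colMeans(X)                                 # overall (grand) means
profile <- aggregate(X, by = list(cluster = cl), FUN = mean)
index   <- round(100 * sweep(as.matrix(profile[, -1]), 2, grand, "/"))
cbind(cluster = profile$cluster, index)   # ~100 typical; 120+ or below 80 distinctive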
55,947
Can PCA allow to identify redundant variables that can be removed before doing cluster analysis?
Specifically on the question of interpreting the result: you cannot generally pinpoint individual variables contributing maximum variance. What a result of 3 components explaining most of the variance means is this: there are 3 new variables, each derived by taking a linear combination of the original variables, which together represent the maximum variance. This is a known problem with PCA: you lose interpretation power. The so-called factors derived by combining different variables may not have any intuitive meaning. One recommendation is to build your subsequent model (clustering) on the derived variables. So rather than building clusters on 20-column data, you'll build them on 3-column data.
Can PCA allow to identify redundant variables that can be removed before doing cluster analysis?
Specifically on the question of interpreting the result: you cannot generally pinpoint to individual variables contributing maximum variance. What the result of 3 components contributing maximum varia
Can PCA allow to identify redundant variables that can be removed before doing cluster analysis? Specifically on the question of interpreting the result: you cannot generally pinpoint individual variables contributing maximum variance. What a result of 3 components explaining most of the variance means is this: there are 3 new variables, each derived by taking a linear combination of the original variables, which together represent the maximum variance. This is a known problem with PCA: you lose interpretation power. The so-called factors derived by combining different variables may not have any intuitive meaning. One recommendation is to build your subsequent model (clustering) on the derived variables. So rather than building clusters on 20-column data, you'll build them on 3-column data.
Can PCA allow to identify redundant variables that can be removed before doing cluster analysis? Specifically on the question of interpreting the result: you cannot generally pinpoint to individual variables contributing maximum variance. What the result of 3 components contributing maximum varia
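A minimal sketch of that recommendation on toy data: cluster on the first 3 principal-component scores rather than on the original 20 columns.
set.seed(4)
X  <- matrix(rnorm(500 * 20), ncol = 20)
pc <- prcomp(X, scale. = TRUE)
scores3 <- pc$x[, 1:3]                       # the 3 derived variables
km <- kmeans(scores3, centers = 4, nstart = 25)
table(km$cluster)                            # cluster sizes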
55,948
Why are the degrees of freedom for a chi-square test on a 2x2 contingency table always 1?
As far as I know, the degrees of freedom of the Chi Square distribution are related to the number of classes into which a population can be classified, minus the number of linear restrictions used to estimate the parameters. Originally, Karl Pearson proposed the Chi Square statistic to compare observed versus expected values in a contingency table, where you have a sample of size $n$ (which is always fixed) spread over $k$ different classes. He based it on the multinomial distribution, where the sample size is fixed, and arrived at the expression used today: the sum of the squared differences between the observed and expected values, divided by the expected values. At this point, we have $k$ classes and only $k-1$ of them "vary freely". In a 2 by 2 table, we have 2 variables (or two samples), each with 2 levels, and in each one we have $(k-1)=1$ level that varies freely. The total number of cells that vary freely is then $(k-1)(k-1)=1$ again. If you think of one single variable with 4 levels, that wouldn't fit in a contingency table: it's just one factor, and in that case it is correct that a Chi Square goodness-of-fit test will have 3 df.
Why are the degrees of freedom for a chi-square test on a 2x2 contingency table always 1?
As far as I know, degrees of freedom in Chi Square distribution are related to the number of classes a population can be classified minus the linear restrictions used to estimate the parameters. Origi
Why are the degrees of freedom for a chi-square test on a 2x2 contingency table always 1? As far as I know, the degrees of freedom of the Chi Square distribution are related to the number of classes into which a population can be classified, minus the number of linear restrictions used to estimate the parameters. Originally, Karl Pearson proposed the Chi Square statistic to compare observed versus expected values in a contingency table, where you have a sample of size $n$ (which is always fixed) spread over $k$ different classes. He based it on the multinomial distribution, where the sample size is fixed, and arrived at the expression used today: the sum of the squared differences between the observed and expected values, divided by the expected values. At this point, we have $k$ classes and only $k-1$ of them "vary freely". In a 2 by 2 table, we have 2 variables (or two samples), each with 2 levels, and in each one we have $(k-1)=1$ level that varies freely. The total number of cells that vary freely is then $(k-1)(k-1)=1$ again. If you think of one single variable with 4 levels, that wouldn't fit in a contingency table: it's just one factor, and in that case it is correct that a Chi Square goodness-of-fit test will have 3 df.
Why are the degrees of freedom for a chi-square test on a 2x2 contingency table always 1? As far as I know, degrees of freedom in Chi Square distribution are related to the number of classes a population can be classified minus the linear restrictions used to estimate the parameters. Origi
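A small numeric illustration of the "one free cell" argument, with made-up counts: once the total and both margins are fixed, choosing a single cell determines the whole 2x2 table, and chisq.test indeed reports 1 degree of freedom.
n <- 100; row1 <- 40; col1 <- 55             # fixed total and margins (made up)
a <- 25                                      # the one freely chosen cell
tab <- matrix(c(a, row1 - a,
                col1 - a, n - row1 - col1 + a),
              nrow = 2, byrow = TRUE)        # every other cell is forced
addmargins(tab)                              # margins come out as fixed
chisq.test(tab, correct = FALSE)$parameter   # df = 1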
55,949
Why are the degrees of freedom for a chi-square test on a 2x2 contingency table always 1?
This is a much more complicated question than it might first seem, and there was a bitter disagreement between Fisher and Pearson on this question. With modern computing it is easy to demonstrate by simulation that the distribution is $\chi^2_1$, not $\chi^2_3$, e.g. tests <- replicate(1000, { y <- rbinom(400, 1, .2); x <- rep(0:1, each = 200); chisq.test(table(x, y), correct = FALSE)$statistic }); mean(tests); var(tests) which returned 0.9988341 and 1.870081 respectively, close to the mean of 1 and variance of 2 of a $\chi^2_1$ variable. A theoretical point: the marginal probabilities are ancillary, so we will be better off conditioning on them, which gives one df left over. The distribution conditional on the margins is (in large samples) $\chi^2_1$ regardless of what the margins are, so the unconditional distribution should also be $\chi^2_1$, agreeing with simulations.
Why are the degrees of freedom for a chi-square test on a 2x2 contingency table always 1?
This is a much more complicated question that it might first seem, and there was a bitter disagreement between Fisher and Pearson on this question. With modern computing it is easy to demonstrate by
Why are the degrees of freedom for a chi-square test on a 2x2 contingency table always 1? This is a much more complicated question than it might first seem, and there was a bitter disagreement between Fisher and Pearson on this question. With modern computing it is easy to demonstrate by simulation that the distribution is $\chi^2_1$, not $\chi^2_3$, e.g. tests <- replicate(1000, { y <- rbinom(400, 1, .2); x <- rep(0:1, each = 200); chisq.test(table(x, y), correct = FALSE)$statistic }); mean(tests); var(tests) which returned 0.9988341 and 1.870081 respectively, close to the mean of 1 and variance of 2 of a $\chi^2_1$ variable. A theoretical point: the marginal probabilities are ancillary, so we will be better off conditioning on them, which gives one df left over. The distribution conditional on the margins is (in large samples) $\chi^2_1$ regardless of what the margins are, so the unconditional distribution should also be $\chi^2_1$, agreeing with simulations.
Why are the degrees of freedom for a chi-square test on a 2x2 contingency table always 1? This is a much more complicated question that it might first seem, and there was a bitter disagreement between Fisher and Pearson on this question. With modern computing it is easy to demonstrate by
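The conditioning argument can also be checked directly: stats::r2dtable draws 2x2 tables with both margins held fixed, and the Pearson statistic computed on those tables behaves like a chi-squared variable with 1 df (mean about 1, variance about 2). The margins below are arbitrary.
set.seed(5)
tabs <- r2dtable(5000, c(200, 200), c(80, 320))   # tables with fixed row and column margins
stat <- sapply(tabs, function(tb) chisq.test(tb, correct = FALSE)$statistic)
c(mean = mean(stat), var = var(stat))             # approximately 1 and 2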
55,950
Why are the degrees of freedom for a chi-square test on a 2x2 contingency table always 1?
If we have a 2x2 table with two variables and the sample size n is fixed, then in essence the marginal sums of those two variables, taken separately, also add up to n. That comes from the fact that with marginals we only look at one variable and disregard the other completely. We can imagine deleting the line that divides the columns (or the rows), as if we never knew the other variable existed. If we do that, the two columns for each row (or the two rows for each column) get combined (and the numbers added), and n becomes the total of the marginal sums of the rows (or columns) that were left divided as levels of one of the variables. That means that, for each variable, we now only need to fill in 1 of the 2 levels, since the sample size n has become the marginal total, and once we know one frequency the other is determined as n minus that frequency. Hence the degrees-of-freedom formula. The reason why we have to separate the two variables like that, and combine rows and columns, is that we are interested in how likely our observed deviance from the expected values is under the independence model, i.e. when the variables are independent and the frequencies come from such a distribution (the null hypothesis). To do that, we need to use the specific chi-squared distribution whose shape is dictated by the independence model and the appropriate degrees of freedom. In a sense, the independence model that we want to test requires the separation of the variables in order to calculate the degrees of freedom in the correct way.
Why are the degrees of freedom for a chi-square test on a 2x2 contingency table always 1?
If we have a 2x2 table with two variables and sample size n is set, then in essence, the marginal sums of those two variables taken sepparately is also set as n. That comes from the fact that with m
Why are the degrees of freedom for a chi-square test on a 2x2 contingency table always 1? If we have a 2x2 table with two variables and the sample size n is fixed, then in essence the marginal sums of those two variables, taken separately, also add up to n. That comes from the fact that with marginals we only look at one variable and disregard the other completely. We can imagine deleting the line that divides the columns (or the rows), as if we never knew the other variable existed. If we do that, the two columns for each row (or the two rows for each column) get combined (and the numbers added), and n becomes the total of the marginal sums of the rows (or columns) that were left divided as levels of one of the variables. That means that, for each variable, we now only need to fill in 1 of the 2 levels, since the sample size n has become the marginal total, and once we know one frequency the other is determined as n minus that frequency. Hence the degrees-of-freedom formula. The reason why we have to separate the two variables like that, and combine rows and columns, is that we are interested in how likely our observed deviance from the expected values is under the independence model, i.e. when the variables are independent and the frequencies come from such a distribution (the null hypothesis). To do that, we need to use the specific chi-squared distribution whose shape is dictated by the independence model and the appropriate degrees of freedom. In a sense, the independence model that we want to test requires the separation of the variables in order to calculate the degrees of freedom in the correct way.
Why are the degrees of freedom for a chi-square test on a 2x2 contingency table always 1? If we have a 2x2 table with two variables and sample size n is set, then in essence, the marginal sums of those two variables taken sepparately is also set as n. That comes from the fact that with m
55,951
Why are the degrees of freedom for a chi-square test on a 2x2 contingency table always 1?
We have an $M \times N$ contingency table. Under the unrestricted alternative H1 (association allowed), there are $MN-1$ free parameters. Under H0 (independence): $p_{ij} = p_i p_j$, we have $(N-1) + (M-1)$ free parameters, so $$\text{degrees of freedom} = MN-1 - (N-1) - (M-1) = (M-1)(N-1).$$
Why are the degrees of freedom for a chi-square test on a 2x2 contingency table always 1?
We have a $M \times N$ contingency table, under H1: no association, there are $MN-1$ number of free parameters. Under H0: $p_{ij} = p_ip_j$, we have $(N-1) + (M-1)$ free parameters, $$DegreeOfFreedom
Why are the degrees of freedom for a chi-square test on a 2x2 contingency table always 1? We have an $M \times N$ contingency table. Under the unrestricted alternative H1 (association allowed), there are $MN-1$ free parameters. Under H0 (independence): $p_{ij} = p_i p_j$, we have $(N-1) + (M-1)$ free parameters, so $$\text{degrees of freedom} = MN-1 - (N-1) - (M-1) = (M-1)(N-1).$$
Why are the degrees of freedom for a chi-square test on a 2x2 contingency table always 1? We have a $M \times N$ contingency table, under H1: no association, there are $MN-1$ number of free parameters. Under H0: $p_{ij} = p_ip_j$, we have $(N-1) + (M-1)$ free parameters, $$DegreeOfFreedom
55,952
Good text on nonlinear regression (M.S. graduate-level)?
Two pretty standard references would be Bates, D.M. & Watts, D.G. (1988), Nonlinear Regression Analysis and Its Applications, Wiley, New York. Seber, G.A.F. & Wild, C.J. (1989), Nonlinear Regression, Wiley, New York.
Good text on nonlinear regression (M.S. graduate-level)?
Two pretty standard references would be Bates, D.M. & Watts, D.G. (1988), Nonlinear Regression Analysis and Its Applications, Wiley, New York. Seber, G.A.F. & Wild, C.J. (1989), Nonlinear Regression
Good text on nonlinear regression (M.S. graduate-level)? Two pretty standard references would be Bates, D.M. & Watts, D.G. (1988), Nonlinear Regression Analysis and Its Applications, Wiley, New York. Seber, G.A.F. & Wild, C.J. (1989), Nonlinear Regression, Wiley, New York.
Good text on nonlinear regression (M.S. graduate-level)? Two pretty standard references would be Bates, D.M. & Watts, D.G. (1988), Nonlinear Regression Analysis and Its Applications, Wiley, New York. Seber, G.A.F. & Wild, C.J. (1989), Nonlinear Regression
55,953
Real-world example on significance testing with large samples
Page 205 of Meehl (1990) briefly describes a study of 57,000 high-school seniors in which 92% of 990 different cross-tabulations (between 45 variables; 45 choose 2 is 990) were statistically significant. Most people who've heard of this study are probably familiar with it from Cohen (1994). Standing, Sproule, and Khouzam (1991) examined a dataset of 135 variables from 2,058 Canadian grade-school students. 4,506 of 17,936 correlation coefficients (25%) had a two-tailed $p < .001$. In this age of big data, a similar study to Meehl's and Standing et al.'s with a very large sample size and number of variables would be nice. But we do have Kramer, Guillory, and Hancock (2014), a study of some 690,000 Facebook users that found significant effects that were positively microscopic, such as a reduction of positive posts in users' News Feeds decreasing the percentage of positive words in the users' own posts by $-0.1\%$ [$t(310,044) = -5.63$, $P < 0.001$, Cohen's $d = 0.02$]. What's really rich is that Kramer et al. pooh-pooh another significant effect they didn't want to find on the justification that it was tiny: "Positivity and negativity were evaluated separately given evidence that they are not simply opposite ends of the same spectrum. Indeed, negative and positive word use scarcely correlated [$r = -0.04$, $t(620,587) = -38.01$, $P < 0.001$]." (p. 8,789). Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49(12), 997–1003. doi:10.1037/0003-066X.49.12.997 Kramer, A. D., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24), 8788–8790. doi:10.1073/pnas.1320040111 Meehl, P. E. (1990). Why summaries of research on psychological theories are often uninterpretable. Psychological Reports, 66(1), 195–244. doi:10.2466/pr0.1990.66.1.195 Standing, L., Sproule, R., & Khouzam, N. (1991). Empirical statistics: IV. Illustrating Meehl's sixth law of soft psychology: Everything correlates with everything. Psychological Reports, 69(1), 123–126. doi:10.2466/PR0.69.5.123-126
Real-world example on significance testing with large samples
Page 205 of Meehl (1990) briefly describes a study of 57,000 high-school seniors in which 92% of 990 different cross-tabulations (between 45 variables; 45 choose 2 is 990) were statistically significa
Real-world example on significance testing with large samples Page 205 of Meehl (1990) briefly describes a study of 57,000 high-school seniors in which 92% of 990 different cross-tabulations (between 45 variables; 45 choose 2 is 990) were statistically significant. Most people who've heard of this study are probably familiar with it from Cohen (1994). Standing, Sproule, and Khouzam (1991) examined a dataset of 135 variables from 2,058 Canadian grade-school students. 4,506 of 17,936 correlation coefficients (25%) had a two-tailed $p < .001$. In this age of big data, a similar study to Meehl's and Standing et al.'s with a very large sample size and number of variables would be nice. But we do have Kramer, Guillory, and Hancock (2014), a study of some 690,000 Facebook users that found signifcant effects that were positively microscopic, such as a reduction of positive posts in users' News Feeds decreasing the percentage of positive words in the users' own posts by $-0.1\%$ [$t(310,044) = -5.63$, $P < 0.001$, Cohen's $d = 0.02$]. What's really rich is that Kramer et al. pooh-pooh another significant effect they didn't want to find on the justification that it was tiny: "Positivity and negativity were evaluated separately given evidence that they are not simply opposite ends of the same spectrum. Indeed, negative and positive word use scarcely correlated [$r = -0.04$, $t(620,587) = -38.01$, $P < 0.001$]." (p. 8,789). Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49(12), 997–1003. doi:10.1037/0003-066X.49.12.997 Kramer, A. D., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24), 8788–8790. doi:10.1073/pnas.1320040111 Meehl, P. E. (1990). Why summaries of research on psychological theories are often uninterpretable. Psychological Reports, 66(1), 195–244. doi:10.2466/pr0.1990.66.1.195 Standing, L., Sproule, R., & Khouzam, N. (1991). Empirical statistics: IV. Illustrating Meehl's sixth law of soft psychology: Everything correlates with everything. Psychological Reports, 69(1), 123–126. doi:10.2466/PR0.69.5.123-126
Real-world example on significance testing with large samples Page 205 of Meehl (1990) briefly describes a study of 57,000 high-school seniors in which 92% of 990 different cross-tabulations (between 45 variables; 45 choose 2 is 990) were statistically significa
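For a sense of scale, the t statistic for a correlation of that size at that sample size is easy to reproduce in base R (numbers taken from the Kramer et al. figures quoted above; the result differs somewhat from the published t, presumably because the reported correlation is rounded).
r <- -0.04
n <- 620589                              # df + 2 from the reported t test
t_stat <- r * sqrt(n - 2) / sqrt(1 - r^2)
t_stat                                   # around -31: a tiny r, overwhelming "evidence"
2 * pt(-abs(t_stat), df = n - 2)         # p-value numerically indistinguishable from 0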
55,954
Real-world example on significance testing with large samples
The thing that I find so teeth-grating about this expression: "null hypothesis is almost always false" is that it underscores the sloppy way with which modern, frequentist hypothesis testing is done. If you adhere to that framework, then it is true, virtually all causal relations are in some, albeit minuscule and complex, way statistically true and can be found if one were to sample enough. To me, it begs for a return to Fisherian testing. To remind people, Fisher never advocated a decision rule approach to significance testing, he merely said that a $p$-value should be compared to the statistical power of the analysis. What this does is that it requires the investigator to a priori specify what they might consider a significant effect. By doing so, in the interpretation of results, it is made plainly apparent that the results are coming from an overpowered analysis. Results from overpowered analyses usually report very small effects, and the power to detect such an effect is usually very low. So while the $p$-value is very significant, the power is very small, and we call into question how "significant" these findings really are. On the other hand, when you compare the power for the a priori effect size, overpowered analyses will have powers that are so large, and p-values that are so small, that they cannot be practically compared. This illustrates the intuitive discrepancy between what the researcher said they would find, and what they actually found. For instance, suppose you have a trial to see the effect of blood pressure lowering meds. You realize, "ah! In power analyses, the investigator used a mean difference of 1.20 mmHg of blood pressure thinking that was a clinically significant effect, but in their analyses found a difference of 0.0300 mmHg with a 95% confidence interval 0.0299 - 0.0301 mmHg." And at that point you realize while these results are statistically significant, they really aren't clinically significant.
Real-world example on significance testing with large samples
The thing that I find so teethgrating about this expression: "null hypothesis is almost always false" is that it underscores the sloppy way with which modern, frequentist hypothesis testing is done
Real-world example on significance testing with large samples The thing that I find so teethgrating about this expression: "null hypothesis is almost always false" is that it underscores the sloppy way with which modern, frequentist hypothesis testing is done. If you adhere to that framework, then it is true, virtually all causal relations are in some, albeit miniscule and complex, way statistically true and can be found if one were to sample enough. To me, it begs for a return to Fisherian testing. To remind people, Fisher never advocated a decision rule approach to significance testing, he merely said that a $p$-value should be compared to the statistical power of the analysis. What this does is that it requires the investigator to a priori specify what they might consider a significant effect. By doing so, in the interpretation of results, it is made plainly apparent that the results are coming from an overpowered analysis. Results from overpowered analyses usually report very small effects, and the power to detect such an effect is usually very low. So while the $p$-value is very significant, the power is very small, and we call into question how "significant" these findings really are. On the other hand, when you compare the power for the apriori effect size, overpowered analyses will have powers that are so large, and p-values that are so small they cannot be practically compared. This illustrates the intuitive discrepancy between what the researcher said they would find, and what they actually found. For instance, suppose you have a trial to see the effect of blood pressure lowering meds. You realize, "ah! In power analyses, the investigator used a mean difference of 1.20 mmHg of blood pressure thinking that was a clinically significant effect, but in their analyses found a difference of 0.0300 mmHg with a 95% confidence interval 0.0299 - 0.0301 mmHg." And at that point you realize while these results are statistically significant, they really aren't clinically significant.
Real-world example on significance testing with large samples The thing that I find so teethgrating about this expression: "null hypothesis is almost always false" is that it underscores the sloppy way with which modern, frequentist hypothesis testing is done
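A hedged sketch of that comparison using stats::power.t.test; the standard deviation and the per-group sample size below are invented purely for illustration.
# Sample size needed for the effect the investigator called clinically meaningful
power.t.test(delta = 1.2, sd = 10, sig.level = 0.05, power = 0.8)
# Power for the tiny effect actually observed, at a huge (made-up) n per group
power.t.test(n = 150000, delta = 0.03, sd = 10, sig.level = 0.05)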
55,955
Real-world example on significance testing with large samples
I think you should look into the American Statistical Association's statement on p-values. It states that researchers, including those doing big-data research, should not draw conclusions from p-values by themselves.
Real-world example on significance testing with large samples
I think you should look into the American Statistical Association's statement on p-values. It states that researchers, including those doing big-data research, should not draw conclusions from p-values by themselves.
Real-world example on significance testing with large samples I think you should look into the American Statistical Association's statement on p-values. It states that researchers, including those doing big-data research, should not draw conclusions from p-values by themselves.
Real-world example on significance testing with large samples I think you should look into the American Statistical Association's statement on p-values. It states that researchers, including those doing big-data research, should not draw conclusions from p-values by themselves.
55,956
Constrained optimization in R
Using the answer given by @crayfish and a detailed answer on how to construct constraints here, I was able to come up with a solution. #define F and S here F = c(10,10,5) S = c(8,8,9,8,4) #loss_fun: to be minimized loss_fun <- function(A){ P = matrix(A, nrow = n,ncol = m, byrow=T) T = sweep(P, 2, S, "*") # multiply column j of the proportion matrix by S[j] F2 = rowSums(T) # Predicted values of F E = F - F2 # Error return(sum(E*E)) } n = length(F) m = length(S) #Initial solution (theta) P_init = c(rep(0.1,n*m)) # Creating Constraints vi = matrix(rep(0,n*m*n*(m+1)),ncol = n*m, byrow = T) for (i in 1:n){ for (j in 1:(n*m)) { if (j <= m * i & j > m * (i-1)) vi[i,j] = -1 } } for (i in (n+1):(n*m+n)) { for (j in 1:(n*m)) { if ((i-n) == j) vi[i,j] = 1 } } myci = c(rep(-1,n),rep(0,n*m)) # check if initial value is in feasible region vi %*% P_init - myci #run the optimization module z = constrOptim(P_init,loss_fun,NULL,ui=vi, ci=myci) #result P_final = matrix(z$par,nrow=n,byrow=T) Since creation of constraints may not be very easy to understand from the code, here is a visualization of constraints when n = 3 and m = 4 -1. p11 + -1. p12 + -1. p13 + -1. p14 >= -1 -1. p21 + -1. p22 + -1. p23 + -1. p24 >= -1 -1. p31 + -1. p32 + -1. p33 + -1. p34 >= -1 p11 >= 0 p12 >= 0 p13 >= 0 p14 >= 0 p21 >= 0 p22 >= 0 p23 >= 0 p24 >= 0 p31 >= 0 p32 >= 0 p33 >= 0 p34 >= 0
Constrained optimization in R
Using answer given by @crayfish and a detailed answer on how to put construct constraints here, I was able to come up with a solution. #define F and S here F = c(10,10,5) S = c(8,8,9,8,4) #loss_fun:
Constrained optimization in R Using the answer given by @crayfish and a detailed answer on how to construct constraints here, I was able to come up with a solution. #define F and S here F = c(10,10,5) S = c(8,8,9,8,4) #loss_fun: to be minimized loss_fun <- function(A){ P = matrix(A, nrow = n,ncol = m, byrow=T) T = sweep(P, 2, S, "*") # multiply column j of the proportion matrix by S[j] F2 = rowSums(T) # Predicted values of F E = F - F2 # Error return(sum(E*E)) } n = length(F) m = length(S) #Initial solution (theta) P_init = c(rep(0.1,n*m)) # Creating Constraints vi = matrix(rep(0,n*m*n*(m+1)),ncol = n*m, byrow = T) for (i in 1:n){ for (j in 1:(n*m)) { if (j <= m * i & j > m * (i-1)) vi[i,j] = -1 } } for (i in (n+1):(n*m+n)) { for (j in 1:(n*m)) { if ((i-n) == j) vi[i,j] = 1 } } myci = c(rep(-1,n),rep(0,n*m)) # check if initial value is in feasible region vi %*% P_init - myci #run the optimization module z = constrOptim(P_init,loss_fun,NULL,ui=vi, ci=myci) #result P_final = matrix(z$par,nrow=n,byrow=T) Since creation of constraints may not be very easy to understand from the code, here is a visualization of constraints when n = 3 and m = 4 -1. p11 + -1. p12 + -1. p13 + -1. p14 >= -1 -1. p21 + -1. p22 + -1. p23 + -1. p24 >= -1 -1. p31 + -1. p32 + -1. p33 + -1. p34 >= -1 p11 >= 0 p12 >= 0 p13 >= 0 p14 >= 0 p21 >= 0 p22 >= 0 p23 >= 0 p24 >= 0 p31 >= 0 p32 >= 0 p33 >= 0 p34 >= 0
Constrained optimization in R Using answer given by @crayfish and a detailed answer on how to put construct constraints here, I was able to come up with a solution. #define F and S here F = c(10,10,5) S = c(8,8,9,8,4) #loss_fun:
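A quick sanity check that can be run after the optimizer in the solution above finishes; it assumes the objects F, S and P_final from that code are still in the workspace.
as.vector(P_final %*% S)   # predicted F for each row; compare with F
F                          # the targets
rowSums(P_final)           # each row's proportions should not exceed 1
range(P_final)             # all entries should be non-negative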
55,957
Constrained optimization in R
I'm not sure if I'm right, but the help for constrOptim {stats} says theta: numeric (vector) starting value (of length p): must be in the feasible region, and your theta is a matrix. I think the following is close to your idea (the input is a vector and the function reshapes it into a matrix). [EDITED] Not actually equivalent, because the constraints differ. Please refer to the OP's answer. F = c(10,10,5) S = c(8,8,8,4) n = length(F) m = length(S) P_init2 = rep(0, 12) loss_fun2 <- function(a){ P <- matrix(a, nrow=n, ncol=m) T = sweep(P, 2, S, "*") # multiply column j of P by S[j] F2 = rowSums(T) E = F - F2 return(sum(E*E)) } x = loss_fun2(rep(0, 12)) x # 225 z = constrOptim(P_init2, loss_fun2, NULL, ui = rep(-1, n * m), ci = -1)
Constrained optimization in R
I'm not sure if I'm right but help constrOptim {stats} say theta: numeric (vector) starting value (of length p): must be in the feasible region, and your theta is a matrix. I think here is near to yo
Constrained optimization in R I'm not sure if I'm right, but the help for constrOptim {stats} says theta: numeric (vector) starting value (of length p): must be in the feasible region, and your theta is a matrix. I think the following is close to your idea (the input is a vector and the function reshapes it into a matrix). [EDITED] Not actually equivalent, because the constraints differ. Please refer to the OP's answer. F = c(10,10,5) S = c(8,8,8,4) n = length(F) m = length(S) P_init2 = rep(0, 12) loss_fun2 <- function(a){ P <- matrix(a, nrow=n, ncol=m) T = sweep(P, 2, S, "*") # multiply column j of P by S[j] F2 = rowSums(T) E = F - F2 return(sum(E*E)) } x = loss_fun2(rep(0, 12)) x # 225 z = constrOptim(P_init2, loss_fun2, NULL, ui = rep(-1, n * m), ci = -1)
Constrained optimization in R I'm not sure if I'm right but help constrOptim {stats} say theta: numeric (vector) starting value (of length p): must be in the feasible region, and your theta is a matrix. I think here is near to yo