idx | question | answer
---|---|---
55,501 | Modeling vacancy rate | It would seem to make sense to use a generalized linear mixed model with family=binomial and a logit or probit link. This would restrict your fitted values to the range (0,1). I don't know whether you can combine that with an autoregressive error structure in lme4, though.
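A minimal sketch of the kind of model suggested above, using lme4; the data frame and the column names (a vacancy count, the number of units, a time variable and a grouping factor) are assumptions, not taken from the question:
library(lme4)
# `vacant`, `units`, `year` and `region` are hypothetical column names
fit <- glmer(cbind(vacant, units - vacant) ~ year + (1 | region),
             data = dat, family = binomial(link = "logit"))
summary(fit)  # fitted probabilities stay within (0, 1); note there is no AR error term here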
55,502 | What is the prediction error while using deming regression (weighted total least squares) | Update: I've updated the answer to reflect the discussions in the comments.
The model is given as
\begin{align*}
y &= y^{*} + \varepsilon\\
x &= x^{*} + \eta\\
y^{*} &= \alpha + x^{*}\beta
\end{align*}
So when forecasting with a new value $x$ we can forecast either $y$ or $y^{*}$. Their forecasts coincide $\hat{y}=\hat{y}^{*}=\hat{\alpha}+\hat{\beta}x$ but their error variances will be different:
$$Var(\hat{y})=Var(\hat{y}^{*})+Var(\varepsilon)$$
To get $Var(\hat{y}^{*})$ write
\begin{align*}
\hat{y}^{*} - y^{*} &= \hat{\alpha} - \alpha + \hat{\beta}(x^{*} + \eta) - \beta x^{*}\\
&= (\hat{\alpha} - \alpha) + (\hat{\beta} - \beta)x^{*} + \hat{\beta}\eta
\end{align*}
So
\begin{align*}
Var(\hat{y}^{*}) = E(\hat{y}^{*} - y^{*})^2 &= D(\hat{\alpha} - \alpha) + D(\hat{\beta} - \beta)(x^{*})^2 + E\hat{\beta}^2 D\eta\\
&\quad + 2\textrm{cov}(\hat{\alpha} - \alpha, \hat{\beta} - \beta)x^{*}
\end{align*}
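As a purely numerical illustration of the formula above (every value below is hypothetical, not taken from the answer), one can plug in estimates of the coefficient variances, their covariance, and the error variances:
var_alpha <- 0.04     # D(alpha-hat - alpha)
var_beta  <- 0.01     # D(beta-hat - beta)
cov_ab    <- -0.005   # cov(alpha-hat - alpha, beta-hat - beta)
var_eta   <- 0.25     # D(eta), measurement error variance in x
var_eps   <- 0.30     # Var(epsilon), measurement error variance in y
beta_hat  <- 1.2
x_star    <- 10
E_beta_sq <- var_beta + beta_hat^2                    # E(beta-hat^2)
var_ystar_hat <- var_alpha + var_beta * x_star^2 +
  E_beta_sq * var_eta + 2 * cov_ab * x_star
var_y_hat <- var_ystar_hat + var_eps                  # Var(y-hat) = Var(y*-hat) + Var(epsilon)
var_y_hat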
55,503 | Interpreting correlation from two linear mixed-effect models | You're misinterpreting these results, which is easy to do, as with mixed models there's more than one type of 'fitted value' and the documentation of lmer isn't as clear as it might be. Try using fixed.effects() in place of fitted() and you should get correlations that make more intuitive sense if you're interested in the contribution of the fixed effects.
The fitted() function of lmer is documented as giving the 'conditional means'. I had to check the Theory.pdf vignette to work out that these include the predictions of the modelled random effects. Your modelled random effect variances are, overall, smaller in the model including the fixed effect. But smaller random effects mean less shrinkage, i.e. the predicted random effect is closer to the observed residual. When calculating the correlation, it seems that in your case this smaller shrinkage just overcomes the improvement from the fixed effect.
The interpretation of $R^2$ as 'proportion of variance explained' gets more complex with mixed models, as it depends on whether you think of random effects as 'explaining' variance. Probably not, in most cases.
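A small sketch of the comparison suggested in this answer (the model, data frame and variable names are assumed; in current lme4 the accessor is fixef() rather than fixed.effects()):
library(lme4)
fit <- lmer(y ~ x + (1 | subject), data = dat)   # hypothetical model and data
cor(dat$y, fitted(fit))                          # conditional means: includes predicted random effects
cor(dat$y, model.matrix(fit) %*% fixef(fit))     # fixed-effects part only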
55,504 | Two-sample test for multivariate normal distributions under the assumption that means are the same | Mauchly's test allows one to test whether a given covariance matrix is proportional to a reference (identity or other) and is available through mauchly.test() in R. It is mostly used in repeated-measures designs (to test (1) if the dependent variable VC matrices are equal or homogeneous, and (2) whether the correlations between the levels of the within-subjects variable are comparable--altogether, this is known as the sphericity assumption).
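A minimal sketch of calling mauchly.test() on a multivariate linear model fit; the response columns and data frame below are assumed, not from the question:
mfit <- lm(cbind(t1, t2, t3) ~ 1, data = dat)   # hypothetical repeated measures t1..t3
mauchly.test(mfit, X = ~ 1)                     # sphericity of the within-subject contrasts
mauchly.test(mfit)                              # proportionality to the default reference (identity)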
Box’s M statistic is used (in MANOVA or LDA) to test for homogeneity of covariance matrices, but as it is very sensitive to normality it will often reject the null (R code not available in standard packages).
Covariance structure models as found in Structural Equation Modeling are also an option for more complex stuff (although in multigroup analysis testing for the equality of covariances makes little sense if the variances are not equal), but I have no references to offer actually.
I guess any textbook on multivariate data analysis would have additional details on these procedures. I also found this article for the case where the normality assumption is not met:
Aslam, S. and Rocke, D.M. A robust testing procedure for the equality of covariance matrices, Computational Statistics & Data Analysis 49 (2005) 863-874.
55,505 | Using R2WinBUGS, how to extract information from each chain? | The object returned by read.bugs is an object of S3 class mcmc.list.
You can use the double brackets [[ to access the separate chains, i.e. the different mcmc-objects that make up the larger mcmc.list object, which really is simply a list of mcmc-objects that inherits some information about thinning and chain length from its components.
More to the point, something like lapply(codaobject, function(x){ colMeans(x) }) should return the posterior means for each parameter in each chain, and lapply(codaobject, function(x){ apply(x, 2, sd) }) should give chain- and parameter-specific posterior sd's, since each chain is essentially just a numeric matrix with rows corresponding to the (saved) iterations and columns corresponding to the different parameters.
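Putting this together as a short sketch; the CODA file names are hypothetical:
library(R2WinBUGS)
library(coda)
codaobject <- read.bugs(c("coda1.txt", "coda2.txt"))   # hypothetical CODA files, one per chain
chain1 <- codaobject[[1]]                              # first chain as an mcmc object
lapply(codaobject, colMeans)                           # posterior means, per chain
lapply(codaobject, function(x) apply(x, 2, sd))        # posterior sd's, per chain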
EDIT:
I think Gelman and Hill's "Data Analysis Using Regression and Multilevel/Hierarchical Models" contains some worked examples using R2WinBUGS.
55,506 | Using R2WinBUGS, how to extract information from each chain? | The contents of your chains are stored in three different formats. Take a look at
bugs.sim$sims.array    # array: iterations x chains x parameters (chains kept separate)
bugs.sim$sims.list     # list with one element per parameter, chains pooled
bugs.sim$sims.matrix   # matrix: draws from all chains stacked, one column per parameter
and read the Value section of ?bugs.
55,507 | Calculation of incidence rate for epidemiological study in hospital | It is commonly accepted that the denominator for IRs is the "population at risk" (i.e., all individuals in which the studied event(s) may occur). Although your first formula is generally used, I found in The New Public Health, by Tulchinsky and Varavikova (Elsevier, 2009, 2nd ed., p. 84) that a distinction is made between the ordinary incidence rate, where the average size of the population in the fixed period of time is used in the denominator, and the person-time incidence rate, with PT at risk in the denominator.
Obviously, when individuals not at risk of the disease are included in the denominator, the resultant measure of disease frequency will underestimate the true incidence of disease in the population under investigation, but see Numerators, denominators and populations at risk.
55,508 | Calculation of incidence rate for epidemiological study in hospital | Short answer: When in doubt, trust in Rothman.
Long answer: It depends. You only want to include time where you are actually at risk of the outcome in your calculation of the denominator. Time where you aren't at risk (known as immortal person-time) should never be used in the calculation of a rate.
In your case, if the outcome of interest is the onset of a disease, then yes, the moment they have the outcome, they have it. You stop counting their time in the denominator, and they move to the numerator. Doing otherwise underestimates the incidence rate. The answer to what the other formula is doing is well...doing it wrong. Or more likely, doing it with the best data that is available - such as when all a study has is a number of counts per time and a number of person-time for an interval. But it should always be noted that this is a subtle underestimation.
Frequently, one might actually assign times when you only know an interval to try to address this. To use a hospital example:
We have 100 patients, who stay for 1 month. During that time, we get 5 cases.
The underestimation is 5 cases/100 person-months (400 person-weeks). We can however say that, in our experience, most infections occur within the first, say, week of hospitalization. So we will assume that all cases occurred in that week. Now it's 5 cases/100 person-weeks. Or, if we have reason to believe they all got it in the 3rd week - say, we know the Jello in the cafeteria was contaminated with norovirus - we can assume that all cases occurred then, and now it's 5 cases/300 person-weeks. Most often, people just pick the middle of the interval for unknown circumstances. The technique you're seeing, where you use the most time possible in the denominator, gives the lowest estimate of the rate you can have.
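The arithmetic above as a tiny sketch; the numbers come from the example, and the three person-time figures correspond to the assumed risk windows (full stay, first week, first three weeks):
cases <- 5
person_weeks <- c(full_stay = 400, first_week = 100, three_weeks = 300)
cases / person_weeks   # incidence rate per person-week under each assumed risk window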
55,509 | Calculation of incidence rate for epidemiological study in hospital | I think Rothman's definition should be used. In the second definition all patients seem to be given the same duration (is this correct?), so the incidence will not be the incidence rate but will be proportional to the cumulative incidence (cases / total in the given time frame).
55,510 | How to test if change is significant across multiple categories? | There are subtle issues involving the difference between designed comparisons and post-hoc comparisons, of which this likely is an example.
If, before collecting the data, you anticipated this kind of pattern, you could employ a simple nonparametric test. The null hypothesis would be that all changes are due to chance, with the alternative being that a specified category was increasing and the other eight categories were decreasing. Under the null, positive changes have a 50% chance of occurring, implying the chance of observing the specified pattern is $(0.50)^8(1 - 0.50)^1 = 0.002$: highly significant evidence for the alternative.
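As a quick check of the figure quoted above:
(0.50)^8 * (1 - 0.50)^1   # about 0.00195, the 0.002 quoted above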
The analysis for a post-hoc observation is difficult because we can't even get started with describing the situation. Exactly what kind of pattern would you happen to notice and consider worthy of testing? So many are possible, with no accurate description available, that all we can say (from experience) is that (a) it is highly likely that any interested investigator would notice some pattern in the data and (b) a post-hoc hypothesis test could be constructed to "demonstrate" the "high significance" of that pattern, exactly as I did above. For these reasons, applying hypothesis tests after the fact to support claims of "statistical validity" for exploratory results is frowned upon. (Among statisticians, who should know better, it is called "data snooping" or worse.)
One way out is to conduct your analysis with about half the data, randomly selected. Look for any patterns you like. Construct an appropriate suite of hypothesis tests for those patterns and then apply them to the held-out data only. This is in the spirit of the scientific requirement for replication. If you don't do this, then you would be obliged to repeat your experiment to confirm whatever you're seeing in the data you currently have.
55,511 | How to test if change is significant across multiple categories? | Given the additional information you've subsequently posted, I'm not sure any statistical test is going to be that informative. If you had a strong prediction of a pattern such as this or similar, this is such a low-probability event that you're pretty much set just getting these data. With an N of 400 almost any test will most definitely be significant. Some good descriptive stats like confidence intervals would be very useful.
I would suggest caution in describing the downward trend as remotely meaningful. It's such a tiny amount that, yeah, if your N is big enough it will be significant. But is that tiny drop in percentage meaningful? I think the more meaningful statement is that it's not an increase like the others and that it is staying roughly flat. Don't try to change the story of really small effects with statistical tests.
55,512 | Expected distribution of random draws | The expected frequency of observing $k$ purple balls in $d$ draws (without replacement) from an urn of $p$ purple balls and $n-p$ other balls is obtained by counting and equals
$$\frac{{p \choose k} {n-p \choose d-k} }{{n \choose d}}.$$
Test a sample (of say $100$) such experiments with a chi-squared statistic using these probabilities as the reference.
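A sketch of such a check in R; the urn composition, the number of draws, and the observed counts are hypothetical:
n <- 20; p <- 2; d <- 6                  # hypothetical urn and number of draws
ref <- dhyper(0:p, p, n - p, d)          # reference probabilities for 0, 1, 2 purples
obs <- c(48, 44, 8)                      # hypothetical counts of purples over 100 experiments
chisq.test(obs, p = ref)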
In the second case, integrate over the prior distributions. There is no nice formula for that, but the integration (actually a sum for these discrete variables) can be carried out exactly if you wish. In the example given in the edited section -- independent uniform distributions of $n$ from $20$ to $30$ (thus having a one in 11 chance of being any value between $20$ and $30$ inclusive), of $p$ from $0$ to $4$, and of $d$ from $6$ to $12$ -- the result is a probability distribution on the possible numbers of purples ($0, 1, 2, 3, 4$) with values
$0: 69728476151/142333251060 = 0.489896$
$1: 8092734193/24540215700 = 0.329774$
$2: 36854/258825 = 0.14239$
$3: 169436/4917675 = 0.0344545$
$4: 17141/4917675 = 0.00348559$.
Use a chi-squared test for this situation, too. As usual when conducting chi-squared tests, you will want to lump the last two or three categories into one because their expectations are less than $5$ (for $100$ repetitions).
There is no problem with zero values.
Edit (in response to a followup question)
The integrations are performed as multiple sums. In this case, there is some prior distribution for $n$, a prior distribution for $p$, and a prior distribution for $d$. For each possible ordered triple of outcomes $(n,p,d)$ together they give a probability $\Pr(n,p,d)$. (With uniform distributions as above this probability is a constant equal to $1/((30-20+1)(4-0+1)(12-6+1))$.) One forms the sum over all possible values of $(n,p,d)$ (a triple sum in this case) of
$$\Pr(n,p,d) \frac{{p \choose k} {n-p \choose d-k} }{{n \choose d}}.$$
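A sketch of that triple sum in R; with uniform priors the sum reduces to an average over the grid of $(n, p, d)$ values, and it should reproduce (up to rounding) the probabilities listed above:
grid <- expand.grid(n = 20:30, p = 0:4, d = 6:12)
probs <- sapply(0:4, function(k) mean(dhyper(k, grid$p, grid$n - grid$p, grid$d)))
round(probs, 6)   # P(0 purples), ..., P(4 purples)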
55,513 | Expected distribution of random draws | First Part: The draws from the urn follow a hypergeometric distribution assuming random draws. Any deviation from the theoretical probabilities vis-a-vis the observed frequencies can be evaluated using chi-square tests.
Second Part:
Let:
$n \sim U(20,30)$ be the total number of balls in the urn
$p \sim U(0,4)$ be the number of purple balls.
$d \sim U(6,12)$ be the number of draws.
$n_{pi}$ be the number of purple balls drawn in the $i^{\text{th}}$ experiment.
Thus,
$$P(n_{pi}|n,p,d) = \frac{\binom{p}{n_{pi}} \binom{n-p}{d-n_{pi}}}{\binom{n}{d}}$$
You can integrate the above probabilities using the priors for $n$, $p$ and $d$, which will give you the expected frequencies of observing purple balls provided you draw them at random. You can then compare the expected frequencies with the observed frequencies to assess whether the process is truly random.
55,514 | Expected distribution of random draws | So here is my motivation for the questions, although I know this is not necessary I like it when people follow up on their questions so I will do the same. I would like to thank both Srikant and whuber for their helpful answers. (I ask no-one upvote this as it is not an answer to the question and both whuber's and Srikant's deserve to be above this, and you should upvote their excellent answers.)
The other day for a class field trip I sat in on the proceedings of an appeals court. Several of the criminal appeals brought before that day concerned issues surrounding Batson challenges. A Batson challenge concerns the use of racial discrimination when an attorney uses what are called peremptory challenges during the voir dire process of jury selection. (I'm in the US so this is entirely in the context of USA criminal law).
Two separate questions arose in the deliberations that were statistical in nature.
The first question was the chance that two out of two Asian jurors (the purple balls) seated currently in the venire panel (the urn consisting of the total number of balls) would be selected by chance (the total number of peremptory challenges used equals the number of balls drawn). The attorney in this case stated the probability that both Asian jurors would be selected was $1/28$. I don't have the materials the attorneys presented to the court of appeals, so I do not know how the attorney calculated this probability. But this is essentially my question #1, so given the formula for the Hypergeometric distribution I calculated the expected probability given,
$n = 20$ The number of jurors seated in the venire panel
$p = 2$ The number of Asian jurors seated in the venire panel, both of whom were picked
$d = 6$ The number of peremptory challenges of use to an attorney
which then leads to a probability of
$$\frac{\binom{p}{p} \binom{n-p}{d-p}}{\binom{n}{d}}=\frac{\binom{2}{2} \binom{20-2}{6-2}}{\binom{20}{6}}=\frac{3}{38}.$$
*Note: these are my best guesses of the values based on what I know of the case; if I had the court record I could know for sure. Values calculated using Wolfram Alpha.
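The same figure can also be checked with the hypergeometric density in R, using the guessed values above:
dhyper(2, 2, 18, 6)   # probability both of the 2 Asian jurors fall among 6 random strikes
3/38                  # the fraction given above, approximately 0.0789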
Although the court is unlikely to establish a bright line rule which states the probability threshold that establishes a prima facie case of racial discrimination, at least one judge thought the use of such statistics is applicable to establishing the validity of a Batson challenge.
In this case there wasn't much doubt in the eyes of the court that Asian jurors had a stereotype among attorneys that they were pro-prosecution. But in a subsequent case a defense attorney used a Batson challenge to claim a prosecutor was being racially biased by eliminating 4 out of 6 Black women. The judges in this appeal were somewhat skeptical that Black women were a group that had a cognizable stereotype attached to them, but this again is a question amenable to statistical knowledge. Hence my question #2, given 100 observations could I determine if black women were eliminated using peremptory challenges in a non-random manner. In reality the attorneys are not eliminating prospective jurors based on only the race and sex of the juror, but that would not preclude someone from determining if the pattern of peremptory challenges at least appears or does not appear random (although non-randomness does not necessarily indicate racial discrimination).
Again I'd like to say thank you both to whuber and Srikant for their answers.
55,515 | When is it acceptable to collapse across groups when performing a factor analysis? | There seem to be two cases to consider, depending on whether your scale was already validated using standard psychometric methods (from classical test or item response theory). In what follows, I will consider the first case, where I assume preliminary studies have demonstrated construct validity and score reliability for your scale.
In this case, there is no formal need to apply exploratory factor analysis, unless you want to examine the pattern matrix within each group (but I generally do it, just to ensure that there are no items that unexpectedly show low factor loadings or cross-load onto different factors); in order to be able to pool all your data, you need to use a multi-group factor analysis (hence, a confirmatory approach as you suggest), which basically amounts to adding extra parameters to test a group effect on factor loadings (1st-order model) or factor correlations (2nd-order model, if this makes sense), which would bear on measurement invariance across subgroups of respondents. This can be done using Mplus (see the discussion about CFA there) or Mx (e.g. Conor et al., 2009); I am not sure about Amos, as it seems to be restricted to simple factor structures. The Mx software has been redesigned to work within the R environment, OpenMx. The wiki is quite responsive, so you can ask questions if you encounter difficulties with it. There is also a more recent package, lavaan, which appears to be promising for SEMs.
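A minimal lavaan sketch of such a multi-group (measurement invariance) comparison; the item names, factor structure, grouping variable and data frame below are assumptions, not taken from the question:
library(lavaan)
model <- 'F1 =~ i1 + i2 + i3
          F2 =~ i4 + i5 + i6'
fit.config <- cfa(model, data = dat, group = "sample")                 # configural model
fit.metric <- cfa(model, data = dat, group = "sample",
                  group.equal = "loadings")                            # equal loadings
fit.scalar <- cfa(model, data = dat, group = "sample",
                  group.equal = c("loadings", "intercepts"))           # plus equal intercepts
anova(fit.config, fit.metric, fit.scalar)   # chi-square difference tests of invariance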
Alternative models coming from IRT may also be considered, including a Latent Regression Rasch Model (for each scale separately; see De Boeck and Wilson, 2004), or a Multivariate Mixture Rasch Model (von Davier and Carstensen, 2007). You can take a look at Volume 20 of the Journal of Statistical Software, entirely devoted to psychometrics in R, for further information about IRT modeling with R.
You may be able to reach similar tests using Structural Equation Modeling, though.
If factor structure proves to be equivalent across the two groups, then you can aggregate the scores (on your four summated scales) and report your statistics as usual.
However, it is always a challenging task to use CFA, since not rejecting H0 by no means allows you to conclude that your postulated theoretical model is correct in the real world, only that there is no reason to reject it on statistical grounds; on the other hand, rejecting the null would lead you to accept the alternative, which is generally left unspecified, unless you apply sequential testing of nested models. Anyway, this is the way we go in cross-cultural settings, especially when we want to assess whether a given questionnaire (e.g., on patient-reported outcomes) measures what it purports to do whatever the population it is administered to.
Now, regarding the apparent differences between the two groups -- one is drawn from a population of students, the other is a clinical sample, assessed at a later date -- it depends very much on your own considerations: Does mixing these two samples make sense given the literature surrounding the questionnaire used (esp., it should have shown temporal stability and applicability in a wide population)? Do you plan to generalize your findings to a larger population (obviously, you gain power by increasing sample size)? At first sight, I would say that you need to ensure that both groups are comparable with respect to the characteristics thought to influence one's score on this questionnaire (e.g., gender, age, SES, biomedical history, etc.), and this can be done using classical statistics for two-group comparisons (on raw scores). It is worth noting that in clinical studies we often face the reverse situation: We usually want to show that scores differ between different clinical subgroups (or between treated and naive patients), which is often referred to as known-groups validity.
References:
De Boeck, P. and Wilson, M. (2004). Explanatory Item Response Models. A Generalized Linear and Nonlinear Approach. Springer.
von Davier, M. and Carstensen, C.H. (2007). Multivariate and Mixture Distribution Rasch Models. Springer.
55,516 | When is it acceptable to collapse across groups when performing a factor analysis? | The approach you mention seems reasonable, but you'd have to take into account that you cannot see the total dataset as a single population. So theoretically, you should use any kind of method that can take differences between those groups into account, similar to using "group" as a random term in an ANOVA or GLM approach.
An alternative for empirical evaluation would be to check formally whether an effect of group can be found on the answers. To do that, you could create a binary dataset with the following columns:
yes/no - item - participant - group
With this you can use item as a random term, with participant nested in group, and test the fixed effect of group using, e.g., a mixed model with a logit link. You can just ignore participant if you lose too many df.
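A sketch of that model with lme4; the column names below are assumed, not taken from the question:
library(lme4)
fit <- glmer(response ~ group + (1 | item) + (1 | group:participant),
             data = long_dat, family = binomial(link = "logit"))
summary(fit)   # inspect the fixed effect of group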
This is an approximation of the truth, but if the effect of group is significant, I wouldn't collapse the dataset.
55,517 | When is it acceptable to collapse across groups when performing a factor analysis? | It might be a little fly by night, but your theory may suggest whether the two groups have the same factor structure or not. If your theory suggests they do, and there is no reason to doubt the theory, I'd suggest you could go right ahead and trust that they have the same factor structure.
Your empirical assessment would probably be a good route to go, just to spot-check the theoretical assessment as to whether they are likely to share the same structure. However, I don't intuitively see why mean differences between items would imply they have a different underlying factor structure. It seems to me that might just suggest that one group has higher or lower scores on a given factor.
55,518 | Two answers to the dartboard problem | Intuitively, imagine modeling the second formulation as follows: randomly select an angle to the $x$-axis, calling it $\theta$, then model the location of the dart as falling uniformly in a very thin rectangle along the line $y = (\tan\theta) x$. Approximately, the dart is in the inner circle with probability $1/3$. However, when you consider the collection of all such thin rectangles (draw them, say), you will see that they have more overlapping area near the center of the dartboard, and less overlap towards the perimeter of the dartboard. This will be more obvious as you draw the rectangles larger and larger (though the approximation will be worse). As you make the rectangles thinner, the approximation gets better, but the same principle applies: you are putting more area around the center of the circle, which increases the probability of hitting the inner circle.
Intuitively, imagine modeling the second formulation as follows: randomly select an angle to the $x$-axis, calling it $\theta$, then model the location of the dart as falling uniformly in a very thin rectangle along the line $y = (\tan\theta) x$. Approximately, the dart is in the inner circle with probability $1/3$. However, when you consider the collection of all such thin rectangles (draw them, say), you will see that they have more overlapping area near the center of the dartboard, and less overlap towards the perimeter of the dartboard. This will be more obvious as you draw the rectangles larger and larger (though the approximation will be worse). As you make the rectangles thinner, the approximation gets better, but the same principle applies: you are putting more area around the center of the circle, which increases the probability of hitting the inner circle. | Two answers to the dartboard problem
Intuitively, imagine modeling the second formulation as follows: randomly select an angle to the $x$-axis, calling it $\theta$, then model the location of the dart as falling uniformly in a very thin |
55,519 | Two answers to the dartboard problem | It seems to me that the fundamental issue is that the two scenarios assume different data generating processes for the position of a dart, which results in different probabilities.
The first situation's data generating process looks like so: (a) Pick a $x \in U[-1,1]$ and (b) Pick a $y$ uniformly subject to the constraint that $x^2+y^2 \le 1$. Then the required probability is $P(x^2 + y^2 \le \frac{1}{9})= \frac{1}{9}$.
The second situation's data generating process is as described in the question: (a) Pick an angle $\theta \in [0,2\pi]$ and (b) Pick a point on the diameter that is at an angle $\theta$ to the x-axis. Under this data generating process the required probability is $\frac{1}{3}$ as mentioned in the question.
As articulated by mbq, the issue is that the phrase 'randomly lands on the dartboard' is not precise enough as it leaves the meaning of 'random' ambiguous. This is similar to asking what is the probability of coin landing heads on a random toss. The answer can be 0.5 if we assume that the coin is a fair coin but it can be anything else (say, 0.8) if the coin is biased towards heads. | Two answers to the dartboard problem | It seems to me that the fundamental issue is that the two scenarios assume different data generating process for the position of a dart which results in different probabilities.
The first situation's | Two answers to the dartboard problem
It seems to me that the fundamental issue is that the two scenarios assume different data generating process for the position of a dart which results in different probabilities.
The first situation's data generating process looks like so: (a) Pick a $x \in U[-1,1]$ and (b) Pick a $y$ uniformly subject to the constraint that $x^2+y^2 \le 1$. Then the required probability is $P(x^2 + y^2 \le \frac{1}{9})= \frac{1}{9}$.
The second situation's data generating process is as described in the question: (a) Pick an angle $\theta \in [0,2\pi]$ and (b) Pick a point on the diameter that is at an angle $\theta$ to the x-axis. Under this data generating process the required probability is $\frac{1}{3}$ as mentioned in the question.
As articulated by mbq, the issue is that the phrase 'randomly lands on the dartboard' is not precise enough as it leaves the meaning of 'random' ambiguous. This is similar to asking what is the probability of coin landing heads on a random toss. The answer can be 0.5 if we assume that the coin is a fair coin but it can be anything else (say, 0.8) if the coin is biased towards heads. | Two answers to the dartboard problem
It seems to me that the fundamental issue is that the two scenarios assume different data generating process for the position of a dart which results in different probabilities.
The first situation's |
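A quick Monte Carlo check of the two answers (this sketch is mine, not part of the answer above; it treats the first scenario as a dart uniform over the board's area and the second as a uniformly chosen point along a random diameter):
# scenario 1: uniform over the area of the unit disk
set.seed(1)
n <- 1e6
x <- runif(n, -1, 1); y <- runif(n, -1, 1)
inside <- x^2 + y^2 <= 1                          # keep only points on the board
mean(x[inside]^2 + y[inside]^2 <= (1/3)^2)        # close to 1/9
# scenario 2: pick a diameter, then a point uniformly along it
r <- runif(n, -1, 1)                              # signed position along the diameter
mean(abs(r) <= 1/3)                               # close to 1/3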
55,520 | Two answers to the dartboard problem | Think of the board as a filter -- it just converts the positions on board into an id of a field that dart hit. So that the output will be only a deterministically converted input -- and thus it is obvious that different realization of throwing darts will result in distribution of results.
The paradox itself is purely linguistic -- "random throwing" seems ok, while the true is that it misses crucial information about how the throwing is realized. | Two answers to the dartboard problem | Think of the board as a filter -- it just converts the positions on board into an id of a field that dart hit. So that the output will be only a deterministically converted input -- and thus it is obv | Two answers to the dartboard problem
Think of the board as a filter -- it just converts the positions on board into an id of a field that dart hit. So that the output will be only a deterministically converted input -- and thus it is obvious that different realization of throwing darts will result in distribution of results.
The paradox itself is purely linguistic -- "random throwing" seems ok, while the true is that it misses crucial information about how the throwing is realized. | Two answers to the dartboard problem
Think of the board as a filter -- it just converts the positions on board into an id of a field that dart hit. So that the output will be only a deterministically converted input -- and thus it is obv |
55,521 | Method to compare variable coefficient in two regression models | Here is my suggestion. Rerun your model(s) using one single regression. And, the Summer/Winter variable would be simply a single dummy variable (1,0). This way you would have a coefficient for Summer to differentiate it from Winter. And, the regression coefficients for your three other variables would be consistent with one single weight rank. | Method to compare variable coefficient in two regression models | Here is my suggestion. Rerun your model(s) using one single regression. And, the Summer/Winter variable would be simply a single dummy variable (1,0). This way you would have a coefficient for Summ | Method to compare variable coefficient in two regression models
Here is my suggestion. Rerun your model(s) using one single regression. And, the Summer/Winter variable would be simply a single dummy variable (1,0). This way you would have a coefficient for Summer to differentiate it from Winter. And, the regression coefficients for your three other variables would be consistent with one single weight rank. | Method to compare variable coefficient in two regression models
Here is my suggestion. Rerun your model(s) using one single regression. And, the Summer/Winter variable would be simply a single dummy variable (1,0). This way you would have a coefficient for Summ |
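A minimal sketch of this suggestion in R (the data frame dat and the variable names y, x1, x2, x3, season are hypothetical, not from the question):
# one regression with a single summer/winter dummy and common slopes
dat$summer <- as.numeric(dat$season == "summer")  # 1 = summer, 0 = winter
fit <- lm(y ~ x1 + x2 + x3 + summer, data = dat)
summary(fit)  # the x1, x2, x3 coefficients give one single weight ranking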
55,522 | Method to compare variable coefficient in two regression models | One answer is to do a seemingly unrelated regression. Suppose that you only have a single predictor plus an intercept. Create a data set (or data matrix) like
wo 1 wp 0 0
so 0 0 1 sp
where 'wo' is the outcome in the winter season and 'wp' is the winter predictor/"X" value and 'so' is the summer outcome value and 'sp' is the summer predictor. The 1s represent the summer and winter intercept terms. Basically, you have two sets of variables: summer variables and winter variables. In the summer, all the winter variables are set to 0, and vice-versa in the winter.
After you run a regression on the full set of summer and winter variables using the data template above, you get a full covariance matrix that can be used to compare the coefficients for the winter months to the summer months using standard regression procedures.
This is one way of implementing @kwak's suggestion of having different slope coefficients for each season. | Method to compare variable coefficient in two regression models | One answer is to do a seemingly unrelated regression. Suppose that you only have a single predictor plus an intercept. Create a data set (or data matrix) like
wo 1 wp 0 0
so 0 0 1 sp
where 'wo' is t | Method to compare variable coefficient in two regression models
One answer is to do a seemingly unrelated regression. Suppose that you only have a single predictor plus an intercept. Create a data set (or data matrix) like
wo 1 wp 0 0
so 0 0 1 sp
where 'wo' is the outcome in the winter season and 'wp' is the winter predictor/"X" value and 'so' is the summer outcome value and 'sp' is the summer predictor. The 1s represent the summer and winter intercept terms. Basically, you have two sets of variables: summer variables and winter variables. In the summer, all the winter variables are set to 0, and vice-versa in the winter.
After you run a regression on the full set of summer and winter variables using the data template above, you get a full covariance matrix that can be used to compare the coefficients for the winter months to the summer months using standard regression procedures.
This is one way of implementing @kwak's suggestion of having different slope coefficients for each season. | Method to compare variable coefficient in two regression models
One answer is to do a seemingly unrelated regression. Suppose that you only have a single predictor plus an intercept. Create a data set (or data matrix) like
wo 1 wp 0 0
so 0 0 1 sp
where 'wo' is t |
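A sketch of how this could look in R, assuming wo, wp and so, sp are the winter and summer outcome/predictor vectors described above (everything else is my own naming):
dat <- data.frame(
  y  = c(wo, so),
  wi = rep(c(1, 0), c(length(wo), length(so))),   # winter intercept
  wx = c(wp, rep(0, length(so))),                 # winter slope
  si = rep(c(0, 1), c(length(wo), length(so))),   # summer intercept
  sx = c(rep(0, length(wo)), sp)                  # summer slope
)
fit <- lm(y ~ 0 + wi + wx + si + sx, data = dat)
V <- vcov(fit)                                    # full covariance matrix of the estimates
d  <- coef(fit)["wx"] - coef(fit)["sx"]           # winter slope minus summer slope
se <- sqrt(V["wx", "wx"] + V["sx", "sx"] - 2 * V["wx", "sx"])
d / se                                            # z-type statistic for equal slopes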
55,523 | Good references on communicating the results of a statistical analysis to laypeople or non-expert stakeholders? | I picked up lots of useful tips from The Art of Statistics by David Spiegelhalter. I think he does an exceptional job of communicating some very abstract concepts without flattening the nuance in the process | Good references on communicating the results of a statistical analysis to laypeople or non-expert st | I picked up lots of useful tips from The Art of Statistics by David Spiegelhalter. I think he does an exceptional job of communicating some very abstract concepts without flattening the nuance in the | Good references on communicating the results of a statistical analysis to laypeople or non-expert stakeholders?
I picked up lots of useful tips from The Art of Statistics by David Spiegelhalter. I think he does an exceptional job of communicating some very abstract concepts without flattening the nuance in the process | Good references on communicating the results of a statistical analysis to laypeople or non-expert st
I picked up lots of useful tips from The Art of Statistics by David Spiegelhalter. I think he does an exceptional job of communicating some very abstract concepts without flattening the nuance in the |
55,524 | GLM dropping an interaction | You actually need to remove the main effect of ano0. So your model formula should be
casos ~ 0 + municipio + municipio:ano0 +
offset(log(populacao))
I know it looks like you're omitting a term, but you will estimate exactly the same number of parameters and the model fit will be identical. This gives you an intercept and slope of ano0 for each level of municipio. | GLM dropping an interaction | You actually need to remove the main effect of ano0. So your model formula should be
casos ~ 0 + municipio + municipio:ano0 +
offset(log(populacao))
I know it looks like you're omitting a term, | GLM dropping an interaction
You actually need to remove the main effect of ano0. So your model formula should be
casos ~ 0 + municipio + municipio:ano0 +
offset(log(populacao))
I know it looks like you're omitting a term, but you will estimate exactly the same number of parameters and the model fit will be identical. This gives you an intercept and slope of ano0 for each level of municipio. | GLM dropping an interaction
You actually need to remove the main effect of ano0. So your model formula should be
casos ~ 0 + municipio + municipio:ano0 +
offset(log(populacao))
I know it looks like you're omitting a term, |
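A small simulated check that the two parameterisations give the same fit (the data below are made up, and a Poisson family is used purely for illustration):
set.seed(1)
toy <- data.frame(
  municipio = factor(rep(c("A", "B", "C"), each = 10)),
  ano0      = rep(0:9, times = 3),
  populacao = 1000
)
toy$casos <- rpois(nrow(toy), lambda = exp(-3 + 0.05 * toy$ano0) * toy$populacao)
m1 <- glm(casos ~ 0 + municipio * ano0 + offset(log(populacao)),
          family = poisson, data = toy)
m2 <- glm(casos ~ 0 + municipio + municipio:ano0 + offset(log(populacao)),
          family = poisson, data = toy)
all.equal(deviance(m1), deviance(m2))  # TRUE: identical fit, different parameterisation
coef(m2)                               # an intercept and an ano0 slope for each municipio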
55,525 | Hypoexponential distribution is stuck because of $\lambda_i \neq \lambda_j$? | The definition of the hypoexponential distribution (HD) requires that:
$$f(x)=\sum_i^d \left(\prod_{j=1,i\neq j}^{d}\frac{\lambda_j}{\lambda_j-\lambda_i}\right)\lambda_i e^{(-\lambda_ix)},\quad x>0
$$
That expression is only a special case, valid when the rates are pairwise distinct ($\lambda_i \neq \lambda_j$ for $i \neq j$); it is not a requirement for the hypoexponential distribution in general.
The more general hypo-exponential distribution can be expressed as a phase-type distribution
$$f(x) = -\boldsymbol{\alpha}e^{x \boldsymbol{\Theta}} \boldsymbol{\Theta} \mathbf{1} $$
With $$\boldsymbol{\alpha} = (1,0,0,\dots,0,0)$$ and
$$\boldsymbol{\Theta} = \begin{bmatrix}
-\lambda_1 & \lambda_1 & 0 & \dots & 0 & 0 \\
0 & -\lambda_2 & \lambda_2 & \dots & 0 & 0 \\
\vdots & \ddots & \ddots & \ddots & \ddots & \vdots\\
0 & 0 & \ddots & -\lambda_{d-2} & \lambda_{d-2} & 0\\
0 & 0 & \dots & 0 & -\lambda_{d-1} & \lambda_{d-1}\\
0 & 0 & \dots & 0 & 0 & -\lambda_{d}\\
\end{bmatrix}$$
That involves the exponentiation of a matrix. And that can be approximated with a sum using a Taylor series.
You can see the matrix as modelling a sort of Markov chain process with non-discrete time steps. A sum of exponentially distributed variables is like waiting for several consecutive transitions, each of whose waiting times is exponentially distributed. Those transitions relate to the Markov chain.
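For completeness, here is a small sketch of the phase-type density above in R, using the expm package for the matrix exponential (the rates are made up, and two of them are deliberately equal, which is exactly the case the simple product formula cannot handle):
library(expm)                        # provides expm() for the matrix exponential
lam <- c(2, 2, 5)                    # hypothetical rates, with a repeated value
d <- length(lam)
Theta <- diag(-lam)
Theta[cbind(1:(d - 1), 2:d)] <- lam[-d]   # super-diagonal as in the matrix above
alpha <- c(1, rep(0, d - 1))
f <- function(x) as.numeric(-alpha %*% expm(x * Theta) %*% Theta %*% rep(1, d))
f(1.2)                               # density of Exp(2) + Exp(2) + Exp(5) at x = 1.2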
Not only when $\lambda_i \neq \lambda_j$ but also when $\lambda_i = \lambda_j$ then the formula can give problems as demonstrated in this question: Why my cdf of the convolution of n exponential distribution is not in the range(0,1)? | Hypoexponential distribution is stuck because of $\lambda_i \neq \lambda_j$? | The definition of the hypoexponential distribution (HD) requires that:
$$f(x)=\sum_i^d \left(\prod_{j=1,i\neq j}^{d}\frac{\lambda_j}{\lambda_j-\lambda_i}\right)\lambda_i e^{(-\lambda_ix)},\quad x>0
$$ | Hypoexponential distribution is stuck because of $\lambda_i \neq \lambda_j$?
The definition of the hypoexponential distribution (HD) requires that:
$$f(x)=\sum_i^d \left(\prod_{j=1,i\neq j}^{d}\frac{\lambda_j}{\lambda_j-\lambda_i}\right)\lambda_i e^{(-\lambda_ix)},\quad x>0
$$
That expression is only a special case, valid when the rates are pairwise distinct ($\lambda_i \neq \lambda_j$ for $i \neq j$); it is not a requirement for the hypoexponential distribution in general.
The more general hypo-exponential distribution can be expressed as a phase-type distribution
$$f(x) = -\boldsymbol{\alpha}e^{x \boldsymbol{\Theta}} \boldsymbol{\Theta} \mathbf{1} $$
With $$\boldsymbol{\alpha} = (1,0,0,\dots,0,0)$$ and
$$\boldsymbol{\Theta} = \begin{bmatrix}
-\lambda_1 & \lambda_1 & 0 & \dots & 0 & 0 \\
0 & -\lambda_2 & \lambda_2 & \dots & 0 & 0 \\
\vdots & \ddots & \ddots & \ddots & \ddots & \vdots\\
0 & 0 & \ddots & -\lambda_{d-2} & \lambda_{d-2} & 0\\
0 & 0 & \dots & 0 & -\lambda_{d-1} & \lambda_{d-1}\\
0 & 0 & \dots & 0 & 0 & -\lambda_{d}\\
\end{bmatrix}$$
That involves the exponentiation of a matrix. And that can be approximated with a sum using a Taylor series.
You can see the matrix as modelling a sort of Markov chain process with non-discrete time steps. A sum of exponentially distributed variables is like waiting for several consecutive transitions, each of whose waiting times is exponentially distributed. Those transitions relate to the Markov chain.
Not only when $\lambda_i \neq \lambda_j$ but also when $\lambda_i = \lambda_j$ then the formula can give problems as demonstrated in this question: Why my cdf of the convolution of n exponential distribution is not in the range(0,1)? | Hypoexponential distribution is stuck because of $\lambda_i \neq \lambda_j$?
The definition of the hypoexponential distribution (HD) requires that:
$$f(x)=\sum_i^d \left(\prod_{j=1,i\neq j}^{d}\frac{\lambda_j}{\lambda_j-\lambda_i}\right)\lambda_i e^{(-\lambda_ix)},\quad x>0
$$ |
55,526 | Hypoexponential distribution is stuck because of $\lambda_i \neq \lambda_j$? | The distribution in question is a mixture of Exponential distributions $\mathcal E(\lambda_i)$ with specified weight, i.e. its density is
$$f(x) = \sum_{i=1}^d p_i\,\lambda_i\exp\{-\lambda_i x\}\quad x\ge 0\tag{1}$$
with
$$p_i=\prod_{j=1\\ j\ne i}^d\dfrac{\lambda_j}{\lambda_i-\lambda_j}\quad i=1,\ldots,d$$
There is therefore a single realisation produced by this density (rather than a sequence of $x_i$'s).
Note that, while
$$\sum_{i=1}^d p_i= \sum_{i=1}^d \prod_{j=1\\ j\ne i}^d\dfrac{\lambda_j}{\lambda_i-\lambda_j}=1$$
(by the Cauchy determinant formula), the weights can be negative, which makes (1) a signed mixture.
Now, if $\lambda_1\approx\lambda_2$ (wlog), starting with the case $d=2$ leads to the Gamma $\mathcal G(2,\lambda_1)$ distribution:
$$\lim_{\epsilon\to 0}\lambda_1(\lambda_1+\epsilon)\dfrac{e^{-\lambda_1x}-e^{-[\lambda_1+\epsilon]x}}{\epsilon}=\lambda_1^2xe^{-\lambda_1x}$$
$-$by L'Hospital's rule$-$as expected since this is the distribution of the sum of two iid Exponential $\mathcal E(\lambda_1)$ random variates.
In the general case $d>2$, the undefined term in the density
$$\lim_{\epsilon\to 0}\dfrac{e^{-\lambda_1x}}{\epsilon}\underbrace{\prod_{j=3}^d\dfrac{1}{\lambda_j-\lambda_1}}_\rho
-\dfrac{e^{-[\lambda_1+\epsilon]x}}{\epsilon}\prod_{j=3}^d\dfrac{1}{\lambda_j-\lambda_1-\epsilon}$$
is equal to
$$\lim_{\epsilon\to 0}\frac{\rho e^{-\lambda_1x}}{\epsilon\prod_{j=3}^d[\lambda_j-\lambda_1-\epsilon]}
\left\{\prod_{j=3}^d[\lambda_j-\lambda_1-\epsilon]-e^{-\epsilon x}\rho^{-1}\right\}
=\rho e^{-\lambda_1x}\left\{ x - \sum_{j=3}^d (\lambda_j-\lambda_1)^{-1}\right\}$$
by L'Hospital's rule. | Hypoexponential distribution is stuck because of $\lambda_i \neq \lambda_j$? | The distribution in question is a mixture of Exponential distributions $\mathcal E(\lambda_i)$ with specified weight, i.e. its density is
$$f(x) = \sum_{i=1}^d p_i\,\lambda_i\exp\{-\lambda_i x\}\quad | Hypoexponential distribution is stuck because of $\lambda_i \neq \lambda_j$?
The distribution in question is a mixture of Exponential distributions $\mathcal E(\lambda_i)$ with specified weight, i.e. its density is
$$f(x) = \sum_{i=1}^d p_i\,\lambda_i\exp\{-\lambda_i x\}\quad x\ge 0\tag{1}$$
with
$$p_i=\prod_{j=1\\ j\ne i}^d\dfrac{\lambda_j}{\lambda_i-\lambda_j}\quad i=1,\ldots,d$$
There is therefore a single realisation produced by this density (rather than a sequence of $x_i$'s).
Note that, while
$$\sum_{i=1}^d p_i= \sum_{i=1}^d \prod_{j=1\\ j\ne i}^d\dfrac{\lambda_j}{\lambda_i-\lambda_j}=1$$
(by the Cauchy determinant formula), the weights can be negative, which makes (1) a signed mixture.
Now, if $\lambda_1\approx\lambda_2$ (wlog), starting with the case $d=2$ leads to the Gamma $\mathcal G(2,\lambda_1)$ distribution:
$$\lim_{\epsilon\to 0}\lambda_1(\lambda_1+\epsilon)\dfrac{e^{-\lambda_1x}-e^{-[\lambda_1+\epsilon]x}}{\epsilon}=\lambda_1^2xe^{-\lambda_1x}$$
$-$by L'Hospital's rule$-$as expected since this is the distribution of the sum of two iid Exponential $\mathcal E(\lambda_1)$ random variates.
In the general case $d>2$, the undefined term in the density
$$\lim_{\epsilon\to 0}\dfrac{e^{-\lambda_1x}}{\epsilon}\underbrace{\prod_{j=3}^d\dfrac{1}{\lambda_j-\lambda_1}}_\rho
-\dfrac{e^{-[\lambda_1+\epsilon]x}}{\epsilon}\prod_{j=3}^d\dfrac{1}{\lambda_j-\lambda_1-\epsilon}$$
is equal to
$$\lim_{\epsilon\to 0}\frac{\rho e^{-\lambda_1x}}{\epsilon\prod_{j=3}^d[\lambda_j-\lambda_1-\epsilon]}
\left\{\prod_{j=3}^d[\lambda_j-\lambda_1-\epsilon]-e^{-\epsilon x}\rho^{-1}\right\}
=\rho e^{-\lambda_1x}\left\{ x - \sum_{j=3}^d (\lambda_j-\lambda_1)^{-1}\right\}$$
by L'Hospital's rule. | Hypoexponential distribution is stuck because of $\lambda_i \neq \lambda_j$?
The distribution in question is a mixture of Exponential distributions $\mathcal E(\lambda_i)$ with specified weight, i.e. its density is
$$f(x) = \sum_{i=1}^d p_i\,\lambda_i\exp\{-\lambda_i x\}\quad |
55,527 | Quantifying the confidence that the most sampled outcome is the most probable outcome | Using Bayesian methods, you could start with a conjugate Dirichlet prior for the probabilities of the six sides, update it with your observations, and then find the probability from the Dirichlet posterior that side five has the highest underlying probability of the six sides.
This will be affected slightly by the prior you choose and substantially by the actual observations. It may produce slightly counter-intuitive results for small numbers of observations. To take a simpler example with a biased coin,
if you start with a uniform prior for the probability of it being heads then toss it once and see heads, the posterior probability of it being biased towards heads would be $0.75$ and towards tails $0.25$;
if instead you tossed it $200$ times and see heads $101$ times then the posterior probability of it being biased towards heads would be about $0.556$;
if you tossed it $200$ times and see heads $115$ times then the posterior probability of it being biased towards heads would be about $0.983$.
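For the coin case those three figures can be reproduced directly from the Beta posterior (a quick check; the pbeta calls below are mine):
pbeta(0.5, 1 + 1,   1 + 0,  lower.tail = FALSE)  # 1 head in 1 toss   -> 0.75
pbeta(0.5, 1 + 101, 1 + 99, lower.tail = FALSE)  # 101 heads in 200   -> about 0.556
pbeta(0.5, 1 + 115, 1 + 85, lower.tail = FALSE)  # 115 heads in 200   -> about 0.983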
I do not see a simple way of doing the integration with six-sided dice to find the probability a given face is most probable, but simulation will get close enough. The following uses R and a so-called uniform Dirichlet prior for the biases, supposing you observed $21$ dice throws of $2$ ones, $3$ twos, $4$ threes, $5$ fours, $6$ fives and $1$ six:
library(gtools)
probmostlikely <- function(obs, prior=rep(1, length(obs)),
cases=10^6) {
posterior <- prior + obs
sims <- rdirichlet(cases, posterior)
table(apply(sims, 1, function(x) which(x == max(x))[1])) / cases
}
set.seed(2023)
probmostlikely(c(2, 3, 4, 5, 6, 1))
# 1 2 3 4 5 6
# 0.027885 0.072102 0.152415 0.279509 0.460498 0.007591
so suggesting that the die is biased most towards five with posterior probability about $0.46$ (and most towards six with posterior probability just under $0.008$).
Seeing that pattern of observations ten times as often would increase the posterior probability that the die is biased most towards five to just under $0.82$ (and reduce those for one and six to something so small that they never appeared as most likely in a million simulations).
probmostlikely(c(20, 30, 40, 50, 60, 10))
# 2 3 4 5
# 0.000222 0.013702 0.167825 0.818251 | Quantifying the confidence that the most sampled outcome is the most probable outcome | Using Bayesian methods, you could start with a conjugate Dirichlet prior for the probabilities of the six sides, update it with your observations, and then find the probability from the Dirichlet pos | Quantifying the confidence that the most sampled outcome is the most probable outcome
Using Bayesian methods, you could start with a conjugate Dirichlet prior for the probabilities of the six sides, update it with your observations, and then find the probability from the Dirichlet posterior that side five has the highest underlying probability of the six sides.
This will be affected slightly by the prior you choose and substantially by the actual observations. It may produce slightly counter-intuitive results for small numbers of observations. To take a simpler example with a biased coin,
if you start with a uniform prior for the probability of it being heads then toss it once and see heads, the posterior probability of it being biased towards heads would be $0.75$ and towards tails $0.25$;
if instead you tossed it $200$ times and see heads $101$ times then the posterior probability of it being biased towards heads would be about $0.556$;
if you tossed it $200$ times and see heads $115$ times then the posterior probability of it being biased towards heads would be about $0.983$.
I do not see a simple way of doing the integration with six-sided dice to find the probability a given face is most probable, but simulation will get close enough. The following uses R and a so-called uniform Dirichlet prior for the biases, supposing you observed $21$ dice throws of $2$ ones, $3$ twos, $4$ threes, $5$ fours, $6$ fives and $1$ six:
library(gtools)
probmostlikely <- function(obs, prior=rep(1, length(obs)),
cases=10^6) {
posterior <- prior + obs
sims <- rdirichlet(cases, posterior)
table(apply(sims, 1, function(x) which(x == max(x))[1])) / cases
}
set.seed(2023)
probmostlikely(c(2, 3, 4, 5, 6, 1))
# 1 2 3 4 5 6
# 0.027885 0.072102 0.152415 0.279509 0.460498 0.007591
so suggesting that the die is biased most towards five with posterior probability about $0.46$ (and most towards six with posterior probability just under $0.008$).
Seeing that pattern of observations ten times as often would increase the posterior probability that the die is biased most towards five to just under $0.82$ (and reduce those for one and six to something so small that they never appeared as most likely in a million simulations).
probmostlikely(c(20, 30, 40, 50, 60, 10))
# 2 3 4 5
# 0.000222 0.013702 0.167825 0.818251 | Quantifying the confidence that the most sampled outcome is the most probable outcome
Using Bayesian methods, you could start with a conjugate Dirichlet prior for the probabilities of the six sides, update it with your observations, and then find the probability from the Dirichlet pos |
55,528 | Quantifying the confidence that the most sampled outcome is the most probable outcome | We can use profile likelihood methods to construct a confidence interval for the maximum probability $\theta = \max_{j=1}^k p_j$. Here $p_1, p_2, \dotsc, p_k$ represent the discrete distribution of dice rolls, where in your example $p_5$ is somewhat larger than the others. The results of $n$ dice rolls are given by the random variable $X=(X_1, \dotsc, X_k)$ where in the dice example $k=6$. The likelihood function is then
$$ L(p) = p_1^{X_1} p_2^{X_2} \dotsm p_k^{X_k} $$
and the loglikelihood is
$$ \ell(p) =\sum_1^k x_j \log(p_j) $$
The profile likelihood function for $\theta$ as defined above is
$$ \ell_P(\theta) = \max_{p \colon \max p_j = \theta} \ell(p) $$
With some simulated data we get the following profile log-likelihood function
where the horizontal lines can be used to read off confidence intervals with confidence levels 0.95, 0.99 respectively. I will add the R code used at the end of the post. A paper using bootstrapping for estimating $\theta$ is Simultaneous confidence intervals for multinomial proportions
But this is only a partial solution, in a comment you say
@Dave No, I want to find the most probable outcome of the die.
I read that as finding the maximum probability (done above), but also which of the sides of the dice correspond to the max probability. The Bayesian approach in the answer by user Henry is a direct answer to that. It is not so clear how to approach that in a frequentist way; maybe bootstrapping could be tried? One old approach is subset selection, choosing a subset of the sides of the die which contains the side with max probability with a certain confidence level. Papers discussing such methods are A subset selection procedure for multinomial distributions and SELECTING A SUBSET CONTAINING ALL THE MULTINOMIAL CELLS BETTER THAN A STANDARD WITH INVERSE SAMPLING.
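As a rough sketch of the bootstrap idea just mentioned (my own illustration, not taken from the cited papers): once the observed counts x from the code below are available, one can resample the multinomial table and record which side comes out on top.
B <- 1e4
boot_top <- replicate(B, which.max(rmultinom(1, size = sum(x), prob = x / sum(x))))
table(boot_top) / B   # bootstrap frequency with which each side has the largest count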
R code for the plot above:
library(alabama)
make_proflik_max <- function(x) {
stopifnot(all(x >= 0))
k <- length(x)
Vectorize(function(theta) {
par <- rep(1/k, k) # initial values
fn <- function(p) -sum(log(p) * x)
gr <- function(p) -x/p
hin <- function(p) { # each component must be positive
c(p)
}
heq <- function(p) { # must be zero
c(sum(p)-1 , max(p) - theta)
}
res <- alabama::auglag(par, fn, hin=hin, heq=heq)
-res$value
} )
}
set.seed(7*11*13) # My public seed
x <- sample(1:6, 200, replace=TRUE, prob=c(9,9,9,9,10,9))
# 5 is a little more probable
x <- table(x)
proflik_max <- make_proflik_max(x)
plot(proflik_max, from=1/6 + 0.001, to=0.35, xlab=expression(theta))
loglik <- function(p) sum(x * log(p))
maxloglik <- loglik(x/200)
mle_pmax <- max(x/200)
abline(h=maxloglik - qchisq(0.95,1)/2, col="red")
abline(h=maxloglik - qchisq(0.99,1)/2, col="blue")
abline(v=mle_pmax) | Quantifying the confidence that the most sampled outcome is the most probable outcome | We can use profile likelihood methods to construct a confidence interval for the maximum probability $\theta = \max_{j=1}^k p_k$. Here $p_1, p_2, \dotsc, p_k$ represent the discrete distribution of d | Quantifying the confidence that the most sampled outcome is the most probable outcome
We can use profile likelihood methods to construct a confidence interval for the maximum probability $\theta = \max_{j=1}^k p_j$. Here $p_1, p_2, \dotsc, p_k$ represent the discrete distribution of dice rolls, where in your example $p_5$ is somewhat larger than the others. The results of $n$ dice rolls are given by the random variable $X=(X_1, \dotsc, X_k)$ where in the dice example $k=6$. The likelihood function is then
$$ L(p) = p_1^{X_1} p_2^{X_2} \dotsm p_k^{X_k} $$
and the loglikelihood is
$$ \ell(p) =\sum_1^k x_j \log(p_j) $$
The profile likelihood function for $\theta$ as defined above is
$$ \ell_P(\theta) = \max_{p \colon \max p_j = \theta} \ell(p) $$
With some simulated data we get the following profile log-likelihood function
where the horizontal lines can be used to read off confidence intervals with confidence levels 0.95, 0.99 respectively. I will add the R code used at the end of the post. A paper using bootstrapping for estimating $\theta$ is Simultaneous confidence intervals for multinomial proportions
But this is only a partial solution, in a comment you say
@Dave No, I want to find the most probable outcome of the die.
I read that as finding the maximum probability (done above), but also which of the sides of the dice correspond to the max probability. The Bayesian approach in the answer by user Henry is a direct answer to that. It is not so clear how to approach that in a frequentist way; maybe bootstrapping could be tried? One old approach is subset selection, choosing a subset of the sides of the die which contains the side with max probability with a certain confidence level. Papers discussing such methods are A subset selection procedure for multinomial distributions and SELECTING A SUBSET CONTAINING ALL THE MULTINOMIAL CELLS BETTER THAN A STANDARD WITH INVERSE SAMPLING.
R code for the plot above:
library(alabama)
make_proflik_max <- function(x) {
stopifnot(all(x >= 0))
k <- length(x)
Vectorize(function(theta) {
par <- rep(1/k, k) # initial values
fn <- function(p) -sum(log(p) * x)
gr <- function(p) -x/p
hin <- function(p) { # each component must be positive
c(p)
}
heq <- function(p) { # must be zero
c(sum(p)-1 , max(p) - theta)
}
res <- alabama::auglag(par, fn, hin=hin, heq=heq)
-res$value
} )
}
set.seed(7*11*13) # My public seed
x <- sample(1:6, 200, replace=TRUE, prob=c(9,9,9,9,10,9))
# 5 is a little more probable
x <- table(x)
proflik_max <- make_proflik_max(x)
plot(proflik_max, from=1/6 + 0.001, to=0.35, xlab=expression(theta))
loglik <- function(p) sum(x * log(p))
maxloglik <- loglik(x/200)
mle_pmax <- max(x/200)
abline(h=maxloglik - qchisq(0.95,1)/2, col="red")
abline(h=maxloglik - qchisq(0.99,1)/2, col="blue")
abline(v=mle_pmax) | Quantifying the confidence that the most sampled outcome is the most probable outcome
We can use profile likelihood methods to construct a confidence interval for the maximum probability $\theta = \max_{j=1}^k p_k$. Here $p_1, p_2, \dotsc, p_k$ represent the discrete distribution of d |
55,529 | VECM: alpha is a 0-vector? cointegration rank = $k$ even though $X_t$ is I(1)? | A brief answer:
Your logic is correct. In theory, this should not happen. In practice, this may be caused by estimation imprecision and/or low power of tests.
In theory, the lag does not matter. As long as the lag is finite, the appropriate linear combination of the cointegrated series is a finite sum of I(0) elements, and that is still I(0). (See this answer of mine for details.) In practice, see point 1.
In theory, the $\alpha$s should not be jointly zero, though one of them may be zero. In practice, see point 1.
Yes, it is standard practice to normalize the first element of $\beta$ to 1. You could always multiply $\alpha$ by a constant and divide $\beta$ by the same constant to obtain the same behavior of the cointegrated system, so normalizing $\beta$ this way is just a matter of convention. | VECM: alpha is a 0-vector? cointegration rank = $k$ even though $X_t$ is I(1)? | A brief answer:
Your logic is correct. In theory, this should not happen. In practice, this may be caused by estimation imprecision and/or low power of tests.
In theory, the lag does not matter. As lo | VECM: alpha is a 0-vector? cointegration rank = $k$ even though $X_t$ is I(1)?
A brief answer:
Your logic is correct. In theory, this should not happen. In practice, this may be caused by estimation imprecision and/or low power of tests.
In theory, the lag does not matter. As long as the lag is finite, the appropriate linear combination of the cointegrated series is a finite sum of I(0) elements, and that is still I(0). (See this answer of mine for details.) In practice, see point 1.
In theory, the $\alpha$s should not be jointly zero, though one of them may be zero. In practice, see point 1.
Yes, it is standard practice to normalize the first element of $\beta$ to 1. You could always multiply $\alpha$ by a constant and divide $\beta$ by the same constant to obtain the same behavior of the cointegrated system, so normalizing $\beta$ this way is just a matter of convention. | VECM: alpha is a 0-vector? cointegration rank = $k$ even though $X_t$ is I(1)?
A brief answer:
Your logic is correct. In theory, this should not happen. In practice, this may be caused by estimation imprecision and/or low power of tests.
In theory, the lag does not matter. As lo |
55,530 | Random number generator for non-central chi-squared with non-integer dimension | I am assuming you know how to generate random draws from a central chi-squared distribution, or from its equivalent gamma version; see below for the details. I also suggest possible readings on algorithms for the generation of random variates from a gamma distribution, in case this premise does not hold; see also the references provided in the comments of your post.
Solution 1.
As per comments, the function rchisq of R does implement what you are looking for. From the help page of this function:
Usage
rchisq(n, df, ncp = 0)
Arguments
n: number of observations. If length(n) > 1, the length is taken to be
the number required.
df: degrees of freedom (non-negative, but can be non-integer).
ncp: non-centrality parameter (non-negative).
Here is a simple R code for generating $10^4$ samples from the chi-square with $\nu = 2.56$ degrees of freedom and non-centrality parameter ncp=6.52.
hist(rchisq(10^4, df=2.56, ncp=6.52))
Solution 2. Expanding over whuber's comment, you can generate numbers from this distribution by noting the fact that its density is an infinite mixture of central chi-squared distributions with Poisson weights. That is, a non-central chi-squared random variable with non-centrality parameter $\lambda$, i.e. $\chi_\nu^2(\lambda)$ has density function
$$
f(x;\nu, \lambda) = \sum_{r=0}^{\infty} \frac{e^{-\lambda/2} (\lambda/2)^r}{r!}f_{\chi_{\nu+2r}^2}(x) = E\left(f_{\chi_{\nu+2R}^2}\right),
$$
where $R\sim \text{Poisson}(\lambda/2)$ and $\chi_\nu^2$ denotes the central chi-squared distribution.
To generate a random variate $x$ from the $\chi_\nu^2(\lambda)$ distribution then do:
draw $r$ from $\text{Poisson}(\lambda/2)$
draw $x$ from $\chi^2_{\nu +2r}$
Here is a simple R code for all this.
# implement a simple function
my_rchisq <- function(df, ncp) {
if(ncp==0)
stop("Please use rchisq.")
n = rpois(1, lambda = ncp/2)
out = rchisq(1, df= df + 2*n, ncp = 0)
return(out)
}
# set the parameters (the same values as in Solution 1) and the number of draws
dof <- 2.56; ncp <- 6.52; N <- 1e4
# call the function N times
gen2 <- sapply(1:N, function(x) my_rchisq(dof,ncp))
# draw a histogram and compare it with true pdf
hist(gen2,breaks = 30, probability = TRUE)
plot(function(x) dchisq(x, df=dof, ncp=ncp),
n=200,
add=TRUE,
lwd=2, xlim = c(0,50))
Comments
The code presumes you know how to generate random variates from $\chi_{\nu}^2$, for any real $\nu>0$ (again, see rchisq in R). However, since this distribution is a special case of a gamma distribution, i.e. if $Y \sim \chi_{\nu}^2$ then $Y$ has a gamma distribution with shape parameter $\nu/2$ and scale parameter equal to 2, it boils down to generating random draws from the gamma distribution (rgamma in R).
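A quick check of that equivalence (the value of $\nu$ is just the one used earlier):
nu <- 2.56
p  <- c(0.1, 0.5, 0.9)
qchisq(p, df = nu)
qgamma(p, shape = nu / 2, scale = 2)   # identical quantiles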
Since you are interested in $0<\text{ncp}<1$, this means that you have to draw from the gamma distribution with a shape parameter between zero and one. It turns out that the generation of random draws, in this case, may be troublesome due to close-to-zero values. The documentation of rgamma reads
Note that for smallish values of shape (and moderate scale) a large
parts of the mass of the Gamma distribution is on values of xx so near
zero that they will be represented as zero in computer arithmetic. So
rgamma may well return values which will be represented as zero. (This
will also happen for very large values of scale since the actual
generation is done for scale = 1.)
Now, if you also wish to know how to generate random draws from a gamma distribution, I suggest looking at the references on the help page of rgamma, which are specific to this issue. In particular, for the problem of shape between 0 and 1 check
Ahrens, J. H. and Dieter, U. (1974). Computer methods for sampling
from gamma, beta, Poisson and binomial distributions. Computing, 12, 223–246. | Random number generator for non-central chi-squared with non-integer dimension | I am assuming you know how to generate random draws from a central chi-squared distribution, or from its equivalent gamma version; see below for the details. I also suggest possible readings on algori | Random number generator for non-central chi-squared with non-integer dimension
I am assuming you know how to generate random draws from a central chi-squared distribution, or from its equivalent gamma version; see below for the details. I also suggest possible readings on algorithms for the generation of random variates from a gamma distribution, in case this premise does not hold; see also the references provided in the comments of your post.
Solution 1.
As per comments, the function rchisq of R does implement what you are looking for. From the help page of this function:
Usage
rchisq(n, df, ncp = 0)
Arguments
n: number of observations. If length(n) > 1, the length is taken to be
the number required.
df: degrees of freedom (non-negative, but can be non-integer).
ncp: non-centrality parameter (non-negative).
Here is a simple R code for generating $10^4$ samples from the chi-square with $\nu = 2.56$ degrees of freedom and non-centrality parameter ncp=6.52.
hist(rchisq(10^4, df=2.56, ncp=6.52))
Solution 2. Expanding over whuber's comment, you can generate numbers from this distribution by noting the fact that its density is an infinite mixture of central chi-squared distributions with Poisson weights. That is, a non-central chi-squared random variable with non-centrality parameter $\lambda$, i.e. $\chi_\nu^2(\lambda)$ has density function
$$
f(x;\nu, \lambda) = \sum_{r=0}^{\infty} \frac{e^{-\lambda/2} (\lambda/2)^r}{r!}f_{\chi_{\nu+2r}^2}(x) = E\left(f_{\chi_{\nu+2R}^2}\right),
$$
where $R\sim \text{Poisson}(\lambda/2)$ and $\chi_\nu^2$ denotes the central chi-squared distribution.
To generate a random variate $x$ from the $\chi_\nu^2(\lambda)$ distribution then do:
draw $r$ from $\text{Poisson}(\lambda/2)$
draw $x$ from $\chi^2_{\nu +2r}$
Here is a simple R code for all this.
# implement a simple function
my_rchisq <- function(df, ncp) {
if(ncp==0)
stop("Please use rchisq.")
n = rpois(1, lambda = ncp/2)
out = rchisq(1, df= df + 2*n, ncp = 0)
return(out)
}
# set the parameters (the same values as in Solution 1) and the number of draws
dof <- 2.56; ncp <- 6.52; N <- 1e4
# call the function N times
gen2 <- sapply(1:N, function(x) my_rchisq(dof,ncp))
# draw a histogram and compare it with true pdf
hist(gen2,breaks = 30, probability = TRUE)
plot(function(x) dchisq(x, df=dof, ncp=ncp),
n=200,
add=TRUE,
lwd=2, xlim = c(0,50))
Comments
The code presumes you know how to generate random variates from $\chi_{\nu}^2$, for any real $\nu>0$ (again, see rchisq in R). However, since this distribution is a special case of a gamma distribution, i.e. if $Y \sim \chi_{\nu}^2$ then $Y$ has a gamma distribution with shape parameter $\nu/2$ and scale parameter equal to 2, it boils down to generating random draws from the gamma distribution (rgamma in R).
Since you are interested in $0<\text{ncp}<1$, this means that you have to draw from the gamma distribution with a shape parameter between zero and one. It turns out that the generation of random draws, in this case, may be troublesome due to close-to-zero values. The documentation of rgamma reads
Note that for smallish values of shape (and moderate scale) a large
parts of the mass of the Gamma distribution is on values of xx so near
zero that they will be represented as zero in computer arithmetic. So
rgamma may well return values which will be represented as zero. (This
will also happen for very large values of scale since the actual
generation is done for scale = 1.)
Now, if you also wish to know how to generate random draws from a gamma distribution, I suggest looking at the references on the help page of rgamma, which are specific to this issue. In particular, for the problem of shape between 0 and 1 check
Ahrens, J. H. and Dieter, U. (1974). Computer methods for sampling
from gamma, beta, Poisson and binomial distributions. Computing, 12, 223–246. | Random number generator for non-central chi-squared with non-integer dimension
I am assuming you know how to generate random draws from a central chi-squared distribution, or from its equivalent gamma version; see below for the details. I also suggest possible readings on algori |
55,531 | How to Find PDF of Transformed Random Variables Numerically? | As far as I can tell, R doesn't have such symbolic computation capabilities; though R does have some limited symbolic computation capabilities (e.g. the command D computes symbolically the derivative of a user-provided function).
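For instance, D handles simple expressions (a tiny illustration only; it does not solve the transformation problem itself):
D(expression(x^2 * exp(-x)), "x")
# 2 * x * exp(-x) - x^2 * exp(-x)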
Nevertheless, when the analytical computation is hard/long, or you are sceptical about your computations, you can get an approximation via the Monte Carlo (MC) approach. The idea is pretty simple. Instead of computing the analytical PDF of $Z$, draw random numbers from its distribution. You can then use these draws to approximate whatever quantity of $Z$ you are interested in.
Example.
Suppose $X_1,X_2\sim \text{Exp}(1)$, independently. Then $Z = X_1+X_2 \sim \text{Gamma}(2, 1)$. The MC approach would be to draw $N$ values (i.e. $N\times 1$ vector) from the distribution of $X_1$ and other $N$ values (i.e. $N\times 1$ vector) from the distribution of $X_2$. The element-wise sum of these vectors is another $N\times 1$ vector drawn from $\text{Gamma}(2,1)$.
Here is the R code for this.
# number of samples, the larger the better
N <- 1e4
# draw from X1
x1 <- rexp(N)
# draw from X2 (using the same N!)
x2 <- rexp(N)
# apply your function; here is the sum
z = x1+x2
# Distribution of Z computed(approximated) via
# Monte Carlo
hist(z, breaks = 30, probability = T, ylim=c(0,0.4))
# The TRUE distribution of Z
plot(function(x) dgamma(x, shape = 2, rate = 1),
n=100, xlim=c(0,12), lwd=2, add=TRUE)
Side note 1. For the MC method to be applicable, you must be able to take random draws from $X_1,X_2$. Sometimes, this is trivial (as in the example above) but sometimes it may be less trivial. This happens when $X_1$ and $X_2$ do not have known distributions (e.g. normal, gamma, Cauchy, etc.) or there are no standard methods for generating random numbers.
Side note 2. The MC method delivers a stochastic approximation. This means that, if you run the approximation again, you will likely get slightly different answers. Analytical computation doesn't have this issue, thus my advice is to use MC as last resort or as a method for double-checking your analytical calculations. | How to Find PDF of Transformed Random Variables Numerically? | As far as I can tell, R doesn't have such symbolic computation capabilities; though R does have some limited symbolic computation capabilities (e.g. the command D computes symbolically the derivative | How to Find PDF of Transformed Random Variables Numerically?
As far as I can tell, R doesn't have such symbolic computation capabilities; though R does have some limited symbolic computation capabilities (e.g. the command D computes symbolically the derivative of a user-provided function).
Nevertheless, when the analytical computation is hard/long, or you are sceptical about your computations, you can get an approximation via the Monte Carlo (MC) approach. The idea is pretty simple. Instead of computing the analytical PDF of $Z$, draw random numbers from its distribution. You can then use these draws to approximate whatever quantity of $Z$ you are interested in.
Example.
Suppose $X_1,X_2\sim \text{Exp}(1)$, independently. Then $Z = X_1+X_2 \sim \text{Gamma}(2, 1)$. The MC approach would be to draw $N$ values (i.e. $N\times 1$ vector) from the distribution of $X_1$ and other $N$ values (i.e. $N\times 1$ vector) from the distribution of $X_2$. The element-wise sum of these vectors is another $N\times 1$ vector drawn from $\text{Gamma}(2,1)$.
Here is the R code for this.
# number of samples, the larger the better
N <- 1e4
# draw from X1
x1 <- rexp(N)
# draw from X2 (using the same N!)
x2 <- rexp(N)
# apply your function; here is the sum
z = x1+x2
# Distribution of Z computed(approximated) via
# Monte Carlo
hist(z, breaks = 30, probability = T, ylim=c(0,0.4))
# The TRUE distribution of Z
plot(function(x) dgamma(x, shape = 2, rate = 1),
n=100, xlim=c(0,12), lwd=2, add=TRUE)
Side note 1. For the MC method to be applicable, you must be able to take random draws from $X_1,X_2$. Sometimes, this is trivial (as in the example above) but sometimes it may be less trivial. This happens when $X_1$ and $X_2$ do not have known distributions (e.g. normal, gamma, Cauchy, etc.) or there are no standard methods for generating random numbers.
Side note 2. The MC method delivers a stochastic approximation. This means that, if you run the approximation again, you will likely get slightly different answers. Analytical computation doesn't have this issue, thus my advice is to use MC as last resort or as a method for double-checking your analytical calculations. | How to Find PDF of Transformed Random Variables Numerically?
As far as I can tell, R doesn't have such symbolic computation capabilities; though R does have some limited symbolic computation capabilities (e.g. the command D computes symbolically the derivative |
55,532 | Boosting definition clarification | Both definitions are reasonable working definitions and equally valid as they capture the iterative nature of boosting algorithms. I view the first one as more general because the iterative nature of boosting was not strictly speaking a requirement early on. Originally boosting was defined with respect to PAC learning in Schapire (1990) "The strength of weak learnability". To quote directly from the 2012 monograph "Boosting: Foundations and Algorithms" by Schapire and Freund: "Boosting has its roots in a theoretical framework for studying machine learning called the PAC model, proposed by Valiant, (...)" and "the key idea behind boosting is to choose training sets for the base learner in such a fashion as to force it to infer something new about the data each time it is called."
Not necessarily. For example, AdaBoost learns predictors for the sample points weighted by the residuals, not for the residuals (as would be the case for gradient boosting for example). That said, both gradient boosting and AdaBoost utilise weak learners to iteratively update their overall predictions "turning multiple weak learners into one strong learner". CV.SE has a very good thread on: In boosting, why are the learners "weak"? that discusses this matter in more detail.
There is no good concise formal definition. Since Breiman's 1998 "Arcing classifier (with discussion and a rejoinder by the author)" we have moved away from the ensemble methods view to the functional gradient descent view; Friedman's 2001 "Greedy function approximation: A gradient boosting machine" being pretty much the theoretical blue-print for all boosting algorithms after it. This numerical optimization in function space necessitates looking at boosting as an iterative procedure; the "incremental functions" learned are the "boosts" and we thus have a "stage-wise" update where each stage uses a suitable approximation to our loss function $L$ to search for its next "boost".
Something like: "Boosting refers to the greedy stage-wise updates done to a prediction function $F$ using weak learners $h$ in order to minimise a specified loss function $L$." is probably the most informal and succinct one can think of... (i.e. trying to put Eqs (9 & 10) from Friedman's 2001 into words.) | Boosting definition clarification | Both definitions are reasonable working definitions and equally valid as they capture the iterative nature of boosting algorithms. I view the first one as more general because the iterative nature of | Boosting definition clarification
Both definitions are reasonable working definitions and equally valid as they capture the iterative nature of boosting algorithms. I view the first one as more general because the iterative nature of boosting was not strictly speaking a requirement early on. Originally boosting was defined with respect to PAC learning in Schapire (1990) "The strength of weak learnability". To quote directly from the 2012 monograph "Boosting: Foundations and Algorithms" by Schapire and Freund: "Boosting has its roots in a theoretical framework for studying machine learning called the PAC model, proposed by Valiant, (...)" and "the key idea behind boosting is to choose training sets for the base learner in such a fashion as to force it to infer something new about the data each time it is called."
Not necessarily. For example, AdaBoost learns predictors for the sample points weighted by the residuals, not for the residuals (as would be the case for gradient boosting for example). That said, both gradient boosting and AdaBoost utilise weak learners to iteratively update their overall predictions "turning multiple weak learners into one strong learner". CV.SE has a very good thread on: In boosting, why are the learners "weak"? that discusses this matter in more detail.
There is no good concise formal definition. Since Breiman's 1998 "Arcing classifier (with discussion and a rejoinder by the author)" we have moved away from the ensemble methods view to the functional gradient descent view; Friedman's 2001 "Greedy function approximation: A gradient boosting machine" being pretty much the theoretical blue-print for all boosting algorithms after it. This numerical optimization in function space necessitates looking at boosting as an iterative procedure; the "incremental functions" learned are the "boosts" and we thus have a "stage-wise" update where each stage uses a suitable approximation to our loss function $L$ to search for its next "boost".
Something like: "Boosting refers to the greedy stage-wise updates done to a prediction function $F$ using weak learners $h$ in order to minimise a specified loss function $L$." is probably the most informal and succinct one can think of... (i.e. trying to put Eqs (9 & 10) from Friedman's 2001 into words.) | Boosting definition clarification
Both definitions are reasonable working definitions and equally valid as they capture the iterative nature of boosting algorithms. I view the first one as more general because the iterative nature of |
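To make the functional gradient descent view concrete, here is a bare-bones L2-boosting sketch in R with stumps as weak learners (the data are simulated and the learning rate and number of rounds are arbitrary choices of mine):
library(rpart)
set.seed(1)
n <- 200
x <- runif(n)
y <- sin(2 * pi * x) + rnorm(n, sd = 0.3)
dat <- data.frame(x = x)
F_hat <- rep(mean(y), n)                  # F_0: constant initial prediction
nu <- 0.1                                 # learning rate
for (m in 1:200) {
  dat$r <- y - F_hat                      # negative gradient of the squared loss
  h <- rpart(r ~ x, data = dat,
             control = rpart.control(maxdepth = 1, cp = 0))   # weak learner (stump)
  F_hat <- F_hat + nu * predict(h, newdata = dat)             # stage-wise update
}
mean((y - F_hat)^2)                       # training error after boosting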
55,533 | What is the meaning of squared root n when we talk about asymptotic properties? | It "comes from" the central limit theorem, see What intuitive explanation is there for the central limit theorem?. It turns out that for many, although clearly not all estimators, scaling the estimation error $\hat\beta-\beta$ by $\sqrt{n}$ yields a nondegenerate asymptotic normal distribution.
See root-n consistent estimator, but root-n doesn't converge? for further discussion.
See Estimation of unit-root AR(1) model with OLS for an example of an estimator that does not converge at the $\sqrt{n}$ rate. | What is the meaning of squared root n when we talk about asymptotic properties? | It "comes from" the central limit theorem, see What intuitive explanation is there for the central limit theorem?. It turns out that for many, although clearly not all estimators, scaling the estimati | What is the meaning of squared root n when we talk about asymptotic properties?
It "comes from" the central limit theorem, see What intuitive explanation is there for the central limit theorem?. It turns out that for many, although clearly not all estimators, scaling the estimation error $\hat\beta-\beta$ by $\sqrt{n}$ yields a nondegenerate asymptotic normal distribution.
See root-n consistent estimator, but root-n doesn't converge? for further discussion.
See Estimation of unit-root AR(1) model with OLS for an example of an estimator that does not converge at the $\sqrt{n}$ rate. | What is the meaning of squared root n when we talk about asymptotic properties?
It "comes from" the central limit theorem, see What intuitive explanation is there for the central limit theorem?. It turns out that for many, although clearly not all estimators, scaling the estimati |
55,534 | bootstrapping a linear mixed model with R's lmeresampler or lme4 or a robust regression? | You can certainly use bootstrapping with the (new to me) lmeresampler package but, as you point out, there are only 21 students/participants. You should expect lots of uncertainty, esp. about the interaction.
It's often helpful to start by visualizing the data.
First let's look at the main effects. The plot below tells me — without any models — that:
There seems to be a YEAR effect: most segments have positive slope. This is of course not surprising: we expect students to know more with every year they spend at school.
There might be a GROUP effect. Hard to tell from the plot actually.
The variance is higher during the first than the second year. This is not the question you are asking but it's an interesting observation. There can be different definitions of "quality of education/teaching" and one might be related to "consistency of student attainment".
And here is a plot of the interaction between YEAR and GROUP. An interaction means that the blue and red lines cross. As you can see, there is lots of variability and the lines are (close to) parallel for quite a few of the students.
This plot should prepare you for the outcome that no matter how you do the bootstrapping and really, no matter what model you choose to fit, the interaction term is (likely) not going to be significant. This is not surprising: Interactions require bigger samples to estimate precisely. See You need 16 times the sample size to estimate an interaction than to estimate a main effect.
I specify B = 5,000 bootstrap resamples. This dataset is small and my laptop can handle it (in about 1 minute). This is probably too many replicates. With large B we can only decrease one source of error: the sampling error due to the randomness in the bootstrapping. We can't reduce the error due to having data on only 21 students.
B <- 5000
tic()
mod1_boot <- bootstrap(mod1, .f = fixef, type = "residual", B = B)
toc()
#> 51.31 sec elapsed
I choose the "normal" confidence interval because the "basic" and "percentile" confidence intervals have worse statistical properties. See Is it true that the percentile bootstrap should never be used?
confint(mod1_boot, type = "norm")
#> # A tibble: 4 × 6
#> term estimate lower upper type level
#> <chr> <dbl> <dbl> <dbl> <chr> <dbl>
#> 1 (Intercept) 17.6 16.9 18.3 norm 0.95
#> 2 YEAR2 1.14 0.181 2.11 norm 0.95
#> 3 GROUPB 0.915 -0.00817 1.86 norm 0.95
#> 4 YEAR2:GROUPB -0.602 -1.94 0.718 norm 0.95
We get a confidence interval with the bootstrap procedure, not a p-value, but obviously, the confidence interval either contains 0 or it doesn't.
It might be instructive to try out different types of bootstrapping methods (I tried the "parametric" and "residual" bootstraps.) but not particularly interesting: the confidence interval for the interaction term is wide and as expected we can't draw much of a conclusion about the strength of the interaction based on this small dataset.
Update: The biggest difference is between type="case" and the other types of bootstrap. With "case", the procedure samples students but not the four observations by the same student. This actually might be the most appropriate procedure given the structure of your data.
tic()
mod1_boot <- bootstrap(mod1, .f = fixef, type = "case", B = B,
resample = c(TRUE, FALSE))
toc()
#> 86.396 sec elapsed
confint(mod1_boot, type = "norm")
#> # A tibble: 4 × 6
#> term estimate lower upper type level
#> <chr> <dbl> <dbl> <dbl> <chr> <dbl>
#> 1 (Intercept) 17.6 16.7 18.5 norm 0.95
#> 2 YEAR2 1.14 0.184 2.11 norm 0.95
#> 3 GROUPB 0.915 0.133 1.69 norm 0.95
#> 4 YEAR2:GROUPB -0.602 -1.78 0.598 norm 0.95
R code to reproduce the figures and the analysis.
library("tictoc")
library("broom.mixed")
library("lme4")
library("lmeresampler")
library("tidyverse")
data <- data %>%
rename(
SCORE = CONT_Y,
YEAR = CATEGORIES,
GROUP = MY_GROUP,
PARTICIPANT = PARTICIPANTS
) %>%
mutate(
GROUP = recode(GROUP,
"G1" = "GroupA",
"G2" = "GroupB"
),
YEAR = recode(YEAR,
"A" = "Year1",
"B" = "Year2"
)
)
data %>%
ggplot(
aes(
YEAR, SCORE,
group = PARTICIPANT,
color = GROUP
)
) +
geom_line() +
facet_grid(
~GROUP
) +
theme(
axis.title.x = element_blank(),
legend.position = "none"
)
data %>%
ggplot(
aes(
YEAR, SCORE,
group = GROUP,
color = GROUP
)
) +
geom_line() +
facet_wrap(
~PARTICIPANT,
ncol = 7
) +
theme(
axis.title.x = element_blank(),
legend.position = "none"
)
mod1 <- lmer(
SCORE ~ YEAR * GROUP + (1 | PARTICIPANT),
data = data,
REML = FALSE
)
B <- 5000
tic()
mod1_boot <- bootstrap(mod1, .f = fixef, type = "residual", B = B)
toc()
confint(mod1_boot, type = "norm")
tic()
mod1_boot <- bootstrap(mod1,
.f = fixef, type = "case", B = B,
resample = c(TRUE, FALSE)
)
toc()
confint(mod1_boot, type = "norm") | bootstrapping a linear mixed model with R's lmeresampler or lme4 or a robust regression? | You can certainly use bootstrapping with the (new to me) lmeresampler package but, as you point out, there are only 21 students/participants. You should expect lots of uncertainty, esp. about the inte | bootstrapping a linear mixed model with R's lmeresampler or lme4 or a robust regression?
You can certainly use bootstrapping with the (new to me) lmeresampler package but, as you point out, there are only 21 students/participants. You should expect lots of uncertainty, esp. about the interaction.
It's often helpful to start by visualizing the data.
First let's look at the main effects. The plot below tells me — without any models — that:
There seems to be an YEAR effect: most segments have positive slope. This is of course not surprising: we expect students know more with every year they spend at school.
The might be a GROUP effect. Hard to tell from the plot actually.
The variance is higher during the first than the second year. This is not the question you are asking but it's an interesting observation. There can be different definitions of "quality of education/teaching" and one might be related to "consistency of student attainment".
And here is a plot of the interaction between YEAR and GROUP. An interaction means that the blue and red line cross. As you can see, there is lots of variability and the lines are (close to) parallel for quite a few of the students.
This plot should prepare you for the outcome that no matter how you do the bootstrapping and really, no matter what model you choose to fit, the interaction term is (likely) not going to be significant. This is not surprising: Interaction require bigger samples to estimate precisely. See You need 16 times the sample size to estimate an interaction than to estimate a main effect.
I specify B = 5,000 boostrap resamples. This dataset is small and my laptop can handle it (in about 1 minute). This is probably too many replicates. With large B we can only decrease one source of error: the sampling error due to the randomness in the bootstrapping. We can't reduce the error due to having data on only 21 students.
B <- 5000
tic()
mod1_boot <- bootstrap(mod1, .f = fixef, type = "residual", B = B)
toc()
#> 51.31 sec elapsed
I choose the "normal" confidence interval because the "basic" and "percentile" confidence intervals have worse statistical properties. See Is it true that the percentile bootstrap should never be used?
confint(mod1_boot, type = "norm")
#> # A tibble: 4 × 6
#> term estimate lower upper type level
#> <chr> <dbl> <dbl> <dbl> <chr> <dbl>
#> 1 (Intercept) 17.6 16.9 18.3 norm 0.95
#> 2 YEAR2 1.14 0.181 2.11 norm 0.95
#> 3 GROUPB 0.915 -0.00817 1.86 norm 0.95
#> 4 YEAR2:GROUPB -0.602 -1.94 0.718 norm 0.95
We get a confidence interval with the bootstrap procedure, not a p-value, but obviously, the confidence interval either contains 0 or it doesn't.
It might be instructive to try out different types of bootstrapping methods (I tried the "parametric" and "residual" bootstraps.) but not particularly interesting: the confidence interval for the interaction term is wide and as expected we can't draw much of a conclusion about the strength of the interaction based on this small dataset.
Update: The biggest difference is between type="case" and the other types of bootstrap. With "case", the procedure samples students but not the four observations by the same student. This actually might be the most appropriate procedure given the structure of your data.
tic()
mod1_boot <- bootstrap(mod1, .f = fixef, type = "case", B = B,
resample = c(TRUE, FALSE))
toc()
#> 86.396 sec elapsed
confint(mod1_boot, type = "norm")
#> # A tibble: 4 × 6
#> term estimate lower upper type level
#> <chr> <dbl> <dbl> <dbl> <chr> <dbl>
#> 1 (Intercept) 17.6 16.7 18.5 norm 0.95
#> 2 YEAR2 1.14 0.184 2.11 norm 0.95
#> 3 GROUPB 0.915 0.133 1.69 norm 0.95
#> 4 YEAR2:GROUPB -0.602 -1.78 0.598 norm 0.95
R code to reproduce the figures and the analysis.
library("tictoc")
library("broom.mixed")
library("lme4")
library("lmeresampler")
library("tidyverse")
data <- data %>%
rename(
SCORE = CONT_Y,
YEAR = CATEGORIES,
GROUP = MY_GROUP,
PARTICIPANT = PARTICIPANTS
) %>%
mutate(
GROUP = recode(GROUP,
"G1" = "GroupA",
"G2" = "GroupB"
),
YEAR = recode(YEAR,
"A" = "Year1",
"B" = "Year2"
)
)
data %>%
ggplot(
aes(
YEAR, SCORE,
group = PARTICIPANT,
color = GROUP
)
) +
geom_line() +
facet_grid(
~GROUP
) +
theme(
axis.title.x = element_blank(),
legend.position = "none"
)
data %>%
ggplot(
aes(
YEAR, SCORE,
group = GROUP,
color = GROUP
)
) +
geom_line() +
facet_wrap(
~PARTICIPANT,
ncol = 7
) +
theme(
axis.title.x = element_blank(),
legend.position = "none"
)
mod1 <- lmer(
SCORE ~ YEAR * GROUP + (1 | PARTICIPANT),
data = data,
REML = FALSE
)
B <- 5000
tic()
mod1_boot <- bootstrap(mod1, .f = fixef, type = "residual", B = B)
toc()
confint(mod1_boot, type = "norm")
tic()
mod1_boot <- bootstrap(mod1,
.f = fixef, type = "case", B = B,
resample = c(TRUE, FALSE)
)
toc()
confint(mod1_boot, type = "norm") | bootstrapping a linear mixed model with R's lmeresampler or lme4 or a robust regression?
You can certainly use bootstrapping with the (new to me) lmeresampler package but, as you point out, there are only 21 students/participants. You should expect lots of uncertainty, esp. about the inte |
55,535 | Levels of significance for a publication | To have two significant figures, $0.06$ should be written as $0.060$ (to rule out something like $0.062$), and $0.003$ should be written as $0.0030$ (to rule out something like $0.0031$).
Additionally, $0.00021>0.0001$, so that should be written as $0.00021$. | Levels of significance for a publication | To have two significant figures, $0.06$ should be written as $0.61$ (to rule out something like $0.062$), and $0.003$ should be written as $0.0030$ (to rule out something like $0.0031$).
Additionally, | Levels of significance for a publication
To have two significant figures, $0.06$ should be written as $0.61$ (to rule out something like $0.062$), and $0.003$ should be written as $0.0030$ (to rule out something like $0.0031$).
Additionally, $0.00021>0.0001$, so that should be written as $0.00021$. | Levels of significance for a publication
To have two significant figures, $0.06$ should be written as $0.61$ (to rule out something like $0.062$), and $0.003$ should be written as $0.0030$ (to rule out something like $0.0031$).
Additionally, |
55,536 | Does gridsearch on random forest/extra trees make sense? | You are right that randomness will play a role (like with many other algorithms including MCMC samplers for Bayesian models, XGBoost, LightGBM, neural networks etc.) in the results. The obvious way to minimize randomness in the results of any hyper-parameter optimization method for RF (whether it's random grid-search, grid search or some Bayesian hyperparameter optimization method) is to increase the number of trees (which reduces the randomness in the model behavior - albeit at the cost of an increased training time). Alternatively, you construct a surrogate model on top of the results that takes into account that the signal, of where the best model in the hyperparameter landscape is, is noisy through an appropriate amount of smoothing/regularization. | Does gridsearch on random forest/extra trees make sense? | You are right that randomness will play a role (like with many other algorithms including MCMC samplers for Bayesian models, XGBoost, LightGBM, neural networks etc.) in the results. The obvious way to | Does gridsearch on random forest/extra trees make sense?
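As a small illustration of the number-of-trees point (my own sketch, not from the answer; it assumes the ranger package and toy data): the run-to-run spread of the out-of-bag error estimate shrinks as the number of trees grows, which is exactly what makes the hyperparameter-search signal less noisy.
library(ranger)
set.seed(1)
d <- data.frame(y = factor(rbinom(300, 1, 0.5)), x1 = rnorm(300), x2 = rnorm(300))
oob_sd <- function(num_trees, reps = 20) {
  # standard deviation of the OOB error across repeated fits with different seeds
  sd(replicate(reps, ranger(y ~ ., data = d, num.trees = num_trees)$prediction.error))
}
sapply(c(50, 500, 5000), oob_sd)  # the spread decreases as num.trees increases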
You are right that randomness will play a role (like with many other algorithms including MCMC samplers for Bayesian models, XGBoost, LightGBM, neural networks etc.) in the results. The obvious way to minimize randomness in the results of any hyper-parameter optimization method for RF (whether it's random grid-search, grid search or some Bayesian hyperparameter optimization method) is to increase the number of trees (which reduces the randomness in the model behavior - albeit at the cost of an increased training time). Alternatively, you construct a surrogate model on top of the results that takes into account that the signal, of where the best model in the hyperparameter landscape is, is noisy through an appropriate amount of smoothing/regularization. | Does gridsearch on random forest/extra trees make sense?
You are right that randomness will play a role (like with many other algorithms including MCMC samplers for Bayesian models, XGBoost, LightGBM, neural networks etc.) in the results. The obvious way to |
55,537 | Does gridsearch on random forest/extra trees make sense? | To add a little to @Björn's answer, when the model selection criterion is noisy (or there is a random element to the classifier) grid search (or random search) actually makes more sense than some more elegant or more efficient model selection procedures, such as gradient descent or Nelder-Mead simplex, where the randomness may affect the termination criterion for the optimisation algorithm (they generally stop when the improvement in performance or the "gradient" is small).
Randomness in the construction of a classifier is not ideal, so minimising it by e.g. using lots of trees is a good idea. One problem is that the noisyness of the classifier construction may make it easier to over-fit the model selection criteria if the "randomness" of a particular classifier just happens to suit the sampling variation in the validation set (or cross-validation) error. | Does gridsearch on random forest/extra trees make sense? | To add a little to @Björn's answer, when the model selection criterion is noisy (or there is a random element to the classifier) grid search (or random search) actually makes more sense than some more | Does gridsearch on random forest/extra trees make sense?
To add a little to @Björn's answer, when the model selection criterion is noisy (or there is a random element to the classifier) grid search (or random search) actually makes more sense than some more elegant or more efficient model selection procedures, such as gradient descent or Nelder-Mead simplex, where the randomness may affect the termination criterion for the optimisation algorithm (they generally stop when the improvement in performance or the "gradient" is small).
Randomness in the construction of a classifier is not ideal, so minimising it by e.g. using lots of trees is a good idea. One problem is that the noisyness of the classifier construction may make it easier to over-fit the model selection criteria if the "randomness" of a particular classifier just happens to suit the sampling variation in the validation set (or cross-validation) error. | Does gridsearch on random forest/extra trees make sense?
To add a little to @Björn's answer, when the model selection criterion is noisy (or there is a random element to the classifier) grid search (or random search) actually makes more sense than some more |
55,538 | Calculating accuracy of prediction | I'll take a somewhat different track from Demetri. There are multiple aspects here.
A best quantification of prediction accuracy is surprisingly tricky. An accuracy measure is, for practical purposes, a mapping $f$ that takes your point prediction $\hat{p}$ (the predicted proportion of blue marbles, say) and a random sample $x$ from the bag and returns an "accuracy" value. "For practical purposes", because this again relies on sampling from the bag, because we will usually not know the true proportions. This is Demetri's argument.
So we now are faced with an $f(\hat{p},x)$ that is a random variable, because it depends on the sample $x$. (Typically, we would sample multiple $x$ and take the average, $\frac{1}{n}\sum_{i=1}^n f(\hat{p},x_i)$.) We can now take a look at the statistical properties of our random variable $f(\hat{p},x)$ as a function of $\hat{p}$. For instance, we can require that the expected accuracy be maximized when the predicted proportion $\hat{p}$ is equal to the true proportion $p$. This will quickly lead to proper scoring rules and the log loss (another term for the entropy Demetri notes). The log loss has this property: it will elicit an unbiased estimate $\hat{p}$ of $p$.
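To make that property concrete (a short added derivation): the expected log loss of a fixed prediction $\hat{p}$ is $-\left[p\ln\hat{p} + (1-p)\ln(1-\hat{p})\right]$, and setting its derivative with respect to $\hat{p}$ to zero gives
$$-\frac{p}{\hat{p}} + \frac{1-p}{1-\hat{p}} = 0 \quad\Longleftrightarrow\quad \hat{p} = p,$$
so the expected loss is minimized exactly at the true proportion, which is what makes the log loss a proper scoring rule.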
Alternatively, we could of course look at error measures we want to minimize (instead of maximizing accuracy - it's the same thing, of course).
Here is a trap for the unwary. You might be tempted to use something like the Mean Absolute Error between $\hat{p}$ and the proportion of blue marbles you draw. This amounts to
$$ f(\hat{p},x)=\begin{cases} 1-\hat{p} & \text{if $x$ is blue} \\
\hat{p} & \text{if $x$ is red} \end{cases} $$
This looks similar to, but simpler than, the log loss. Instead of the log, we essentially have an absolute value. However, here is the problem: the MAE will not be minimized by the true proportion. Rather, if $p>0.5$, the MAE will be minimized by setting $\hat{p}=1$, regardless of the actual value of $p$!
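For a quick analytic check of this claim (an addition of mine): with true blue-proportion $p$, the expected error of a fixed prediction $\hat{p}$ is
$$E\,f(\hat{p},X) = p(1-\hat{p}) + (1-p)\hat{p} = p + (1-2p)\hat{p},$$
which is linear in $\hat{p}$ with slope $1-2p$. For $p>0.5$ the slope is negative, so the expectation keeps decreasing all the way to $\hat{p}=1$; for $p<0.5$ it is minimized at $\hat{p}=0$.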
You can either calculate the expected value of the MAE for each given $\hat{p}$ analytically, or trust a little simulation: for a true $p=0.8$, I looked at $\hat{p}=0, 0.01, 0.02, \dots, 1$, simulated $10,000$ draws from the urn in each case, calculated the MAE, and plotted the MAE against the prediction $\hat{p}$. While I was at it, I did the same using the Mean Squared Error (AKA the Brier Score) and the log loss instead of the MAE (with a minus sign, so we want to minimize the log loss), and added a red vertical line at the true proportion $p=0.8$:
As you see, using the MAE will reward us more (have lower errors) for a biased prediction $\hat{p}=1$ than for the unbiased estimate $\hat{p}=p=0.8$. (If $p<0.5$, the MAE will like $\hat{p}=0$ best.) Thus, if you want an unbiased estimate of $p$, you should not use the MAE. Alternatives include the Mean Squared Error, or the log loss per Demetri.
Well, now we have two error measures, the MSE and the log loss, that will at least not systematically mislead us (like the MAE). How do we choose between these two? (And of course, there are infinitely many other such error metrics which will elicit an unbiased prediction.) Now the question is how painful deviations from the true $p$ are for you. As you see in the simulated plots, as the predicted proportion $\hat{p}$ varies away from the true $p$, the log loss and the MSE behave rather differently. And you can of course use transformations of these error metrics to change their behavior further. As Demetri writes, it comes down to whether a prediction of $\hat{p}=0.6$ or $\hat{p}=1$ is more painful if the true $p=0.8$ - where in practice you will not know the true $p$. A few thoughts specifically on the comparison between the log loss and the MSE/Brier score can be found at Why is LogLoss preferred over other proper scoring rules?
Finally, one last aspect. You ask about "standardizing" error metrics to the interval $[0,100\%]$. As Demetri writes, I do not think this is truly possible. On the one hand, you would need to know when you have reached 100% accuracy, and in a practical situation, this simply won't happen. Similarly, it's hard to say whether you have reached 99% or 99.9% accuracy. It might be easier to assign 0% accuracy to a prediction - if you are certain that $p>0.5$, then you could say that a prediction of $\hat{p}=0$ is "as wrong as it can be", and you could say that this prediction is 0% accurate. Same for $\hat{p}=1$ if you know that $p<0.5$. The problem here is if you are unsure about the true $p$. And of course this only works because we know (actually do know here!) that $0\leq p\leq 1$ - if you are predicting, say, temperatures, it's very hard to say what is the "worst possible prediction" that we would like to say is 0% accurate.
R code for the plots:
true_prop <- 0.8
predictions <- seq(0,1,by=.01)
sims <- runif(10000)<true_prop
maes <- sapply(predictions,function(xx)mean(abs(xx-sims)))
mses <- sapply(predictions,function(xx)mean((xx-sims)^2))
logs <- -sapply(predictions[-c(1,length(predictions))],
function(xx)mean(c(rep(log(xx),sum(sims)),rep(log(1-xx),sum(!sims)))))
par(mfrow=c(1,3),mai=c(.8,.5,.5,.1),las=1)
plot(predictions,maes,type="l",xlab="Predicted proportion",
ylab="",main="Mean Absolute Error")
abline(v=true_prop,col="red")
plot(predictions,mses,type="l",xlab="Predicted proportion",
ylab="",main="Mean Squared Error/Brier Score",las=1)
abline(v=true_prop,col="red")
plot(predictions[-c(1,length(predictions))],logs,type="l",
xlab="Predicted proportion",ylab="",main="Log Loss",las=1)
abline(v=true_prop,col="red") | Calculating accuracy of prediction | I'll take a somewhat different track from Demetri. There are multiple aspects here.
A best quantification of prediction accuracy is surprisingly tricky. An accuracy measure is, for practical purposes, | Calculating accuracy of prediction
I'll take a somewhat different track from Demetri. There are multiple aspects here.
A best quantification of prediction accuracy is surprisingly tricky. An accuracy measure is, for practical purposes, a mapping $f$ that takes your point prediction $\hat{p}$ (the predicted proportion of blue marbles, say) and a random sample $x$ from the bag and returns an "accuracy" value. "For practical purposes", because this again relies on sampling from the bag, because we will usually not know the true proportions. This is Demetri's argument.
So we now are faced with an $f(\hat{p},x)$ that is a random variable, because it depends on the sample $x$. (Typically, we would sample multiple $x$ and take the average, $\frac{1}{n}\sum_{i=1}^n f(\hat{p},x_i)$.) We can now take a look at the statistical properties of our random variable $f(\hat{p},x)$ as a function of $\hat{p}$. For instance, we can require that the expected accuracy be maximized of the predicted proportion $\hat{p}$ is equal to the true proportion $p$ in expectation. This will quickly lead to proper scoring rules and the log loss (another term for the entropy Demetri notes). The log loss has this property: it will elicit an unbiased estimate $\hat{p}$ of $p$.
Alternatively, we could of course look at error measures we want to minimize (instead of maximizing accuracy - it's the same thing, of course).
Here is a trap for the unwary. You might be tempted to use something like the Mean Absolute Error between $\hat{p}$ and the proportion of blue marbles you draw. This amounts to
$$ f(\hat{p},x)=\begin{cases} 1-\hat{p} & \text{if $x$ is blue} \\
\hat{p} & \text{if $x$ is red} \end{cases} $$
This looks similar but simpler than the log loss. Instead of the log, we essentially have an absolute value. However, here is the problem: the MAE will not be minimized by the true proportion. Rather, if $p>0.5$, the MAE will be minimized by setting $\hat{p}=1$, regardless of the actual value of $p$!
You can either calculate the expected value of the MAE for each given $\hat{p}$ analytically, or trust a little simulation: for a true $p=0.8$, I looked at $\hat{p}=0, 0.01, 0.02, \dots, 1$, simulated $10,000$ draws from the urn in each case, calculated the MAE, and plotted the MAE against the prediction $\hat{p}$. While I was at it, I did the same using the Mean Squared Error (AKA the Brier Score) and the log loss instead of the MAE (with a minus sign, so we want to minimize the log loss), and added a red vertical line at the true proportion $p=0.8$:
As you see, using the MAE will reward us more (have lower errors) for a biased prediction $\hat{p}=1$ than for the unbiased estimate $\hat{p}=p=0.8$. (If $p<0.5$, the MAE will like $\hat{p}=0$ best.) Thus, if you want an unbiased estimate of $p$, you should not use the MAE. Alternatives include the Mean Squared Error, or the log loss per Demetri.
Well, now we have two error measures, the MSE and the log loss, that will at least not systematically mislead us (like the MAE). How do we choose between these two? (And of course, there are infinitely many other such error metrics which will elicit an unbiased prediction.) Now the question is how painful deviations from the true $p$ are for you. As you see in the simulated plots, as the predicted proportion $\hat{p}$ varies away from the true $p$, the log loss and the MSE behave rather differently. And you can of course use transformations of these error metrics to change their behavior further. As Demetri writes, it comes down to whether a prediction of $\hat{p}=0.6$ or $\hat{p}=1$ is more painful if the true $p=0.8$ - where in practice you will not know the true $p$. A few thoughts specifically on the comparison between the log loss and the MSE/Brier score can be found at Why is LogLoss preferred over other proper scoring rules?
Finally, one last aspect. You ask about "standardizing" error metrics to the interval $[0,100\%]$. As Demetri writes, I do not think this is truly possible. On the one hand, you would need to know when you have reached 100% accuracy, and in a practical situation, this simply won't happen. Similarly, it's hard to say whether you have reached 99% or 99.9% accuracy. It might be easier to assign 0% accuracy to a prediction - if you are certain that $p>0.5$, then you could say that a prediction of $\hat{p}=0$ is "as wrong as it can be", and you could say that this prediction is 0% accurate. Same for $\hat{p}=1$ if you know that $p<0.5$. The problem here is if you are unsure about the true $p$. And of course this only works because we know (actually do know here!) that $0\leq p\leq 1$ - if you are predicting, say, temperatures, it's very hard to say what is the "worst possible prediction" that we would like to say is 0% accurate.
R code for the plots:
true_prop <- 0.8
predictions <- seq(0,1,by=.01)
sims <- runif(10000)<true_prop
maes <- sapply(predictions,function(xx)mean(abs(xx-sims)))
mses <- sapply(predictions,function(xx)mean((xx-sims)^2))
logs <- -sapply(predictions[-c(1,length(predictions))],
function(xx)mean(c(rep(log(xx),sum(sims)),rep(log(1-xx),sum(!sims)))))
par(mfrow=c(1,3),mai=c(.8,.5,.5,.1),las=1)
plot(predictions,maes,type="l",xlab="Predicted proportion",
ylab="",main="Mean Absolute Error")
abline(v=true_prop,col="red")
plot(predictions,mses,type="l",xlab="Predicted proportion",
ylab="",main="Mean Squared Error/Brier Score",las=1)
abline(v=true_prop,col="red")
plot(predictions[-c(1,length(predictions))],logs,type="l",
xlab="Predicted proportion",ylab="",main="Log Loss",las=1)
abline(v=true_prop,col="red") | Calculating accuracy of prediction
I'll take a somewhat different track from Demetri. There are multiple aspects here.
A best quantification of prediction accuracy is surprisingly tricky. An accuracy measure is, for practical purposes, |
55,539 | Calculating accuracy of prediction | Is it possible to restrict such a quantification to a range between 0 and 100%, such that a correct prediction (20%) would evaluate to a 100% accuracy?
I don't believe so. What you're asking is "is there a way to know when my prediction is equal to the truth" and that just can't be done. That is the entire reason we need statistics.
There are ways to estimate what the prediction should be after having seen data. These methods find the prediction which would minimize an appropriate error metric, and so the prediction would be the best prediction you could make conditioned on the data you had seen so far.
As an example, suppose you pull 10 marbles, 3 of which are red. One way to measure your error is through the cross entropy loss.
$$ \ell(\theta) = - \sum_i \left[ y_i \ln(\theta) + (1-y_i)\ln(1-\theta) \right] $$
Here, $\theta$ is your prediction, $y_i=1$ is the event you get a red marble, and $y_i=0$ is the event you get a blue marble. Smaller numbers are preferable here. Using your 60% initial prediction your error would be approximately 8. Is that number bad? Is that number good? Tough to say, we need something to contrast against.
Let's use that data to create a new prediction. We saw that 30% of the marbles were red, so let's devise a new prediction: 30% of the samples are red.
We pull a new sample: 2 reds 8 blues. Now, let's compare the loss values of our initial prediction of 60% with our new prediction of 30%.
Error for 60% prediction: 8.345
Error for 30% prediction: 5.26
Our new prediction is much much better because it resulted in smaller error(in fact, it is the best prediction we could make having only seen those first 10 draws). | Calculating accuracy of prediction | Is it possible to restrict such a quantification to a range between 0 and 100%, such that a correct prediction (20%) would evaluate to a 100% accuracy?
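For reference, a tiny R sketch (my own helper, not from the original answer) that reproduces the two loss values above:
log_loss <- function(theta, n_red, n_blue) {
  # theta = predicted probability of red; counts come from the observed sample
  -(n_red * log(theta) + n_blue * log(1 - theta))
}
log_loss(0.6, 2, 8)  # ~8.345 (the 60% prediction)
log_loss(0.3, 2, 8)  # ~5.26  (the 30% prediction)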
I don't believe so. What you're asking is "is | Calculating accuracy of prediction
Is it possible to restrict such a quantification to a range between 0 and 100%, such that a correct prediction (20%) would evaluate to a 100% accuracy?
I don't believe so. What you're asking is "is there a way to know when my prediction is equal to the truth" and that just can't be done. That is the entire reason we need statistics.
There are ways to estimate what the prediction should be after having seen data. These methods find the prediction which would minimize an appropriate error metric, and so the prediction would be the best prediction you could make conditioned on the data you had seen so far.
As an example, suppose you pull 10 marbles, 3 of which are red. One way to measure your error is through the cross entropy loss.
$$ \ell(\theta) = - \sum_i y_i \ln(\theta) + (1-y_i)\ln(1-\theta) $$
Here, $\theta$ is your prediction, $y_i=1$ is the event you get a red marble, and $y_i=0$ is the event you get a blue marble. Smaller numbers are preferable here. Using your 60% initial prediction your error would be approximately 8. Is that number bad? Is that number good? Tough to say, we need something to contrast against.
Let's use that data to create a new prediction. We saw that 30% of the marbles were red, so let's devise a new prediction: 30% of the samples are red.
We pull a new sample: 2 reds 8 blues. Now, let's compare the loss values of our initial prediction of 60% with our new prediction of 30%.
Error for 60% prediction: 8.345
Error for 30% prediction: 5.26
Our new prediction is much much better because it resulted in smaller error(in fact, it is the best prediction we could make having only seen those first 10 draws). | Calculating accuracy of prediction
Is it possible to restrict such a quantification to a range between 0 and 100%, such that a correct prediction (20%) would evaluate to a 100% accuracy?
I don't believe so. What you're asking is "is |
55,540 | Interpretation of correlation coefficient between two binary variables | There are several possible interpretations. They come down to understanding the correlation between two binary variables.
By definition, the correlation of a joint random variable $(X,Y)$ is the expectation of the product of the standardized versions of these variables. This leads to several useful formulas commonly encountered, such as
$$\rho(X,Y) = \frac{\operatorname{Cov}(X,Y)}{\sqrt{\operatorname{Var}(X)\operatorname{Var}(Y)}}.$$
The distribution of any binary $(0,1)$ variable is determined by the chance it equals $1.$ Let $p=\Pr(X=1)$ and $q=\Pr(Y=1)$ be those chances. (To avoid discussing the trivial cases where either of these is 100% or 0%, let's assume $0\lt p \lt 1$ and $0\lt q \lt 1.$)
When, in addition, $b=\Pr((X,Y)=(1,1))$ is the chance both variables are simultaneously $1,$ the axioms of probability give full information about the joint distribution, summarized in this table:
$$\begin{array}{cc|l}
X & Y & \Pr(X,Y)\\
\hline
0 & 0 & 1 + b - p - q\\
0 & 1 & q-b\\
1 & 0 & p-b\\
1 & 1 & b\\ \hline
\end{array}$$
From this information we may compute $\operatorname{Var}(X) = p(1-p),$ $\operatorname{Var}(Y)=q(1-q),$ and $\operatorname{Cov}(X,Y) = b-pq.$ Plugging this into the formula for the correlation gives
$$\rho(X,Y) = \frac{b - pq}{\sqrt{p(1-p)q(1-q)}} = \lambda b - \mu$$
where the positive numbers $\lambda$ and $\mu$ depend on $p$ and $q$ but not on $b.$ This informs us that when the marginal distributions are fixed,
the correlation of $X$ and $Y$ is a linear function of the chance $X$ and $Y$ are simultaneously equal to $1;$ and vice versa.
The latter statement follows by solving $b = (\rho + \mu)/\lambda,$ which is a linear function of $\rho.$
Since $1-X$ and $1-Y$ are binary variables, too, this result when applied to them translates to a slight generalization: the correlation is a linear function of any one of the four individual probabilities listed in the table.
Consequently, you can always re-interpret the correlation in terms of the chance of any specific joint outcome when the variables are binary.
As an example, suppose $p=q=1/2$ and you have in hand (through a calculation, estimate, or assumption) a correlation coefficient of $\rho = 0.12.$ Compute that $\lambda = 4$ and $\mu = 1.$ Because $0\le b \le 1/2$ is forced on us by the laws of probability, $\rho = 4b-1$ ranges from $-1$ (when $b=0$) to $+1$ (when $b=1/2$). Conversely, $b = (1 + \rho)/4$ in this case, giving $b = (1 + 0.12)/4 = 0.28.$
Another natural interpretation would be in terms of the proportion of time the variables are equal. According to the table, that chance would be given by $(1+b-p-q) + b=1+2b-p-q.$ Calling this quantity $e,$ we have $b = (e+p+q-1)/2,$ which when plugged into the formula for $\rho$ gives
$$\rho(X,Y) = \frac{e-(1-p)(1-q)-pq}{2\sqrt{p(1-p)q(1-q)}} = \kappa e - \nu$$
for positive numbers $\kappa$ and $\nu$ that depend on $p$ and $q$ but not on $e.$ Thus, just as before,
the correlation of $X$ and $Y$ is a linear function of the chance $X$ and $Y$ are simultaneously equal to each other; and vice versa.
Continuing the example with $p=q=1/2,$ compute that $\kappa = 2$ and $\nu = 1.$ Consequently $e = (\nu + \rho)/\kappa = (1 + \rho)/2.$
It might be handy, then, to have efficient code to convert a correlation matrix into a matrix of joint probabilities and vice versa. Here are some examples in R implementing the first interpretation. Of course, both functions require you to supply the vector of binary probabilities ($p,$ $q,$ and so on) and they assume your probabilities and matrices are mathematically possible.
#
# Convert a correlation matrix `Rho` to a matrix of chances that
# binary variables are jointly equal to 1. `p` is the array of expected values.
#
corr.to.prop <- function(Rho, p) {
s <- sqrt(p * (1-p))
Rho * outer(s, s) + outer(p, p)
}
#
# Convert a matrix of chances `B` that binary variables are jointly equal to 1
# into a correlation matrix. `p` is the array of expected values.
#
prop.to.corr <- function(B, p) {
s <- 1/sqrt(p * (1-p))
(B - outer(p, p)) * outer(s, s)
} | Interpretation of correlation coefficient between two binary variables | There are several possible interpretations. They come down to understanding the correlation between two binary variables.
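As a quick usage check (an addition; it ties back to the $p=q=1/2,$ $\rho=0.12$ example worked above):
p <- c(0.5, 0.5)
Rho <- matrix(c(1, 0.12, 0.12, 1), 2, 2)
B <- corr.to.prop(Rho, p)
B[1, 2]                   # 0.28, matching the hand calculation
prop.to.corr(B, p)[1, 2]  # recovers the correlation 0.12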
By definition, the correlation of a joint random variable $(X,Y)$ is the expe | Interpretation of correlation coefficient between two binary variables
There are several possible interpretations. They come down to understanding the correlation between two binary variables.
By definition, the correlation of a joint random variable $(X,Y)$ is the expectation of the product of the standardized versions of these variables. This leads to several useful formulas commonly encountered, such as
$$\rho(X,Y) = \frac{\operatorname{Cov}(X,Y)}{\sqrt{\operatorname{Var}(X)\operatorname{Var}(Y)}}.$$
The distribution of any binary $(0,1)$ variable is determined by the chance it equals $1.$ Let $p=\Pr(X=1)$ and $q=\Pr(Y=1)$ be those chances. (To avoid discussing the trivial cases where either of these is 100% or 0%, let's assume $0\lt p \lt 1$ and $0\lt q \lt 1.$)
When, in addition, $b=\Pr((X,Y)=(1,1))$ is the chance both variables are simultaneously $1,$ the axioms of probability give full information about the joint distribution, summarized in this table:
$$\begin{array}{cc|l}
X & Y & \Pr(X,Y)\\
\hline
0 & 0 & 1 + b - p - q\\
0 & 1 & q-b\\
1 & 0 & p-b\\
1 & 1 & b\\ \hline
\end{array}$$
From this information we may compute $\operatorname{Var}(X) = p(1-p),$ $\operatorname{Var}(Y)=q(1-q),$ and $\operatorname{Cov}(X,Y) = b-pq.$ Plugging this into the formula for the correlation gives
$$\rho(X,Y) = \frac{b - pq}{\sqrt{p(1-p)q(1-q)}} = \lambda b - \mu$$
where the positive numbers $\lambda$ and $\mu$ depend on $p$ and $q$ but not on $b.$ This informs us that when the marginal distributions are fixed,
the correlation of $X$ and $Y$ is a linear function of the chance $X$ and $Y$ are simultaneously equal to $1;$ and vice versa.
The latter statement follows by solving $b = (\rho + \mu)/\lambda,$ which is a linear function of $\rho.$
Since $1-X$ and $1-Y$ are binary variables, too, this result when applied to them translates to a slight generalization: the correlation is a linear function of any one of the four individual probabilities listed in the table.
Consequently, you can always re-interpret the correlation in terms of the chance of any specific joint outcome when the variables are binary.
As an example, suppose $p=q=1/2$ and you have in hand (through a calculation, estimate, or assumption) a correlation coefficient of $\rho = 0.12.$ Compute that $\lambda = 4$ and $\mu = 1.$ Because $0\le b \le 1/2$ is forced on us by the laws of probability, $\rho = 4b-1$ ranges from $-1$ (when $b=0$) to $+1$ (when $b=1/2$). Conversely, $b = (1 + \rho)/4$ in this case, giving $b = (1 + 0.12)/4 = 0.28.$
Another natural interpretation would be in terms of the proportion of time the variables are equal. According to the table, that chance would be given by $(1+b-p-q) + b=1+2b-p-q.$ Calling this quantity $e,$ we have $b = (e+p+q-1)/2,$ which when plugged into the formula for $\rho$ gives
$$\rho(X,Y) = \frac{e-(1-p)(1-q)-pq}{2\sqrt{p(1-p)q(1-q)}} = \kappa e - \nu$$
for positive numbers $\kappa$ and $\nu$ that depend on $p$ and $q$ but not on $e.$ Thus, just as before,
the correlation of $X$ and $Y$ is a linear function of the chance $X$ and $Y$ are simultaneously equal to each other; and vice versa.
Continuing the example with $p=q=1/2,$ compute that $\kappa = 2$ and $\nu = 1.$ Consequently $e = (\nu + \rho)/\kappa = (1 + \rho)/2.$
It might be handy, then, to have efficient code to convert a correlation matrix into a matrix of joint probabilities and vice versa. Here are some examples in R implementing the first interpretation. Of course, both functions require you to supply the vector of binary probabilities ($p,$ $q,$ and so on) and they assume your probabilities and matrices are mathematically possible.
#
# Convert a correlation matrix `Rho` to a matrix of chances that
# binary variables are jointly equal to 1. `p` is the array of expected values.
#
corr.to.prop <- function(Rho, p) {
s <- sqrt(p * (1-p))
Rho * outer(s, s) + outer(p, p)
}
#
# Convert a a matrix of chances `B` that binary variables are jointly equal to 1
# into a correlation matrix. `p` is the array of expected values.
#
prop.to.corr <- function(B, p) {
s <- 1/sqrt(p * (1-p))
(B - outer(p, p)) * outer(s, s)
} | Interpretation of correlation coefficient between two binary variables
There are several possible interpretations. They come down to understanding the correlation between two binary variables.
By definition, the correlation of a joint random variable $(X,Y)$ is the expe |
55,541 | Interpretation of correlation coefficient between two binary variables | For binary data, the correlation coefficient is:
$$r = \frac{p_{11}-p_{1 \bullet} p_{\bullet 1}}{\sqrt{p_{1 \bullet} p_{\bullet 1} (1-p_{1 \bullet})(1-p_{\bullet 1})}},$$
where $p_{1 \bullet}$ and $p_{\bullet 1}$ are the proportions of occurrences for each individual variable and $p_{11}$ is the proportion of mutual occurrence in both variables taken together (the latter is your 18% in this case). As you can see from the formula, it is not generally the case that $r=p_{11}$. The formula also takes account of the proportion of occurrences in each of the individual samples. | Interpretation of correlation coefficient between two binary variables | For binary data, the correlation coefficient is:
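A small numerical sketch (an addition, not part of the original answer) confirming that this expression is just the ordinary Pearson correlation applied to 0/1 data:
set.seed(1)
x <- rbinom(1000, 1, 0.4)
y <- rbinom(1000, 1, 0.2 + 0.5 * x)   # make y depend on x
p1 <- mean(x); p2 <- mean(y); p11 <- mean(x == 1 & y == 1)
(p11 - p1 * p2) / sqrt(p1 * p2 * (1 - p1) * (1 - p2))
cor(x, y)  # agrees with the value from the formula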
$$r = \frac{p_{11}-p_{1 \bullet} p_{\bullet 1}}{\sqrt{p_{1 \bullet} p_{\bullet 1} (1-p_{1 \bullet})(1-p_{\bullet 1})}},$$
where $p_{1 \bullet}$ and $p_ | Interpretation of correlation coefficient between two binary variables
For binary data, the correlation coefficient is:
$$r = \frac{p_{11}-p_{1 \bullet} p_{\bullet 1}}{\sqrt{p_{1 \bullet} p_{\bullet 1} (1-p_{1 \bullet})(1-p_{\bullet 1})}},$$
where $p_{1 \bullet}$ and $p_{\bullet 1}$ are the proportions of occurrences for each individual variable and $p_{11}$ is the proportion of mutual occurrence in both variables taken together (the latter is your 18% in this case). As you can see from the formula, it is not generally the case that $r=p_{11}$. The formula also takes account of the proportion of occurrences in each of the individual samples. | Interpretation of correlation coefficient between two binary variables
For binary data, the correlation coefficient is:
$$r = \frac{p_{11}-p_{1 \bullet} p_{\bullet 1}}{\sqrt{p_{1 \bullet} p_{\bullet 1} (1-p_{1 \bullet})(1-p_{\bullet 1})}},$$
where $p_{1 \bullet}$ and $p_ |
55,542 | Simulating a joint distribution with the inverse method | For the record, integrating out $y$ gives the marginal density
$$f_X(x) = \int_0^1 f(x,y)\,\mathrm{d}y = 3x^2e^{-x^3}.$$
By inspection (or integration via a substitution for $x^3$) this has distribution function
$$F_X(x) = 1 - e^{-x^3}\quad (x \ge 0).$$
Conditional on $x,$ the density of $y$ is proportional to $y^x,$ whose distribution function is
$$F_{Y\mid X}(y\mid x) = y^{1+x}\quad (0\le y \le 1).$$
Thus, if we draw a uniform variable $U$ in the range $[0,1],$ $x = F_X^{-1}(U)$ will have the same distribution as $X;$ and then if we independently draw another uniform variable $V,$ solving the equation
$$V = F_{Y\mid X}(y\mid x)$$
for $y$ will give us one realization $(x,y)$ from the distribution with density $f.$
A practical example is this R implementation, which draws n iid values from this distribution:
rjoint <- function(n) {
x <- (-log(runif(n)))^(1/3)
y <- runif(n)^(1/(1+x))
cbind(x, y)
}
(It saves a little time by applying $F_X^{-1}$ to $1-U,$ which also has a uniform distribution.)
The return value is an $n\times 2$ array with $x$ in the first column and $y$ in the second.
The result of drawing ten million values with x <- rjoint(1e7) (which takes two seconds on this machine) looks like this figure, where contours of the resulting 2D density are drawn over a colored contour plot of $f$ (both use the same contour intervals ranging from $0.2$ near the bottom to $2.2$ at the top).
The agreement looks good.
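As an extra numerical sanity check (added), one can compare an empirical probability for the $x$ marginal against the closed-form CDF derived above:
z <- rjoint(1e6)
mean(z[, "x"] <= 1)   # empirical P(X <= 1)
1 - exp(-1)           # theoretical F_X(1), about 0.632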
Comment
This problem had two obvious approaches: integrate out $y$ or integrate out $x.$ Both work, but the latter is so nasty I don't know how to do it by hand. The Wolfram Language 13 engine found an expression:
However, it doesn't know how to simplify or invert this function ;-). | Simulating a joint distribution with the inverse method | For the record, integrating out $y$ gives the marginal density
$$f_X(x) = \int_0^1 f(x,y)\,\mathrm{d}y = 3x^2e^{-x^3}.$$
By inspection (or integration via a substitution for $x^3$) this has distributi | Simulating a joint distribution with the inverse method
For the record, integrating out $y$ gives the marginal density
$$f_X(x) = \int_0^1 f(x,y)\,\mathrm{d}y = 3x^2e^{-x^3}.$$
By inspection (or integration via a substitution for $x^3$) this has distribution function
$$F_X(x) = 1 - e^{-x^3}\quad (x \ge 0).$$
Conditional on $x,$ the density of $y$ is proportional to $y^x,$ whose distribution function is
$$F_{Y\mid X}(y\mid x) = y^{1+x}\quad (0\le y \le 1).$$
Thus, if we draw a uniform variable $U$ in the range $[0,1],$ $x = F_X^{-1}(U)$ will have the same distribution as $X;$ and then if we independently draw another uniform variable $V,$ solving the equation
$$V = F_{Y\mid X}(y\mid x)$$
for $y$ will give us one realization $(x,y)$ from the distribution with density $f.$
A practical example is this R implementation, which draws n iid values from this distribution:
rjoint <- function(n) {
x <- (-log(runif(n)))^(1/3)
y <- runif(n)^(1/(1+x))
cbind(x, y)
}
(It saves a little time by applying $F_X^{-1}$ to $1-U,$ which also has a uniform distribution.)
The return value is an $n\times 2$ array with $x$ in the first column and $y$ in the second.
The result of drawing ten million values with x <- rjoint(1e7) (which takes two seconds on this machine) looks like this figure, where contours of the resulting 2D density are drawn over a colored contour plot of $f$ (both use the same contour intervals ranging from $0.2$ near the bottom to $2.2$ at the top).
The agreement looks good.
Comment
This problem had two obvious approaches: integrate out $y$ or integrate out $x.$ Both work, but the latter is so nasty I don't know how to do it by hand. The Wolfram Language 13 engine found an expression:
However, it doesn't know how to simplify or invert this function ;-). | Simulating a joint distribution with the inverse method
For the record, integrating out $y$ gives the marginal density
$$f_X(x) = \int_0^1 f(x,y)\,\mathrm{d}y = 3x^2e^{-x^3}.$$
By inspection (or integration via a substitution for $x^3$) this has distributi |
55,543 | Is there a preference in the regression performance metric for regression models with the same type of loss minimization? | Dave's answer has nothing to do with whether there's bias in an in-sample metric vs. an out-of-sample metric when the algorithm optimizes the in-sample metric. His answer addresses whether minimizing the in-sample metric necessarily also minimizes the (expected) out-of-sample metric (Edit: it doesn't); it says nothing about the relative values of the two. The bias issue states that if you do minimize an in-sample metric, the corresponding out-of-sample metric can be expected to be worse; it says nothing about whether some other objective function could improve the OOS metric. | Is there a preference in the regression performance metric for regression models with the same type | Dave's answer has nothing to do with whether there's bias in an in-sample metric vs. an out-of-sample metric when the algorithm optimizes the in-sample metric. His answer addresses whether minimizing | Is there a preference in the regression performance metric for regression models with the same type of loss minimization?
Dave's answer has nothing to do with whether there's bias in an in-sample metric vs. an out-of-sample metric when the algorithm optimizes the in-sample metric. His answer addresses whether minimizing the in-sample metric necessarily also minimizes the (expected) out-of-sample metric (Edit: it doesn't); it says nothing about the relative values of the two. The bias issue states that if you do minimize an in-sample metric, the corresponding out-of-sample metric can be expected to be worse; it says nothing about whether some other objective function could improve the OOS metric. | Is there a preference in the regression performance metric for regression models with the same type
Dave's answer has nothing to do with whether there's bias in an in-sample metric vs. an out-of-sample metric when the algorithm optimizes the in-sample metric. His answer addresses whether minimizing |
55,544 | Is there a preference in the regression performance metric for regression models with the same type of loss minimization? | Since the out-of-sample data are different from the in-sample data, all bets are off when it comes to what the out-of-sample metric chooses as its preferred model. In some sense, we are tuning the in-sample loss function as a hyperparameter in order to achieve the best out-of-sample performance on our metric of choice. If the out-of-sample metric prefers a model trained with a different loss function, so be it! That’s why we tune the hyperparameter.
I would present the argument like that. I also would be comfortable giving simulations or empirical data where, for example, out-of-sample square loss prefers a model that was trained with absolute loss than with square loss (such as the examples I gave with Iris and simulations). | Is there a preference in the regression performance metric for regression models with the same type | Since the out-of-sample data are different from the in-sample data, all bets are off when it comes to what the out-of-sample metric chooses as its preferred model. In some sense, we are tuning the in- | Is there a preference in the regression performance metric for regression models with the same type of loss minimization?
Since the out-of-sample data are different from the in-sample data, all bets are off when it comes to what the out-of-sample metric chooses as its preferred model. In some sense, we are tuning the in-sample loss function as a hyperparameter in order to achieve the best out-of-sample performance on our metric of choice. If the out-of-sample metric prefers a model trained with a different loss function, so be it! That’s why we tune the hyperparameter.
I would present the argument like that. I also would be comfortable giving simulations or empirical data where, for example, out-of-sample square loss prefers a model that was trained with absolute loss than with square loss (such as the examples I gave with Iris and simulations). | Is there a preference in the regression performance metric for regression models with the same type
Since the out-of-sample data are different from the in-sample data, all bets are off when it comes to what the out-of-sample metric chooses as its preferred model. In some sense, we are tuning the in- |
55,545 | A universal measure of the accuracy of linear regression models | You say that you don’t want to use a metric that is going to prefer a particular model, so out-of-sample (R)MSE is out because it will prefer the model that was trained with square loss. Au contraire! Let’s do a simulation and show a model trained by minimizing square loss having greater out-of-sample square loss than a model trained by minimizing absolute loss.
library(quantreg)
library(MASS)
set.seed(2022)
N <- 100
# Define correlated predictors
#
X <- MASS::mvrnorm(100, c(0, 0), matrix(c(1, 0.9, 0.9, 1), 2, 2))
# Define response variable
#
ye <- 3 - X[, 1] + 2*X[, 2]
e <- rt(N, 1.1) # error term, t-distributed with heavy tails for outliers
y <- ye + e
# Allocate the first 20 observations to testing and the rest to training
#
X_test <- X[1:20, ]
y_test <- y[1:20]
#
X_train <- X[21:N, ]
y_train <- y[21:N]
# Define an OLS linear model
#
x1 <- X_train[, 1]
x2 <- X_train[, 2]
L1 <- lm(y_train ~ x1 + x2)
# Define a linear model trained using MAE
#
L2 <- quantreg::rq(y_train ~ x1 + x2, tau = 0.5)
# Make predictions for the test set using each of the two models
#
p1 <- predict(L1, data.frame(x1 = X_test[, 1], x2 = X_test[, 2]))
p2 <- predict(L2, data.frame(x1 = X_test[, 1], x2 = X_test[, 2]))
# Calculate the MSE for both sets of predictions
#
mse1 <- mean((y_test - p1)^2)
mse2 <- mean((y_test - p2)^2)
print(paste(
"OLS has MSE = ", mse1 # I get ~10.1
))
print(paste(
"Median quantile regression has MSE = ", mse2 # I get ~7.7, so lower
# than the OLS model gave
))
In this setup, which uses correlated features and outliers (from the t-distributed error term), the OLS regression has worse out-of-sample MSE than the MAE-trained regression, showing that out-of-sample MSE need not prefer the model that minimized in-sample MSE.
Consequently, if you believe out-of-sample square loss to be the metric of interest, go with that. Since you are not assured of picking a model that you trained by minimizing square loss, it is worth your time to investigate such models.
It might be surprising for the model trained on the out-of-sample metric to lose to a model trained on a different metric, but it should not be. This is what happens when we apply, for instance, ridge regression. In ridge regression, we minimize a loss function that is slightly different from square loss, hoping that this alternative gives us better out-of-sample performance on regular square loss than the model trained on regular square loss.
For an out-of-sample evaluation metric (perhaps even for in-sample training), you may be interested in relative error metrics. “Mean absolute percent deviation” is the easiest to understand. The Wikipedia page discusses shortcomings of this metric and potential alternatives. Our Stephan Kolassa has a nice discussion about this topic, too.
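For concreteness, one common definition (my addition; it is one of several variants, and the caveats in those links, such as actual values near zero, apply):
mape <- function(actual, predicted) {
  # mean absolute percentage error, in percent
  100 * mean(abs((actual - predicted) / actual))
}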
You also mentioned wanting to evaluate the accuracy of coefficient estimates. We can investigate this with another simulation, which will show the MSE-trained model (OLS) to have greater MSE when it comes to coefficient estimates than the MAE-trained model.
library(quantreg)
library(MASS)
library(ggplot2)
set.seed(2022)
N <- 100
R <- 1000
b0 <- 3
b1 <- -1
b2 <- 2
M1 <- M2 <- matrix(NA, R, 3) # Matrices for holding estimated coefficients
for (i in 1:R){
# Define correlated predictors
#
X <- MASS::mvrnorm(100, c(0, 0), matrix(c(1, 0.9, 0.9, 1), 2, 2))
# Define response variable
#
ye <- b0 + b1*X[, 1] + b2*X[, 2]
e <- rt(N, 1.1) # error term, t-distributed with heavy tails for outliers
y <- ye + e
# Define OLS and MAE regression models
#
L1 <- lm(y ~ X[, 1] + X[, 2])
L2 <- quantreg::rq(y ~ X[, 1] + X[, 2], tau = 0.5)
# Save the coefficients to its model's respective matrix
#
M1[i, ] <- summary(L1)$coef[, 1]
M2[i, ] <- summary(L2)$coef[, 1]
print(i)
}
# Evaluate all six coefficient MSE values
#
ols_0 <- mean((b0 - M1[, 1])^2)
ols_1 <- mean((b1 - M1[, 2])^2)
ols_2 <- mean((b2 - M1[, 3])^2)
#
mae_0 <- mean((b0 - M2[, 1])^2)
mae_1 <- mean((b1 - M2[, 2])^2)
mae_2 <- mean((b2 - M2[, 3])^2)
print(paste(
"OLS has intercept MSE of ", ols_0 # I get ~51.2
))
print(paste(
"Quantile regression has intercept MSE of", mae_0 # I get ~0.03
))
print(paste(
"OLS has X1 MSE of ", ols_1 # I get ~203.1
))
print(paste(
"Quantile regression has X1 MSE of", mae_1 # I get ~0.14
))
print(paste(
"OLS has X2 MSE of ", ols_2 # I get ~188.2
))
print(paste(
"Quantile regression has X2 MSE of", mae_2 # I get ~0.15
))
For all three parameters, the OLS regression has a much higher parameter estimate MSE than the MAE-optimizing regression (quantile regression at the median).
No matter what you do, give it context. As I discuss here, there is no universal metric that lets you "grade" a model like we all got or still get (or assign, for those readers who teach) grades in school.
EDIT
(I liked what I wrote in the comments, so I’m adding it to my answer.)
As you can see in my answer, the choice of training loss function does not guarantee a particular result when it comes to out-of-sample performance. Therefore, if you have a reason to be interested in a particular type of out-of-sample performance, go with the model that does the best on that metric. If the model happens to be the model that uses the out-of-sample metric as the training loss function, so be it, but you hardly assure yourself of a particular model winning by deciding that out-of-sample MSE is the metric of interest. Perhaps think of the training loss as a hyperparameter.
In fact, when you tune a regularization hyperparameter to achieve the best out-of-sample performance, what you’re doing is exactly that: treating the training loss function as a hyperparameter you tune in order to achieve your goal of excellent out-of-sample performance, even at the expense of in-sample performance. I find it completely reasonable that this extends to markedly different loss functions like MSE vs MAE (as opposed to MSE vs MSE with the ridge regression penalty added), and my simulation shows that can work the way I suspect it can.
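As a concrete version of that analogy (a sketch I am adding; it assumes the glmnet package and the X_train/y_train objects from the first simulation): cross-validated ridge regression picks its penalty, and hence its effective training loss, purely by out-of-sample MSE.
library(glmnet)
cvfit <- cv.glmnet(X_train, y_train, alpha = 0)  # alpha = 0 gives the ridge penalty
cvfit$lambda.min                                 # penalty strength that won on CV MSE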
What your results show is that, when you pick MSE as the out-of-sample metric of interest, the best-performing model is the one that was trained with in-sample MSE. That’s fine. You didn’t guarantee that result by picking MSE as the out-of-sample metric; it just worked out that way.
EDIT 2
In our chat, you have expressed dissatisfaction with only showing my counterexample through a simulation, rather than with real data. However, it can happen with real data, too.
library(quantreg)
library(ModelMetrics)
set.seed(2022)
data(iris)
N <- dim(iris)[1]
idx <- sample(seq(1, N, 1), N, replace = F)
XY <- iris[idx, ]
x_test <- XY$Petal.Length[1:20]
y_test <- XY$Petal.Width[1:20]
#
x_train <- XY$Petal.Length[21:N]
y_train <- XY$Petal.Width[21:N]
L_ols <- lm(y_train ~ x_train)
L_mae <- quantreg::rq(y_train ~ x_train, tau = 0.5)
pred_ols <- predict(L_ols, data.frame(y_train = y_test, x_train = x_test))
pred_mae <- predict(L_mae, data.frame(y_train = y_test, x_train = x_test))
ModelMetrics::mse(y_test, pred_ols) - ModelMetrics::mse(y_test, pred_mae)
I get that the OLS-based out-of-sample MSE is $0.00149052$ higher than the MAE-trained model's MSE, showing with real data that training on square loss does not guarantee a winning model when it comes to out-of-sample square loss. Therefore, when your colleagues argue that training on square loss guarantees a winner on out-of-sample square loss, they are wrong.
EDIT 3
If you run the same kind of regression but with Sepal instead of Petal, you get that the model trained with OLS outperforms (by $0.005632787$) the model trained on MAE when it comes to out-of-sample MAE!
library(quantreg)
library(ModelMetrics)
set.seed(2022)
data(iris)
N <- dim(iris)[1]
idx <- sample(seq(1, N, 1), N, replace = F)
XY <- iris[idx, ]
x_test <- XY$Sepal.Length[1:20]
y_test <- XY$Sepal.Width[1:20]
#
x_train <- XY$Sepal.Length[21:N]
y_train <- XY$Sepal.Width[21:N]
L_ols <- lm(y_train ~ x_train)
L_mae <- quantreg::rq(y_train ~ x_train, tau = 0.5)
pred_ols <- predict(L_ols, data.frame(y_train = y_test, x_train = x_test))
pred_mae <- predict(L_mae, data.frame(y_train = y_test, x_train = x_test))
ModelMetrics::mae(y_test, pred_ols) - ModelMetrics::mae(y_test, pred_mae)
Now you have examples, using real (not simulated) data, of an MSE-trained model outperforming an MAE-trained model on out-of-sample MAE and of an MAE-trained model outperforming an MSE-trained model on out-of-sample MSE. | A universal measure of the accuracy of linear regression models | You say that you don’t want to use a metric that is going to prefer a particular model, so out-of-sample (R)MSE is out because it will prefer the model that was trained with square loss. Au contraire! | A universal measure of the accuracy of linear regression models
You say that you don’t want to use a metric that is going to prefer a particular model, so out-of-sample (R)MSE is out because it will prefer the model that was trained with square loss. Au contraire! Let’s do a simulation and show a model trained by minimizing square loss having greater out-of-sample square loss than a model trained by minimizing absolute loss.
library(quantreg)
library(MASS)
set.seed(2022)
N <- 100
# Define correlated predictors
#
X <- MASS::mvrnorm(100, c(0, 0), matrix(c(1, 0.9, 0.9, 1), 2, 2))
# Define response variable
#
ye <- 3 - X[, 1] + 2*X[, 2]
e <- rt(N, 1.1) # error term, t-distributed with heavy tails for outliers
y <- ye + e
# Allocate the first 20 observations to testing and the rest to training
#
X_test <- X[1:20, ]
y_test <- y[1:20]
#
X_train <- X[21:N, ]
y_train <- y[21:N]
# Define an OLS linear model
#
x1 <- X_train[, 1]
x2 <- X_train[, 2]
L1 <- lm(y_train ~ x1 + x2)
# Define a linear model trained using MAE
#
L2 <- quantreg::rq(y_train ~ x1 + x2, tau = 0.5)
# Make predictions for the test set using each of the two models
#
p1 <- predict(L1, data.frame(x1 = X_test[, 1], x2 = X_test[, 2]))
p2 <- predict(L2, data.frame(x1 = X_test[, 1], x2 = X_test[, 2]))
# Calculate the MSE for both sets of predictions
#
mse1 <- mean((y_test - p1)^2)
mse2 <- mean((y_test - p2)^2)
print(paste(
"OLS has MSE = ", mse1 # I get ~10.1
))
print(paste(
"Median quantile regression has MSE = ", mse2 # I get ~7.7, so lower
# than the OLS model gave
))
In this setup, which uses correlated features and outliers (from the t-distributed error term), the OLS regression has worse out-of-sample MSE than the MAE-trained regression, showing that out-of-sample MSE need not prefer the model that minimized in-sample MSE.
Consequently, if you believe out-of-sample square loss to be the metric of interest, go with that. Since you are not assured of picking a model that you trained by minimizing square loss, it is worth your time to investigate such models.
It might be surprising for the model trained on the out-of-sample metric to lose to a model trained on a different metric, but it should not be. This is what happens when we apply, for instance, ridge regression. In ridge regression, we minimize a loss function that is slightly different from square loss, hoping that this alternative gives us better out-of-sample performance on regular square loss than the model trained on regular square loss.
For an out-of-sample evaluation metric (perhaps even for in-sample training), you may be interested in relative error metrics. “Mean absolute percent deviation” is the easiest to understand. The Wikipedia page discusses shortcomings of this metric and potential alternatives. Our Stephan Kolassa has a nice discussion about this topic, too.
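As a minimal sketch (my own addition, reusing y_test, p1, and p2 from the simulation above), mean absolute percent error can be computed directly; note it is undefined whenever an observed value is exactly zero, so real data may need more care.
mape <- function(actual, pred) 100 * mean(abs((actual - pred) / actual))
mape(y_test, p1) # OLS-trained model
mape(y_test, p2) # MAE-trained model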
You also mentioned wanting to evaluate the accuracy of coefficient estimates. We can investigate this with another simulation, which will show the MSE-trained model (OLS) to have greater MSE when it comes to coefficient estimates than the MAE-trained model.
library(quantreg)
library(MASS)
library(ggplot2)
set.seed(2022)
N <- 100
R <- 1000
b0 <- 3
b1 <- -1
b2 <- 2
M1 <- M2 <- matrix(NA, R, 3) # Matrices for holding estimated coefficients
for (i in 1:R){
# Define correlated predictors
#
X <- MASS::mvrnorm(100, c(0, 0), matrix(c(1, 0.9, 0.9, 1), 2, 2))
# Define response variable
#
ye <- b0 + b1*X[, 1] + b2*X[, 2]
e <- rt(N, 1.1) # error term, t-distributed with heavy tails for outliers
y <- ye + e
# Define OLS and MAE regression models
#
L1 <- lm(y ~ X[, 1] + X[, 2])
L2 <- quantreg::rq(y ~ X[, 1] + X[, 2], tau = 0.5)
# Save the coefficients to its model's respective matrix
#
M1[i, ] <- summary(L1)$coef[, 1]
M2[i, ] <- summary(L2)$coef[, 1]
print(i)
}
# Evaluate all six coefficient MSE values
#
ols_0 <- mean((b0 - M1[, 1])^2)
ols_1 <- mean((b1 - M1[, 2])^2)
ols_2 <- mean((b2 - M1[, 3])^2)
#
mae_0 <- mean((b0 - M2[, 1])^2)
mae_1 <- mean((b1 - M2[, 2])^2)
mae_2 <- mean((b2 - M2[, 3])^2)
print(paste(
"OLS has intercept MSE of ", ols_0 # I get ~51.2
))
print(paste(
"Quantile regression has intercept MSE of", mae_0 # I get ~0.03
))
print(paste(
"OLS has X1 MSE of ", ols_1 # I get ~203.1
))
print(paste(
"Quantile regression has X1 MSE of", mae_1 # I get ~0.14
))
print(paste(
"OLS has X2 MSE of ", ols_2 # I get ~188.2
))
print(paste(
"Quantile regression has X2 MSE of", mae_2 # I get ~0.15
))
For all three parameters, the OLS regression has a much higher parameter estimate MSE than the MAE-optimizing regression (quantile regression at the median).
No matter what you do, give it context. As I discuss here, there is no universal metric that lets you "grade" a model like we all got or still get (or assign, for those readers who teach) grades in school.
EDIT
(I liked what I wrote in the comments, so I’m adding it to my answer.)
As you can see in my answer, the choice of training loss function does not guarantee a particular result when it comes to out-of-sample performance. Therefore, if you have a reason to be interested in a particular type of out-of-sample performance, go with the model that does the best on that metric. If the model happens to be the model that uses the out-of-sample metric as the training loss function, so be it, but you hardly assure yourself of a particular model winning by deciding that out-of-sample MSE is the metric of interest. Perhaps think of the training loss as a hyperparameter.
In fact, when you tune a regularization hyperparameter to achieve the best out-of-sample performance, what you’re doing is exactly that: treating the training loss function as a hyperparameter you tune in order to achieve your goal of excellent out-of-sample performance, even at the expense of in-sample performance. I find it completely reasonable that this extends to markedly different loss functions like MSE vs MAE (as opposed to MSE vs MSE with the ridge regression penalty added), and my simulation shows that can work the way I suspect it can.
What your results show is that, when you pick MSE as the out-of-sample metric of interest, the best-performing model is the one that was trained with in-sample MSE. That’s fine. You didn’t guarantee that result by picking MSE as the out-of-sample metric; it just worked out that way.
EDIT 2
In our chat, you have expressed dissatisfaction with only showing my counterexample through a simulation, rather than with real data. However, it can happen with real data, too.
library(quantreg)
library(ModelMetrics)
set.seed(2022)
data(iris)
N <- dim(iris)[1]
idx <- sample(seq(1, N, 1), N, replace = F)
XY <- iris[idx, ]
x_test <- XY$Petal.Length[1:20]
y_test <- XY$Petal.Width[1:20]
#
x_train <- XY$Petal.Length[21:N]
y_train <- XY$Petal.Width[21:N]
L_ols <- lm(y_train ~ x_train)
L_mae <- quantreg::rq(y_train ~ x_train, tau = 0.5)
pred_ols <- predict(L_ols, data.frame(y_train = y_test, x_train = x_test))
pred_mae <- predict(L_mae, data.frame(y_train = y_test, x_train = x_test))
ModelMetrics::mse(y_test, pred_ols) - ModelMetrics::mse(y_test, pred_mae)
I get that the OLS-based out-of-sample MSE is $0.00149052$ higher than the MAE-trained model's MSE, showing that training on square loss does not guarantee a winning model when it comes to out-of-sample square loss, even on real data. Therefore, when your colleagues argue that training on square loss guarantees a winner on out-of-sample square loss, they are wrong.
EDIT 3
If you run the same kind of regression but with Sepal instead of Petal, you get that the model trained with OLS outperforms (by $0.005632787$) the model trained on MAE when it comes to out-of-sample MAE!
library(quantreg)
library(ModelMetrics)
set.seed(2022)
data(iris)
N <- dim(iris)[1]
idx <- sample(seq(1, N, 1), N, replace = F)
XY <- iris[idx, ]
x_test <- XY$Sepal.Length[1:20]
y_test <- XY$Sepal.Width[1:20]
#
x_train <- XY$Sepal.Length[21:N]
y_train <- XY$Sepal.Width[21:N]
L_ols <- lm(y_train ~ x_train)
L_mae <- quantreg::rq(y_train ~ x_train, tau = 0.5)
pred_ols <- predict(L_ols, data.frame(y_train = y_test, x_train = x_test))
pred_mae <- predict(L_mae, data.frame(y_train = y_test, x_train = x_test))
ModelMetrics::mae(y_test, pred_ols) - ModelMetrics::mae(y_test, pred_mae)
Now you have examples, using real (not simulated) data, of an MSE-trained model outperforming an MAE-trained model on out-of-sample MAE and of an MAE-trained model outperforming an MSE-trained model on out-of-sample MSE. | A universal measure of the accuracy of linear regression models
You say that you don’t want to use a metric that is going to prefer a particular model, so out-of-sample (R)MSE is out because it will prefer the model that was trained with square loss. Au contraire! |
55,546 | A universal measure of the accuracy of linear regression models | RMSE tells us how far the model residuals are from zero on average, i.e. the average distance between the observed values and the predicted values. However, Willmott et al. suggested that RMSE might be misleading for assessing model performance, since RMSE is a function of the average error and the distribution of squared errors. Chai recommended using both RMSE and mean absolute error (MAE); it is better to report both metrics. By the way, $R^2$ is misleading as well, since it increases with the number of predictors. I would recommend using adjusted $R^2$ instead; it is kind of a gold-standard goodness-of-fit test. To handle multicollinearity, compute the nonparametric Spearman rank-order correlations and drop one variable from any pair whose correlation is close to 1. That will solve the problem. As for outliers, it is not good practice to delete them, because they may hold valuable insights; find the influential outliers and fit the model with and without them to see their impact on the model. Also, don't forget to check all four linear model assumptions. The attached articles give more explanation.
Reference:
http://www.jstor.org/stable/24869236
https://gmd.copernicus.org/articles/7/1247/2014/ | A universal measure of the accuracy of linear regression models | RMSE tells us how far the model residuals are from zero on average, i.e. the average distance between the observed values and the predicate values. However, Willmott et. al. suggested that RMSE might | A universal measure of the accuracy of linear regression models
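As a small illustration of reporting both error metrics alongside adjusted $R^2$ (my own sketch on a built-in dataset, not taken from the cited articles):
fit <- lm(mpg ~ wt + hp, data = mtcars)
res <- residuals(fit)
c(RMSE = sqrt(mean(res^2)),
  MAE = mean(abs(res)),
  adj_R2 = summary(fit)$adj.r.squared)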
RMSE tells us how far the model residuals are from zero on average, i.e. the average distance between the observed values and the predicted values. However, Willmott et al. suggested that RMSE might be misleading for assessing model performance, since RMSE is a function of the average error and the distribution of squared errors. Chai recommended using both RMSE and mean absolute error (MAE); it is better to report both metrics. By the way, $R^2$ is misleading as well, since it increases with the number of predictors. I would recommend using adjusted $R^2$ instead; it is kind of a gold-standard goodness-of-fit test. To handle multicollinearity, compute the nonparametric Spearman rank-order correlations and drop one variable from any pair whose correlation is close to 1. That will solve the problem. As for outliers, it is not good practice to delete them, because they may hold valuable insights; find the influential outliers and fit the model with and without them to see their impact on the model. Also, don't forget to check all four linear model assumptions. The attached articles give more explanation.
Reference:
http://www.jstor.org/stable/24869236
https://gmd.copernicus.org/articles/7/1247/2014/ | A universal measure of the accuracy of linear regression models
RMSE tells us how far the model residuals are from zero on average, i.e. the average distance between the observed values and the predicate values. However, Willmott et. al. suggested that RMSE might |
55,547 | Closed form for a markov chain, where transition probabilities depend on $n$? | Let $a_{n+1} = \log(1 - \Pr(A_{n+1}))$ for $n=d, d+1, \ldots.$ In these terms the recurrence reads
$$\begin{aligned}
a_{n+1} &= \log(1 - [\Pr(A_n) + 2^{-n}(1 - \Pr(A_n))])\\
&= \log((1-2^{-n})(1 - \Pr(A_n)))\\
&= \log(1 - 2^{-n}) + \log(1-\Pr(A_n)) \\
&= \log(1 - 2^{-n}) + a_n
\end{aligned}$$
with initial condition $a_{d+1} = \log(1 - 2^{-d}).$
An easy induction now gives
$$\begin{aligned}
a_{n+d} &= \sum_{i=1}^{n-1} \log\left(1 - 2^{-(d+i)}\right) + a_{d+1}\\
&= \sum_{i=0}^{n-1} \log\left(1 - 2^{-(d+i)}\right).
\end{aligned}$$
Returning to the original formulation,
$$\Pr(A_{n+d}) = 1 - \exp(a_{n+d}) = 1 - \prod_{i=0}^{n-1} \left(1 - q^{d+i}\right) = 1 - \prod_{k=d}^{n+d-1} \left(1 - q^{k}\right)$$
where $q=1/2.$ Using the conventional notation for q-numbers
$$[k]_q = \frac{1 - q^k}{1-q} = 1 + q + q^2+ \cdots + q^{k-1}$$
we may write this as
$$\Pr(A_{n+d}) = 1 - 2^{-n}\prod_{k=d}^{n+d-1} [k]_{1/2} = 1 - 2^{-n}\binom{n+d-1}{n}_{1/2}\,[n]_{1/2}!$$
where the $q$ analogs to the Binomial coefficients and factorials are the Gaussian Binomial coefficients
$$\binom{m}{r}_q = \frac{[m]_q [m-1]_q \cdots [m-r+1]_q}{[r]_q [r-1]_q \cdots [2]_q [1]_q}$$
and their related factorials
$$[k]_q! = [k]_q [k-1]_q \cdots [2]_q [1]_q.$$
I believe this won't simplify further, but clearly the product converges rapidly and therefore is straightforward to compute and analyze. In particular, these $q$-numbers have combinatorial interpretations (closely related to sampling without replacement) and enjoy many beautiful mathematical properties that can help with that analysis. | Closed form for a markov chain, where transition probabilities depend on $n$? | Let $a_{n+1} = \log(1 - \Pr(A_{n+1}))$ for $n=d, d+1, \ldots.$ In these terms the recurrence reads
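A quick numerical check of the closed form against the recurrence, sketched in R (the choice d = 3 is arbitrary, purely for illustration):
d <- 3
n_max <- 20
p <- numeric(d + n_max)
p[d + 1] <- 2^(-d) # initial condition Pr(A_{d+1}) = 2^{-d}
for (n in (d + 1):(d + n_max - 1)) p[n + 1] <- p[n] + 2^(-n) * (1 - p[n])
closed <- function(m) 1 - prod(1 - 2^(-(d:(m - 1)))) # product form of Pr(A_m)
max(abs(p[(d + 1):(d + n_max)] - sapply((d + 1):(d + n_max), closed))) # ~1e-16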
$$\begin{aligned}
a_{n+1} &= \log(1 - [\Pr(A_n) + 2^{-n}(1 - \Pr(A_n))])\\
&= \log((1-2^{-n})(1 - \P | Closed form for a markov chain, where transition probabilities depend on $n$?
Let $a_{n+1} = \log(1 - \Pr(A_{n+1}))$ for $n=d, d+1, \ldots.$ In these terms the recurrence reads
$$\begin{aligned}
a_{n+1} &= \log(1 - [\Pr(A_n) + 2^{-n}(1 - \Pr(A_n))])\\
&= \log((1-2^{-n})(1 - \Pr(A_n)))\\
&= \log(1 - 2^{-n}) + \log(1-\Pr(A_n)) \\
&= \log(1 - 2^{-n}) + a_n
\end{aligned}$$
with initial condition $a_{d+1} = \log(1 - 2^{-d}).$
An easy induction now gives
$$\begin{aligned}
a_{n+d} &= \sum_{i=1}^{n-1} \log\left(1 - 2^{-(d+i)}\right) + a_{d+1}\\
&= \sum_{i=0}^{n-1} \log\left(1 - 2^{-(d+i)}\right).
\end{aligned}$$
Returning to the original formulation,
$$\Pr(A_{n+d}) = 1 - \exp(a_{n+d}) = 1 - \prod_{i=0}^{n-1} \left(1 - q^{d+i}\right) = 1 - \prod_{k=d}^{n+d-1} \left(1 - q^{k}\right)$$
where $q=1/2.$ Using the conventional notation for q-numbers
$$[k]_q = \frac{1 - q^k}{1-q} = 1 + q + q^2+ \cdots + q^{k-1}$$
we may write this as
$$\Pr(A_{n+d}) = 1 - 2^{-n}\prod_{k=d}^{n+d-1} [k]_{1/2} = 1 - 2^{-n}\binom{n+d-1}{n}_{1/2}\,[n]_{1/2}!$$
where the $q$ analogs to the Binomial coefficients and factorials are the Gaussian Binomial coefficients
$$\binom{m}{r}_q = \frac{[m]_q [m-1]_q \cdots [m-r+1]_q}{[r]_q [r-1]_q \cdots [2]_q [1]_q}$$
and their related factorials
$$[k]_q! = [k]_q [k-1]_q \cdots [2]_q [1]_q.$$
I believe this won't simplify further, but clearly the product converges rapidly and therefore is straightforward to compute and analyze. In particular, these $q$-numbers have combinatorial interpretations (closely related to sampling without replacement) and enjoy many beautiful mathematical properties that can help with that analysis. | Closed form for a markov chain, where transition probabilities depend on $n$?
Let $a_{n+1} = \log(1 - \Pr(A_{n+1}))$ for $n=d, d+1, \ldots.$ In these terms the recurrence reads
$$\begin{aligned}
a_{n+1} &= \log(1 - [\Pr(A_n) + 2^{-n}(1 - \Pr(A_n))])\\
&= \log((1-2^{-n})(1 - \P |
55,548 | Arguing against statistical power | IIUC, you interaction term is a hidden confounder? In this case, indeed you should compute your effects conditioned on that confounder. The average causal effect would then be computed with the standard formula for backdoor adjustment. | Arguing against statistical power | IIUC, you interaction term is a hidden confounder? In this case, indeed you should compute your effects conditioned on that confounder. The average causal effect would then be computed with the standa | Arguing against statistical power
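For reference, the backdoor adjustment formula in its generic form (written here in general notation, not tied to the specifics of this particular model) is
$$P(Y \mid \operatorname{do}(T = t)) = \sum_{z} P(Y \mid T = t, Z = z)\, P(Z = z),$$
where $Z$ stands for the confounder(s) you condition on.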
IIUC, your interaction term is a hidden confounder? In this case, indeed you should compute your effects conditioned on that confounder. The average causal effect would then be computed with the standard formula for backdoor adjustment. | Arguing against statistical power
IIUC, you interaction term is a hidden confounder? In this case, indeed you should compute your effects conditioned on that confounder. The average causal effect would then be computed with the standa |
55,549 | Arguing against statistical power | If you have two populations, it doesn't make sense to model one mean.
I think you should model the difference as a fixed effect (i.e. as a covariate). This means you estimate two means, but gain the power from having one model - best of both worlds? | Arguing against statistical power | If you have two populations, it doesn't make sense to model one mean.
I think you should model the difference as a fixed effect (i.e. as a covariate). This means you estimate two means, but gain the p | Arguing against statistical power
If you have two populations, it doesn't make sense to model one mean.
I think you should model the difference as a fixed effect (i.e. as a covariate). This means you estimate two means, but gain the power from having one model - best of both worlds? | Arguing against statistical power
If you have two populations, it doesn't make sense to model one mean.
I think you should model the difference as a fixed effect (i.e. as a covariate). This means you estimate two means, but gain the p |
55,550 | When not to use the elastic net penalty in regression? | I am not aware of any practical situation where Ridge or Lasso are preferable to Elastic Net. The large-sample (asymptotic) theory for Ridge and Lasso seems to be better developed, so people may use them when they develop theory or if they want theoretical guarantees on the performance of their method.
As OP is already aware, Ridge and Lasso are a special case of Elastic Net. Elastic Net minimizes the function
$$ \hat{\beta} \equiv \underset{\beta}{\operatorname{argmin}}\left(\|y-X \beta\|^{2}+\lambda_{2}\|\beta\|^{2}+\lambda_{1}\|\beta\|_{1}\right).$$
When $\lambda_1 = 0$ and $\lambda_2 > 0$, Elastic Net becomes Ridge regression, and when $\lambda_2 = 0$ and $\lambda_1 > 0$, it becomes Lasso regression. Generally, we select the best values of $\lambda_1$ and $\lambda_2$ using cross validation. If Ridge or Lasso were to outperform Elastic Net in a particular case, cross validation would choose a $\lambda_1$ or $\lambda_2$ that reduces the model to Ridge or Lasso.
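A hedged sketch of this in practice with the glmnet package, where alpha = 0 corresponds to Ridge, alpha = 1 to Lasso, and intermediate values to Elastic Net; the data here are simulated purely for illustration:
library(glmnet)
set.seed(1)
n <- 200; p <- 20
X <- matrix(rnorm(n * p), n, p)
y <- X[, 1] - 2 * X[, 2] + rnorm(n)
cv_error <- sapply(c(0, 0.5, 1), function(a) min(cv.glmnet(X, y, alpha = a)$cvm))
names(cv_error) <- c("ridge", "elastic net (alpha = 0.5)", "lasso")
cv_error # pick the penalty mix with the lowest cross-validated error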
In the special case when the design matrix is orthonormal, Zou and Hastie give the closed form solution of each coefficient $\beta_i$ estimated by Elastic Net, Lasso, and Ridge:
$$\text{Elastic Net:} \enspace \hat{\beta}_{i} =\frac{\left(\mid \hat{\beta}_{i}(\text { OLS }) \mid-\lambda_{1} / 2\right)_{+}}{1+\lambda_{2}} \operatorname{sgn}\left\{\hat{\beta}_{i}(\text { OLS })\right\}$$
$$\text{Lasso:} \enspace \hat{\beta}_{i} \text { (lasso) }=\left(\mid \hat{\beta}_{i}(\text { OLS }) \mid-\lambda_{1} / 2\right)_{+} \operatorname{sgn}\left\{\hat{\beta}_{i}(\text { OLS })\right\}$$
$$\text{Ridge:} \enspace \hat{\boldsymbol{\beta}} \text { (ridge) }=\hat{\boldsymbol{\beta}} \text { (OLS) } /\left(1+\lambda_{2}\right)$$
Zou and Hastie match up terms and plot the solution paths of each estimator to explain the behavior of Elastic Net. They state "elastic net can be viewed as a two-stage procedure: a ridge-type direct shrinkage followed by a lasso-type thresholding." This further supports the idea of Elastic Net always outperforming Ridge/Lasso. If either direct shrinkage or thresholding isn't necessary, you can simply omit that step with Elastic Net.
The classic example of Ridge outperforming Lasso is when you have many correlated predictors, but this does not appear to negatively impact Elastic Net. In the same paper, Zou and Hastie examine the problem of correlated predictors analytically and through simulations. They find that the predictive performance of Elastic Net isn't negatively impacted in the same way Lasso is by correlated predictors.
The only benefit of Ridge and Lasso over Elastic Net I'm aware of is that they can be fit faster than Elastic Net. Ridge and Lasso only have a single tuning parameter while Elastic Net has two tuning parameters. However, Elastic Net can be fit quickly with existing software so this may not be a meaningful increase in computation time.
There may be a pathological situation where Elastic Net performs worse than Ridge and Lasso. The only way this could happen is if something in the data causes a poor selection of $\lambda_1$ and $\lambda_2$.
EDIT: User @richard-hardy points out in a comment that Lasso/Ridge vs Elastic Net can be interpreted as a bias-variance tradeoff. The additional parameter in Elastic Net increases the variance of the model relative to Lasso and Ridge. That is, Elastic Net is more likely to overfit than Ridge or Lasso. Richard and I both suspect this isn't an issue in practice, and increased variance doesn't dominate the reduced bias of Elastic Net. | When not to use the elastic net penalty in regression? | I am not aware of any practical situation where Ridge or Lasso are preferable to Elastic Net. The large-sample (asymptotic) theory for Ridge and Lasso seem to be better developed, so people may use th | When not to use the elastic net penalty in regression?
I am not aware of any practical situation where Ridge or Lasso are preferable to Elastic Net. The large-sample (asymptotic) theory for Ridge and Lasso seem to be better developed, so people may use them when they develop theory or if they want theoretical guarantees on the performance of their method.
As OP is already aware, Ridge and Lasso are a special case of Elastic Net. Elastic Net minimizes the function
$$ \hat{\beta} \equiv \underset{\beta}{\operatorname{argmin}}\left(\|y-X \beta\|^{2}+\lambda_{2}\|\beta\|^{2}+\lambda_{1}\|\beta\|_{1}\right).$$
When $\lambda_1 = 0$ and $\lambda_2 > 0$, Elastic Net becomes Ridge regression, and when $\lambda_2 = 0$ and $\lambda_1 > 0$, it becomes Lasso regression. Generally, we select the best values of $\lambda_1$ and $\lambda_2$ using cross validation. If Ridge or Lasso were to outperform Elastic Net in a particular case, cross validation would choose a $\lambda_1$ or $\lambda_2$ that reduces the model to Ridge or Lasso.
In the special case when the design matrix is orthonormal, Zhou and Hastie give the closed form solution of each coefficient $\beta_i$ estimated by Elastic Net, Lasso, and Ridge:
$$\text{Elastic Net:} \enspace \hat{\beta}_{i} =\frac{\left(\mid \hat{\beta}_{i}(\text { OLS }) \mid-\lambda_{1} / 2\right)_{+}}{1+\lambda_{2}} \operatorname{sgn}\left\{\hat{\beta}_{i}(\text { OLS })\right\}$$
$$\text{Lasso:} \enspace \hat{\beta}_{i} \text { (lasso) }=\left(\mid \hat{\beta}_{i}(\text { OLS }) \mid-\lambda_{1} / 2\right)_{+} \operatorname{sgn}\left\{\hat{\beta}_{i}(\text { OLS })\right\}$$
$$\text{Ridge:} \enspace \hat{\boldsymbol{\beta}} \text { (ridge) }=\hat{\boldsymbol{\beta}} \text { (OLS) } /\left(1+\lambda_{2}\right)$$
Zhou and Hastie match up terms and plot the solution paths of each estimator to explain the behavior of Elastic Net. They state "elastic net can be viewed as a two-stage procedure: a ridge-type direct shrinkage followed by a lasso-type thresholding." This further supports the idea of Elastic Net always outperforming Ridge/Lasso. If either direct shrinkage or thresholding isn't necessary, you can simply omit this step with Elastic Net
The classic example of Ridge outperforming Lasso is when you have many correlated predictors. but this does not appear to negatively impact Elastic Net. In the same paper, Zhou and Hastie examine the problem of correlated predictors analytically and through simulations. They find that the predictive performance of Elastic Net isn't negatively impacted in the same way Lasso is by correlated predictors.
The only benefit of Ridge and Lasso over Elastic Net I'm aware of is that they can be fit faster than Elastic Net. Ridge and Lasso only have a single tuning parameter while Elastic Net has two tuning parameters. However, Elastic Net can be fit quickly with existing software so this may not be a meaningful increase in computation time.
There may be a pathological situation where Elastic Net performs worse than Ridge and Lasso. The only way this could happen is if something in the data causes a poor selection of $\lambda_1$ and $\lambda_2$.
EDIT: User @richard-hardy points out in a comment that Lasso/Ridge vs Elastic Net can be interpreted as a bias-variance tradeoff. The additional parameter in Elastic Net increases the variance of the model relative to Lasso and Ridge. That is, Elastic Net is more likely to overfit than Ridge or Lasso. Richard and I both suspect this isn't an issue in practice, and increased variance doesn't dominate the reduced bias of Elastic Net. | When not to use the elastic net penalty in regression?
I am not aware of any practical situation where Ridge or Lasso are preferable to Elastic Net. The large-sample (asymptotic) theory for Ridge and Lasso seem to be better developed, so people may use th |
55,551 | Is there a clustering method that allows me to indicate the number of points desired per cluster? | To be specific here, the question is not strictly speaking about clustering (i.e. discovering underlying data structures) but rather about partitioning with general similarity constraints; this task is often referred to as balanced clustering. To help with terminology going forward: what we care about is "cluster cardinality", i.e. the number of elements in the cluster.
An approximate (almost) out-of-the-box solution can be to use $k$-means with some minor modifications; the ELKI data mining software has a great tutorial on how to perform a same-size $k$-means variation, with examples in Java. Without going into too much detail, we initialise our $k$-means variant with $k=\frac{n}{p}$ means/centroids, $p$ being the expected cluster cardinality, then assign up to $p$ elements per cluster and iterate this procedure forward. This is essentially an E-M procedure like the one in "vanilla" $k$-means, but with constraints during the E step (Expectation - label assignment). The linked tutorial presents the whole procedure with great care.
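A rough greedy sketch of one such capacity-constrained assignment pass in R (my own illustration of the idea, not the ELKI or Malinen-Fränti algorithm; it assumes the number of points is an exact multiple of $p$):
balanced_assign <- function(X, centers, p) {
  k <- nrow(centers)
  n <- nrow(X)
  D <- as.matrix(dist(rbind(X, centers)))[1:n, n + 1:k] # point-to-centroid distances
  labels <- integer(n)
  counts <- integer(k)
  for (i in order(apply(D, 1, min))) { # assign the most "certain" points first
    open <- which(counts < p) # clusters that still have free capacity
    j <- open[which.min(D[i, open])]
    labels[i] <- j
    counts[j] <- counts[j] + 1
  }
  labels
}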
The above being said, the formal treatment of the problem is not trivial, there is a somewhat sparse technical work on the subject. Basically we need to reformulate this problem as an optimisation task with certain discrete constraints. For starters I would recommend looking at: Balanced K-Means for Clustering (2014) by Malinen and Fränti and Balanced Clustering: A Uniform Model and Fast Algorithm (2019) by Lin et al. Unfortunately I have not seen any curated Python implementations on any of these papers; you might want to directly reach out to the authors. | Is there a clustering method that allows me to indicate the number of points desired per cluster? | Being some specific here, the question is not strictly speaking about clustering (i.e. discover underlying data structures) but rather for partitioning with general similarity constraints, to that ext | Is there a clustering method that allows me to indicate the number of points desired per cluster?
Being some specific here, the question is not strictly speaking about clustering (i.e. discover underlying data structures) but rather for partitioning with general similarity constraints, to that extent this task is often referred at as balanced clustering. Finally to help one going forward terminology-wise: we care for "cluster cardinality", i.e. the number of elements in the cluster.
An approximate (almost) out-of-the-box solution can be to use $k$-means with some minor modifications; the ELKI data mining software has a great tutorial on how to perform a same-size $k$-means variation, with examples in Java. Without going into too much detail, we initialise our $k$-means variant with $k=\frac{n}{p}$ means/centroids, $p$ being the expected cluster cardinality, then assign up to $p$ elements per cluster and iterate this procedure forward. This is essentially an E-M procedure like the one in "vanilla" $k$-means, but with constraints during the E step (Expectation - label assignment). The linked tutorial presents the whole procedure with great care.
The above being said, the formal treatment of the problem is not trivial, there is a somewhat sparse technical work on the subject. Basically we need to reformulate this problem as an optimisation task with certain discrete constraints. For starters I would recommend looking at: Balanced K-Means for Clustering (2014) by Malinen and Fränti and Balanced Clustering: A Uniform Model and Fast Algorithm (2019) by Lin et al. Unfortunately I have not seen any curated Python implementations on any of these papers; you might want to directly reach out to the authors. | Is there a clustering method that allows me to indicate the number of points desired per cluster?
Being some specific here, the question is not strictly speaking about clustering (i.e. discover underlying data structures) but rather for partitioning with general similarity constraints, to that ext |
55,552 | Is there a clustering method that allows me to indicate the number of points desired per cluster? | @Eyal Shulman's python solution provides a K-means method that allows one to define cluster cardinality. | Is there a clustering method that allows me to indicate the number of points desired per cluster? | @Eyal Shulman's python solution provides a K-means method that allows one to define cluster cardinality. | Is there a clustering method that allows me to indicate the number of points desired per cluster?
@Eyal Shulman's python solution provides a K-means method that allows one to define cluster cardinality. | Is there a clustering method that allows me to indicate the number of points desired per cluster?
@Eyal Shulman's python solution provides a K-means method that allows one to define cluster cardinality. |
55,553 | Understanding Propensity Score Matching | What you described in the text before the images is just "matching". Propensity score matching is one type of matching that uses the difference between two units' propensity scores as the distance between them. There are several other popular ways of computing the distance between them, some of which do not involve the propensity score at all (e.g., Mahalanobis distance matching). The reason the propensity score difference is popular as a distance measure is that there is some theoretical support for its use (described in the original Rosenbaum and Rubin (1983) paper introducing the method) and it tends to work well in practice at creating balanced groups.
Propensity scores can be estimated using any method that produces predicted probabilities, including logistic regression and machine learning methods such as random forests. Other popular methods include gradient boosted trees, lasso logistic regression, and Bayesian additive regression trees. The way to choose which method to use is to see which one produces the best balance (measured broadly) in the matched dataset. See my answer here.
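A hedged sketch of what that looks like in R with MatchIt, using simulated data and a logistic-regression propensity score (the variable names are made up for illustration; MatchIt also accepts a numeric vector of externally estimated scores as the distance):
library(MatchIt)
set.seed(1)
df <- data.frame(x1 = rnorm(500), x2 = rnorm(500))
df$treat <- rbinom(500, 1, plogis(0.5 * df$x1 - 0.5 * df$x2))
ps <- glm(treat ~ x1 + x2, data = df, family = binomial)$fitted.values
m <- matchit(treat ~ x1 + x2, data = df, distance = ps)
summary(m) # inspect standardized mean differences to judge balance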
I highly encourage you to read some introductory papers on propensity score methods. I think Austin (2011) is an excellent start. If you're interested specifically in matching (as there are other ways to use the propensity score), Stuart (2010) is another excellent introduction. I also encourage you to read the MatchIt vignettes to see how matching can be done in practice. I've also written extensively about propensity scores and matching so you are welcome to peruse my contributions to the tag. | Understanding Propensity Score Matching | What you described in the text before the images is just "matching". Propensity score matching is one type of matching that uses the difference between two units' propensity scores as the distance bet | Understanding Propensity Score Matching
What you described in the text before the images is just "matching". Propensity score matching is one type of matching that uses the difference between two units' propensity scores as the distance between them. There are several other popular ways of computing the distance between them, some of which do not involve the propensity score at all (e.g., Mahalanobis distance matching). The reason the propensity score difference is popular as a distance measure is that there is some theoretical support for its use (described in the original Rosenbaum and Rubin (1983) paper introducing the method) and it tends to work well in practice at creating balanced groups.
Propensity scores can be estimated using any method that produces predicted probabilities, including logistic regression and machine learning methods such as random forests. Other popular methods include gradient boosted trees, lasso logistic regression, and Bayesian additive regression trees. The way to choose which method to use is to see which one produces the best balance (measured broadly) in the matched dataset. See my answer here.
I highly encourage you to read some introductory papers on propensity score methods. I think Austin (2011) is an excellent start. If you're interested specifically in matching (as there are other ways to use the propensity score), Stuart (2010) is another excellent introduction. I also encourage you to read the MatchIt vignettes to see how matching can be done in practice. I've also written extensively about propensity scores and matching so you are welcome to peruse my contributions to the tag. | Understanding Propensity Score Matching
What you described in the text before the images is just "matching". Propensity score matching is one type of matching that uses the difference between two units' propensity scores as the distance bet |
55,554 | Understanding Propensity Score Matching | My opinion is that adjusting by the PS is not a good idea. It is true that a circumstance under which a linear estimate will not change in expectation is if a covariate that is fitted is balanced between the two groups. However
It is not the only circumstance under which the estimate will not change in expectation. The other is that the covariate is not predictive.
It is not true that inference will not change. The standard error will be expected to change even if the covariate is balanced if it is predictive. For example the estimate from a matched pair design will be the same whether or not you fit 'pair' in the model but the inference will be quite different.
Putting these two together, it seems logical to choose covariates on which to condition not because they are predictive of assignment (PS) but because they are predictive of outcome (ANCOVA).
Also I would be wary of following the causal inference teaching on this. Some who are involved in this don't seem to care about standard errors. In my opinion this is a fundamental mistake; see The importance of knowing how much you don't know.
See
Senn SJ, Graf E, Caputo A. Stratification for the propensity score compared with linear regression techniques to assess the effect of treatment on exposure. Research. Statistics in Medicine. Dec 3 2007;26(30):5529-5544. | Understanding Propensity Score Matching | My opinion is that adjusting by the PS is not a good ide. It is true that a circumstance under which a linear estimate will not change in expectation is if a covariate that is fitted is balanced betwe | Understanding Propensity Score Matching
My opinion is that adjusting by the PS is not a good idea. It is true that a circumstance under which a linear estimate will not change in expectation is if a covariate that is fitted is balanced between the two groups. However
It is not the only circumstance under which the estimate will not change in expectation. The other is that the covariate is not predictive.
It is not true that inference will not change. The standard error will be expected to change even if the covariate is balanced if it is predictive. For example the estimate from a matched pair design will be the same whether or not you fit 'pair' in the model but the inference will be quite different.
Putting these two together, it seems logical to choose covariates on which to condition not because they are predictive of assignment (PS) but because they are predictive of outcome (ANCOVA).
Also I would be wary of following the causal inference teaching on this. Some who are involved in this don't seem to care about standard errors. In my opinion this is a fundamental mistake.The importance of knowing how much you don't know
See
Senn SJ, Graf E, Caputo A. Stratification for the propensity score compared with linear regression techniques to assess the effect of treatment on exposure. Research. Statistics in Medicine. Dec 3 2007;26(30):5529-5544. | Understanding Propensity Score Matching
My opinion is that adjusting by the PS is not a good ide. It is true that a circumstance under which a linear estimate will not change in expectation is if a covariate that is fitted is balanced betwe |
55,555 | Understanding Propensity Score Matching | Due to the Propensity Score Theorem, the propensity score can serve as a dimension reduction - especially when you have many covariates, where matching directly on the covariates is not easily feasible. The propensity score has the additional advantage of matching only on covariates that actually determine selection - in a standard matching on all covariates, each covariate would receive the same attention whether or not it actually contributed to the selection bias.
The reason why logistic regression is chosen is because it yields unbiased estimates of the propensity score. You can use other ML techniques, but these require modification to remove biases (I recommend https://docs.doubleml.org/stable/guide/basics.html and the associated paper for background details) | Understanding Propensity Score Matching | Due to the Propensity Score Theorem, the propensity score can serve as a dimension reduction - especially if you have many covariates, matching based on covariates is not easily feasible. The propensi | Understanding Propensity Score Matching
Due to the Propensity Score Theorem, the propensity score can serve as a dimension reduction - especially when you have many covariates, where matching directly on the covariates is not easily feasible. The propensity score has the additional advantage of matching only on covariates that actually determine selection - in a standard matching on all covariates, each covariate would receive the same attention whether or not it actually contributed to the selection bias.
The reason why logistic regression is chosen is because it yields unbiased estimates of the propensity score. You can use other ML techniques, but these require modification to remove biases (I recommend https://docs.doubleml.org/stable/guide/basics.html and the associated paper for background details) | Understanding Propensity Score Matching
Due to the Propensity Score Theorem, the propensity score can serve as a dimension reduction - especially if you have many covariates, matching based on covariates is not easily feasible. The propensi |
55,556 | What is the name of this kind of smoothing? | This technique is called kernel regression: https://en.wikipedia.org/wiki/Kernel_regression . I believe your variant is Nadaraya–Watson kernel regression with a Gaussian kernel. | What is the name of this kind of smoothing? | This technique is called kernel regression: https://en.wikipedia.org/wiki/Kernel_regression . I believe your variant is Nadaraya–Watson kernel regression with a Gaussian kernel. | What is the name of this kind of smoothing?
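A minimal sketch of a Nadaraya-Watson smoother with a Gaussian kernel in R (my own illustration, not a reference implementation; the bandwidth h has to be chosen, e.g. by cross-validation):
nw_smooth <- function(x0, x, y, h) {
  sapply(x0, function(t) {
    w <- dnorm((x - t) / h) # Gaussian weights centered at t
    sum(w * y) / sum(w) # weighted average of the responses
  })
}
set.seed(1)
x <- runif(200, 0, 10)
y <- sin(x) + rnorm(200, sd = 0.3)
grid <- seq(0, 10, length.out = 100)
plot(x, y)
lines(grid, nw_smooth(grid, x, y, h = 0.5), col = "red")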
This technique is called kernel regression: https://en.wikipedia.org/wiki/Kernel_regression . I believe your variant is Nadaraya–Watson kernel regression with a Gaussian kernel. | What is the name of this kind of smoothing?
This technique is called kernel regression: https://en.wikipedia.org/wiki/Kernel_regression . I believe your variant is Nadaraya–Watson kernel regression with a Gaussian kernel. |
55,557 | What is the purpose of Add & Norm layers in Transformers? | Add & Norm are in fact two separate steps. The add step is a residual connection
It means that we sum the output of a layer with its input, $\mathcal{F}(\mathbf{x}) + \mathbf{x}$. The idea was introduced by He et al. (2016) with the ResNet model, and it is one of the solutions to the vanishing gradient problem.
The norm step is layer normalization (Ba et al., 2016), another way of normalizing activations. TL;DR it is one of the many computational tricks that make training the model easier, improving performance and training time.
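A tiny numeric sketch of the two steps on a single token vector (my own illustration; a real Transformer layer norm also has learned gain and bias parameters, omitted here):
layer_norm <- function(v, eps = 1e-6) (v - mean(v)) / sqrt(mean((v - mean(v))^2) + eps)
add_and_norm <- function(x, sublayer_out) layer_norm(x + sublayer_out) # Add, then Norm
x <- rnorm(8) # token representation entering the sub-layer
fx <- rnorm(8) # stand-in for the sub-layer output F(x)
add_and_norm(x, fx)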
You can find more details on the Transformer model in the great The Annotated Transformer blog post that explains the paper in-depth and illustrates it with code. | What is the purpose of Add & Norm layers in Transformers? | Add & Norm are in fact two separate steps. The add step is a residual connection
It means that we take sum together the output of a layer with the input $\mathcal{F}(\mathbf{x}) + \mathbf{x}$. The id | What is the purpose of Add & Norm layers in Transformers?
Add & Norm are in fact two separate steps. The add step is a residual connection
It means that we sum the output of a layer with its input, $\mathcal{F}(\mathbf{x}) + \mathbf{x}$. The idea was introduced by He et al. (2016) with the ResNet model, and it is one of the solutions to the vanishing gradient problem.
The norm step is layer normalization (Ba et al., 2016), another way of normalizing activations. TL;DR it is one of the many computational tricks that make training the model easier, improving performance and training time.
You can find more details on the Transformer model in the great The Annotated Transformer blog post that explains the paper in-depth and illustrates it with code. | What is the purpose of Add & Norm layers in Transformers?
Add & Norm are in fact two separate steps. The add step is a residual connection
It means that we take sum together the output of a layer with the input $\mathcal{F}(\mathbf{x}) + \mathbf{x}$. The id |
55,558 | Why does propensity score matching fail to estimate the true causal effect when OLS works? | As @CloseToC mentioned in the comments, this is because you have a nearly pathological data scenario here. There are a few things that make this scenario "unfair" to matching (i.e., not suitable for matching but well suited for regression). The greatest is that there is essentially no overlap in the propensity score distribution. This is a plot of the true propensity scores between the treatment groups:
There is no way matching, which relies on units of different groups having similar propensity scores, could ever hope to estimate the effect correctly in any population. Using the estimated propensity scores, the story is not much better, and the propensity scores are estimated essentially incorrectly because the distribution is not as extreme as that of the true propensity scores:
There is still a significant lack of overlap. When you perform standard matching (with replacement, as in Matching), almost every treated unit is matched to the very few control units with estimated propensity scores close to 1. Indeed, the effective sample size (ESS) of the control group after matching for the ATT is less than 4 (out of an original control sample of 2477). If we look at covariate balance after matching for the ATT, we see significant imbalance remaining in the covariates:
> cobalt::bal.tab(df[c("E0", "V", "Y0")], treat = Tr,
weights = cobalt::get.w(rr), un = T,
method = "m")
Balance Measures
Type Diff.Un Diff.Adj
E0 Contin. 1.3866 -0.0876
V Contin. 0.5777 0.1847
Y0 Contin. 2.0520 0.3426
Sample sizes
Control Treated
All 2477. 2523
Matched (ESS) 3.45 2523
Matched (Unweighted) 928. 2523
Unmatched 1549. 0
Let's use a matching method that is actually equipped to deal with poor overlap: matching with a caliper. When we set a very small caliper, so only treated units with control units within their caliper widths are matched and the rest are dropped, we actually get good balance:
> m <- matchit(Dbin ~ Y0 + V + E0, data = df, caliper = .01)
> cobalt::bal.tab(m)
Call
matchit(formula = Dbin ~ Y0 + V + E0, data = df, caliper = 0.01)
Balance Measures
Type Diff.Adj
distance Distance 0.0071
Y0 Contin. 0.0175
V Contin. -0.0306
E0 Contin. -0.0048
Sample sizes
Control Treated
All 2477 2523
Matched 437 437
Unmatched 2040 2086
And when estimating the treatment effect in this sample, we actually get the right answer (because in this case the treatment effect is constant):
> summary(lm(Y1 ~ Dbin, data = match.data(m)))
Call:
lm(formula = Y1 ~ Dbin, data = match.data(m))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.2705 0.5619 0.481 0.63
Dbin -9.0438 0.7946 -11.381 <2e-16 ***
---
See also this question, which is almost identical and for which the solution is the same. I will reiterate what I said in that post: treating propensity score matching as a blunt instrument you can just apply without looking at the data is the wrong way to use it. You need to tailor the matching method to the data scenario at hand. By looking at the data, we could see that we were in a low-overlap scenario, so methods that explicitly deal with low overlap should be used. In this case, using caliper matching was successful, but other matching methods like coarsened exact matching (CEM) and cardinality matching, both available in MatchIt, are able to return the correct answer. Methods not well suited for low overlap, like nearest neighbor matching without a caliper or full matching, will not be successful, as you witnessed. | Why does propensity score matching fail to estimate the true causal effect when OLS works? | As @CloseToC mentioned in the comments, this is because you have a nearly pathological data scenario here. There are a few things that make this scenario "unfair" to matching (i.e., not suitable for m | Why does propensity score matching fail to estimate the true causal effect when OLS works?
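As a sketch of one of those alternatives, coarsened exact matching can be requested directly from MatchIt with the same variables used above (results will depend on the coarsening MatchIt chooses by default):
m_cem <- matchit(Dbin ~ Y0 + V + E0, data = df, method = "cem")
cobalt::bal.tab(m_cem)
summary(lm(Y1 ~ Dbin, data = match.data(m_cem)))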
As @CloseToC mentioned in the comments, this is because you have a nearly pathological data scenario here. There are a few things that make this scenario "unfair" to matching (i.e., not suitable for matching but well suited for regression). The greatest is that there is essentially no overlap in the propensity score distribution. This is a plot of the true propensity scores between the treatment groups:
There is no way matching, which relies on units of different groups having similar propensity scores, could ever hope to estimate the effect correctly in any population. Using the estimated propensity scores, the story is not much better, and the propensity scores are estimated essentially incorrectly because the distribution is not as extreme as that of the true propensity scores:
There is still a significant lack of overlap. When you perform standard matching (with replacement, as in Matching), almost every treated unit is matched to the very few control units with estimated propensity scores close to 1. Indeed, the effective sample size (ESS) of the control group after matching for the ATT is less than 4 (out of an original control sample of 2477). If we look at covariate balance after matching for the ATT, we see significant imbalance remaining in the covariates:
> cobalt::bal.tab(df[c("E0", "V", "Y0")], treat = Tr,
weights = cobalt::get.w(rr), un = T,
method = "m")
Balance Measures
Type Diff.Un Diff.Adj
E0 Contin. 1.3866 -0.0876
V Contin. 0.5777 0.1847
Y0 Contin. 2.0520 0.3426
Sample sizes
Control Treated
All 2477. 2523
Matched (ESS) 3.45 2523
Matched (Unweighted) 928. 2523
Unmatched 1549. 0
Let's use a matching method that is actually equipped to deal with poor overlap: matching with a caliper. When we set a very small caliper, so only treated units with control units within their caliper widths are matched and the rest are dropped, we actually get good balance:
> m <- matchit(Dbin ~ Y0 + V + E0, data = df, caliper = .01)
> cobalt::bal.tab(m)
Call
matchit(formula = Dbin ~ Y0 + V + E0, data = df, caliper = 0.01)
Balance Measures
Type Diff.Adj
distance Distance 0.0071
Y0 Contin. 0.0175
V Contin. -0.0306
E0 Contin. -0.0048
Sample sizes
Control Treated
All 2477 2523
Matched 437 437
Unmatched 2040 2086
And when estimating the treatment effect in this sample, we actually get the right answer (because in this case the treatment effect is constant):
> summary(lm(Y1 ~ Dbin, data = match.data(m)))
Call:
lm(formula = Y1 ~ Dbin, data = match.data(m))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.2705 0.5619 0.481 0.63
Dbin -9.0438 0.7946 -11.381 <2e-16 ***
---
See also this question, which is almost identical and for which the solution is the same. I will reiterate what I said in that post: treating propensity score matching as a blunt instrument you can just apply without looking at the data is the wrong way to use it. You need to tailor the matching method to the data scenario at hand. By looking at the data, we could see that we were in a low-overlap scenario, so methods that explicitly deal with low overlap should be used. In this case, using caliper matching was successful, but other matching methods like coarsened exact matching (CEM) and cardinality matching, both available in MatchIt, are able to return the correct answer. Methods not well suited for low overlap, like nearest neighbor matching without a caliper or full matching, will not be successful, as you witnessed. | Why does propensity score matching fail to estimate the true causal effect when OLS works?
As @CloseToC mentioned in the comments, this is because you have a nearly pathological data scenario here. There are a few things that make this scenario "unfair" to matching (i.e., not suitable for m |
55,559 | Best way to compare two treatment groups to a control | Your variable group is a factor with three levels, control, Fertilizer_A, Fertilizer_B, and control is used as the reference (or baseline) level, so its implied coefficient is zero. See What to do in a multinomial logistic regression when all levels of DV are of interest?.
The coefficients for the two non-reference levels, Fertilizer_A and Fertilizer_B, are in fact contrasts comparing those levels to the control. You say:
The only significant result is the control.
and I do not understand how you come to that conclusion. The last line in your output
F-statistic: 4.846 on 2 and 27 DF, p-value: 0.01591
represents a comparison between your model and the intercept-only model, which is the null hypothesis model representing the null that the fertilizer treatment has no effect. That hypothesis test has the p-value
0.01591, which is often considered significant; for instance, at the conventional 5% level you can reject the null that the fertilizer treatment has no effect.
Edit
The two individual p-values in the output table test two separate contrasts, comparing A with control and comparing B with control. That neither of those is significant at the conventional 5% level, while the overall test is significant, only says that we have enough information to conclude that fertilizer use is different from control, but not sufficient information to say which fertilizer is different from control. This might be seen as a paradox, but it is not: an equivalent example is that the police might be able to prove that either A or B murdered the victim, but not which of them.
In this case, from the point estimates, A seems worse than control while B seems better. Detailed interpretation will depend on knowledge of the fertilizers used etc., but might also indicate the need for further replicate experiments to give a better conclusion. Note that the conclusion from the overall test (which is two-sided) is only that fertilizer is different from control, not better than it.
As for comparing the two treatments, that is also a contrast, and all contrasts can be tested. In the following I will abbreviate the levels with A, B, C. The two printed contrasts are A-C and B-C; note that we can write A-B as A-C - (B-C), so from the output you can find the point estimate as A-B = -0.3710 - 0.4940, but to calculate the t-test you also need the standard error, which requires further information. You can calculate it yourself using the covariance matrix of the coefficient vector, which you get with the call vcov(plants_lm), or you can use a function like car::linearHypothesis to do it. For another way see the code at Categorical variable coding to compare all levels to all levels | Best way to compare two treatment groups to a control | Your variable group is a factor with three levels, control, Fertilizer_A, Fertilizer_B, and control is used as reference )or baseline) level, so its implied coefficient is zero. See What to do in a mu | Best way to compare two treatment groups to a control
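A sketch of that calculation in R, assuming the default dummy coding so that the coefficients are named groupFertilizer_A and groupFertilizer_B (adjust the names to match your own summary output):
cf <- coef(plants_lm)
V <- vcov(plants_lm)
est <- cf["groupFertilizer_A"] - cf["groupFertilizer_B"]
se <- sqrt(V["groupFertilizer_A", "groupFertilizer_A"] +
           V["groupFertilizer_B", "groupFertilizer_B"] -
           2 * V["groupFertilizer_A", "groupFertilizer_B"])
c(estimate = est, t = est / se)
# or equivalently: car::linearHypothesis(plants_lm, "groupFertilizer_A = groupFertilizer_B")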
Your variable group is a factor with three levels, control, Fertilizer_A, Fertilizer_B, and control is used as reference )or baseline) level, so its implied coefficient is zero. See What to do in a multinomial logistic regression when all levels of DV are of interest?.
The coefficients for the two non-reference levels, Fertilizer_A and Fertilizer_B represent in fact contrast comparing those levels to the control. You say:
The only significant result is the control.
and I do not understand what you come to that conclusion. The last line in your output
F-statistic: 4.846 on 2 and 27 DF, p-value: 0.01591
represent a comparison between your model and the intercept-only model, which is the null hypothesis model representing the null that the fertilizer treatment has no effect. That hypothesis test has the p-value
0.01591, which is often considered significant, for instance, at the conventional 5% level you can reject the null that the fertilizer treatment has no effect.
Edit
The two individual p-values in the output table each test two separate contrasts, comparing A with control and comparing B with control. That neither of those are significant at the conventional 5% level, while the overall test is significant, only says that we have some information to say that fertilizer use is different from control, but not sufficient information to say which fertilizer is different from control. This might be seen as a paradox, but is not: An equivalent example is that police might be able to prove that either A or B murdered the victim, but not enough to say which of them.
In this case, from the point estimates, A seems worse than control while B seems better. Detailed interpretation will depend on knowledge of the fertilizers used etc, but might also indicate need of further replicate experiments to give a better conclusion. Note that the conclusion from the overall test (which is two-sided) only is that fertilizer id different from control, not better than.
As for comparing the two treatments, that is also a contrast, and all contrasts can be tested. In the following I will abbreviate the levels with A, B, C. The two printed contrasts are A-C and B-C; note that we can write A-B as A-C - (B-C), so from the output you can find the point estimate as A-B = -0.3710 - 0.4940, but to calculate the t-test you also need the standard error, which requires further information. You can calculate it yourself using the covariance matrix of the coefficient vector, which you get with the call vcov(plants_lm), or you can use a function like car::linearHypothesis to do it. For another way see the code at Categorical variable coding to compare all levels to all levels | Best way to compare two treatment groups to a control
Your variable group is a factor with three levels, control, Fertilizer_A, Fertilizer_B, and control is used as reference )or baseline) level, so its implied coefficient is zero. See What to do in a mu |
55,560 | Best way to compare two treatment groups to a control | kjetil's answer explains well the interpretation of the two coefficients from your model as contrasts between each fertilizer and control.
You can use the package contrast to explicitly perform the final contrast, between the two fertilizers. First, simulating data and fitting a model as you did:
library("contrast")
library("ggplot2")
set.seed(42)
## simulate a plants df with minor differences between each group
plants <- data.frame(
weight = c(rnorm(50), rnorm(50, mean = 0.2), rnorm(50, mean = 0.3)),
group = factor(rep(c("control", "treat1", "treat2"), each = 50))
)
fit <- lm(
weight ~ group,
data = plants
)
summary(fit)
#>
#> Call:
#> lm(formula = weight ~ group, data = plants)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -3.09379 -0.57597 0.00902 0.57207 2.85314
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) -0.03567 0.14241 -0.250 0.803
#> grouptreat1 0.33637 0.20139 1.670 0.097 .
#> grouptreat2 0.18442 0.20139 0.916 0.361
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 1.007 on 147 degrees of freedom
#> Multiple R-squared: 0.01868, Adjusted R-squared: 0.00533
#> F-statistic: 1.399 on 2 and 147 DF, p-value: 0.2501
Now, to perform contrasts. Note, the first two are redundant because they correspond to the latter two rows of the summary table above, but they're shown for clarity.
contrast(fit,
a = list(group = "treat1"),
b = list(group = "control")
)
#> lm model parameter contrast
#>
#> Contrast S.E. Lower Upper t df Pr(>|t|)
#> 1 0.3363732 0.2013913 -0.06162303 0.7343694 1.67 147 0.097
contrast(fit,
a = list(group = "treat2"),
b = list(group = "control")
)
#> lm model parameter contrast
#>
#> Contrast S.E. Lower Upper t df Pr(>|t|)
#> 1 0.1844207 0.2013913 -0.2135755 0.5824169 0.92 147 0.3613
contrast(fit,
a = list(group = "treat2"),
b = list(group = "treat1")
)
#> lm model parameter contrast
#>
#> Contrast S.E. Lower Upper t df Pr(>|t|)
#> 1 -0.1519525 0.2013913 -0.5499487 0.2460437 -0.75 147 0.4517
ggplot(plants) +
aes(group, weight) +
geom_boxplot()
Disclaimer: I'm the maintainer, though not the developer, of contrast. | Best way to compare two treatment groups to a control | kjetil's answer explains well the interpretation of the two coefficients from your model as contrasts between each fertilizer and control.
You can use the package contrast to explicitly perform the fi | Best way to compare two treatment groups to a control
kjetil's answer explains well the interpretation of the two coefficients from your model as contrasts between each fertilizer and control.
You can use the package contrast to explicitly perform the final contrast, between the two fertilizers. First, simulating data and fitting a model as you did:
library("contrast")
library("ggplot2")
set.seed(42)
## simulate a plants df with minor differences between each group
plants <- data.frame(
weight = c(rnorm(50), rnorm(50, mean = 0.2), rnorm(50, mean = 0.3)),
group = factor(rep(c("control", "treat1", "treat2"), each = 50))
)
fit <- lm(
weight ~ group,
data = plants
)
summary(fit)
#>
#> Call:
#> lm(formula = weight ~ group, data = plants)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -3.09379 -0.57597 0.00902 0.57207 2.85314
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) -0.03567 0.14241 -0.250 0.803
#> grouptreat1 0.33637 0.20139 1.670 0.097 .
#> grouptreat2 0.18442 0.20139 0.916 0.361
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 1.007 on 147 degrees of freedom
#> Multiple R-squared: 0.01868, Adjusted R-squared: 0.00533
#> F-statistic: 1.399 on 2 and 147 DF, p-value: 0.2501
Now, to perform contrasts. Note, the first two are redundant because they correspond to the latter two rows of the summary table above, but they're shown for clarity.
contrast(fit,
a = list(group = "treat1"),
b = list(group = "control")
)
#> lm model parameter contrast
#>
#> Contrast S.E. Lower Upper t df Pr(>|t|)
#> 1 0.3363732 0.2013913 -0.06162303 0.7343694 1.67 147 0.097
contrast(fit,
a = list(group = "treat2"),
b = list(group = "control")
)
#> lm model parameter contrast
#>
#> Contrast S.E. Lower Upper t df Pr(>|t|)
#> 1 0.1844207 0.2013913 -0.2135755 0.5824169 0.92 147 0.3613
contrast(fit,
a = list(group = "treat2"),
b = list(group = "treat1")
)
#> lm model parameter contrast
#>
#> Contrast S.E. Lower Upper t df Pr(>|t|)
#> 1 -0.1519525 0.2013913 -0.5499487 0.2460437 -0.75 147 0.4517
ggplot(plants) +
aes(group, weight) +
geom_boxplot()
Disclaimer: I'm the maintainer, though not the developer, of contrast. | Best way to compare two treatment groups to a control
kjetil's answer explains well the interpretation of the two coefficients from your model as contrasts between each fertilizer and control.
You can use the package contrast to explicitly perform the fi |
55,561 | Best way to compare two treatment groups to a control | This question comes up a lot and there is a method that answers all the questions you have. Look at Ways of comparing linear regression interepts and slopes?. It explains how to compare slopes and intercepts of as many groups as you want and will tell you the difference between groups. | Best way to compare two treatment groups to a control | This question comes up a lot and there is a method that answers all the questions you have. Look at Ways of comparing linear regression interepts and slopes?. It explains how to compare slopes and int | Best way to compare two treatment groups to a control
This question comes up a lot and there is a method that answers all the questions you have. Look at Ways of comparing linear regression interepts and slopes?. It explains how to compare slopes and intercepts of as many groups as you want and will tell you the difference between groups. | Best way to compare two treatment groups to a control
This question comes up a lot and there is a method that answers all the questions you have. Look at Ways of comparing linear regression interepts and slopes?. It explains how to compare slopes and int |
55,562 | When should we use lag variable in a regression? | When a lagged explanatory variable is used in a model, this represents a situation where the analyst thinks that the explanatory variable might have a statistical relationship with the response, but they believe that there may be a "lag" in the relationship. This could occur when the explanatory variable has a causal effect on the response variable, but the causal effect occurs gradually, and manifests in changes to the response later in time.
When a lagged response variable is used in a model, this represents a kind of proxy for auto-correlation in the response variable, and the remaining explanatory variables are then included to see if there is any remaining statistical relationship between these variables and the response, after the effects of auto-correlation are removed.
Both of these situations can occur in a wide variety of econometric settings, since variables in those settings are commonly auto-correlated, and they also often have causal effects on each other that manifest gradually over time. In terms of when to include these kinds of terms in models, that is a complicated judgment relating to underlying theoretical considerations and diagnostic analysis of the data. Putting aside theoretical issues, you can look for auto-correlation in regression residuals and you can also look for lagged correlation between explanatory variables and residuals, so this allows you to see if an existing fitted model might benefit from the addition of a lagged model term. | When should we use lag variable in a regression? | When a lagged explanatory variable is used in a model, this represents a situation where the analyst thinks that the explanatory variable might have a statistical relationship with the response, but t | When should we use lag variable in a regression?
When a lagged explanatory variable is used in a model, this represents a situation where the analyst thinks that the explanatory variable might have a statistical relationship with the response, but they believe that there may be a "lag" in the relationship. This could occur when the explanatory variable has a causal effect on the response variable, but the causal effect occurs gradually, and manifests in changes to the response later in time.
When a lagged response variable is used in a model, this represents a kind of proxy for auto-correlation in the response variable, and the remaining explanatory variables are then included to see if there is any remaining statistical relationship between these variables and the response, after the effects of auto-correlation are removed.
Both of these situations can occur in a wide variety of econometric settings, since variables in those settings are commonly auto-correlated, and they also often have causal effects on each other that manifest gradually over time. In terms of when to include these kinds of terms in models, that is a complicated judgment relating to underlying theoretical considerations and diagnostic analysis of the data. Putting aside theoretical issues, you can look for auto-correlation in regression residuals and you can also look for lagged correlation between explanatory variables and residuals, so this allows you to see if an existing fitted model might benefit from the addition of a lagged model term. | When should we use lag variable in a regression?
When a lagged explanatory variable is used in a model, this represents a situation where the analyst thinks that the explanatory variable might have a statistical relationship with the response, but t |
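A small R sketch of the diagnostics mentioned in the answer above (residual autocorrelation and lagged cross-correlation); the data are simulated and the variable names y and x are placeholders, not from the original question.
set.seed(1)
n <- 201
x <- as.numeric(arima.sim(list(ar = 0.6), n))                                  # autocorrelated predictor
y <- 2 + 0.5 * c(NA, head(x, -1)) + as.numeric(arima.sim(list(ar = 0.4), n))   # y driven by lagged x
d <- na.omit(data.frame(y, x))
fit <- lm(y ~ x, data = d)                               # model without any lag terms
acf(resid(fit))                                          # autocorrelation left in the residuals
ccf(d$x, resid(fit))                                     # lagged correlation between x and residuals
# If these show structure, refit with explicit lag terms:
d2 <- transform(d[-1, ], x_lag1 = head(d$x, -1), y_lag1 = head(d$y, -1))
fit_lag <- lm(y ~ x + x_lag1 + y_lag1, data = d2)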
55,563 | Preparing data for modelling | Would it be correct to use the following data format for such modelling?
Yes, the model:
offer ~ year + population + (1 | county)
adjusts for the repeated measures within each county
Or is this a problem that every county has multiple rows in data?
No, that's precisely why we use a mixed model in such cases.
Preparing data in different ways gives me different number of rows, possibly leading to narrower confidence interval
I don't know what you mean by that. This is the format needed to fit a mixed model - one row per unit of measurement (county in this case)
What other biases such data manipulations may cause?
What other biases ? You haven't mentioned anything to do with bias and neither have I, because there is no bias due to the data format. As mentioned above you just need to prepare the data to have 1 row per unit of measurement. If you had repeated measures within subjects then you would need to have one row per subject.
What is a good practice here?
The only practice is to have one row per unit of measurement. At least, this is the case with every mixed model software that I have used. | Preparing data for modelling | Would it be correct to use the following data format for such modelling?
Yes, the model:
offer ~ year + population + (1 | county)
adjusts for the repeated measures within each county
Or is this a p | Preparing data for modelling
Would it be correct to use the following data format for such modelling?
Yes, the model:
offer ~ year + population + (1 | county)
adjusts for the repeated measures within each county
Or is this a problem that every county has multiple rows in data?
No, that's precisely why we use a mixed model in such cases.
Preparing data in different ways gives me different number of rows, possibly leading to narrower confidence interval
I don't know what you mean by that. This is the format needed to fit a mixed model - one row per unit of measurement (county in this case)
What other biases such data manipulations may cause?
What other biases ? You haven't mentioned anything to do with bias and neither have I, because there is no bias due to the data format. As mentioned above you just need to prepare the data to have 1 row per unit of measurement. If you had repeated measures within subjects then you would need to have one row per subject.
What is a good practice here?
The only practice is to have one row per unit of measurement. At least, this is the case with every mixed model software that I have used. | Preparing data for modelling
Would it be correct to use the following data format for such modelling?
Yes, the model:
offer ~ year + population + (1 | county)
adjusts for the repeated measures within each county
Or is this a p |
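A minimal R sketch of the model format described above, using lme4 on a made-up county panel (all numbers and the simulation are my own; only the column names offer, year, population, county follow the question).
library(lme4)
set.seed(2)
counties <- paste0("county_", 1:20)
d <- expand.grid(county = counties, year = 2010:2019)      # one row per county-year
d$population <- rep(round(runif(20, 1e4, 1e6)), times = 10)
d$offer <- 5 + 0.3 * (d$year - 2010) + 2e-6 * d$population +
  rep(rnorm(20, sd = 2), times = 10) + rnorm(nrow(d))      # county-level random intercepts + noise
m <- lmer(offer ~ year + population + (1 | county), data = d)
summary(m)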
55,564 | Conditional variance of the absolute sum of zero-mean i.i.d. random variables | $\newcommand{\Var}{\operatorname{Var}}$
In response to the edited version.
I think there's a glitch in that the "$=$" sign ought to be a "$\geq$" sign. With that change, everything seems to be (at least locally) OK.
A general fact is that $\Var(|X|)\leq \Var(X)$ for any random variable $X$. (Since $\Var(X)-\Var(|X|)=E(|X|)^2 - E(X)^2$,
and $|E(X)|\leq E(|X|)$.)
So (applying the above conditional on $D_n$) we get that $\Var\left(|\sum_{i=1}^n U_i| \big| D_n\right)
\leq
\Var\left(\sum_{i=1}^n U_i \big| D_n\right)$.
Now we use the asserted fact (first line of the excerpt you quoted) that, conditional on $D_n$, the $U_i$ are i.i.d. (N.B. it's important that the $U_i$ are i.i.d. conditional on $D_n$; this is a different statement from saying that that $U_i$ are unconditionally i.i.d.)
That gives us that
$\Var\left(\sum_{i=1}^n U_i \big| D_n\right)
=
\sum_{i=1}^n\Var\left( U_i \big| D_n\right)
=
n\Var\left(U_1\big|D_n\right)$.
Finally the next line is using the fact that $\Var(U_1\big| D_n)\leq 1/4$ (because conditional on $D_n$, $U_1$ is a shift of a Bernoulli random variable, and any Bernoulli random variable has variance at most $1/4$). | Conditional variance of the absolute sum of zero-mean i.i.d. random variables | $\newcommand{\Var}{\operatorname{Var}}$
In response to the edited version.
I think there's a glitch in that the "$=$" sign ought to be a "$\geq$" sign. With that change, everything seems to be (at lea | Conditional variance of the absolute sum of zero-mean i.i.d. random variables
$\newcommand{\Var}{\operatorname{Var}}$
In response to the edited version.
I think there's a glitch in that the "$=$" sign ought to be a "$\geq$" sign. With that change, everything seems to be (at least locally) OK.
A general fact is that $\Var(|X|)\leq \Var(X)$ for any random variable $X$. (Since $\Var(X)-\Var(|X|)=E(|X|)^2 - E(X)^2$,
and $|E(X)|\leq E(|X|)$.)
So (applying the above conditional on $D_n$) we get that $\Var\left(|\sum_{i=1}^n U_i| \big| D_n\right)
\leq
\Var\left(\sum_{i=1}^n U_i \big| D_n\right)$.
Now we use the asserted fact (first line of the excerpt you quoted) that, conditional on $D_n$, the $U_i$ are i.i.d. (N.B. it's important that the $U_i$ are i.i.d. conditional on $D_n$; this is a different statement from saying that that $U_i$ are unconditionally i.i.d.)
That gives us that
$\Var\left(\sum_{i=1}^n U_i \big| D_n\right)
=
\sum_{i=1}^n\Var\left( U_i \big| D_n\right)
=
n\Var\left(U_1\big|D_n\right)$.
Finally the next line is using the fact that $\Var(U_1\big| D_n)\leq 1/4$ (because conditional on $D_n$, $U_1$ is a shift of a Bernoulli random variable, and any Bernoulli random variable has variance at most $1/4$). | Conditional variance of the absolute sum of zero-mean i.i.d. random variables
$\newcommand{\Var}{\operatorname{Var}}$
In response to the edited version.
I think there's a glitch in that the "$=$" sign ought to be a "$\geq$" sign. With that change, everything seems to be (at lea |
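A quick R check of the general fact Var(|X|) <= Var(X) used in the answer above, for two arbitrary distributions of my choosing (pure illustration).
set.seed(3)
x1 <- rnorm(1e5, mean = 1, sd = 2)
x2 <- rt(1e5, df = 5) - 0.5
c(var_x1 = var(x1), var_abs_x1 = var(abs(x1)))   # var(abs(x1)) is no larger than var(x1)
c(var_x2 = var(x2), var_abs_x2 = var(abs(x2)))   # same here
# consistent with Var(X) - Var(|X|) = E(|X|)^2 - E(X)^2 >= 0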
55,565 | Conditional variance of the absolute sum of zero-mean i.i.d. random variables | Your equation doesn't hold in general. As a simple counter-example, consider random variables that are conditionally IID with distribution $\mathbb{P}(U_i=-1|D_n) = \mathbb{P}(U_i=1|D_n)=\tfrac{1}{2}$.$^\dagger$ In this simple case you get:
$$\begin{align}
\mathbb{E}(U_1 + U_2 | D_n)
&= (-2) \cdot \frac{1}{4} + 0 \cdot \frac{1}{2} + 2 \cdot \frac{1}{4} = 0, \\[12pt]
\mathbb{V}(U_1 + U_2 | D_n)
&= (-2)^2 \cdot \frac{1}{4} + 0^2 \cdot \frac{1}{2} + 2^2 \cdot \frac{1}{4} = 2, \\[12pt]
\mathbb{E}(|U_1 + U_2| | D_n)
&= |-2| \cdot \frac{1}{4} + |0| \cdot \frac{1}{2} + |2| \cdot \frac{1}{4} = 1, \\[12pt]
\mathbb{V}(|U_1 + U_2| | D_n)
&= (|-2|-1)^2 \cdot \frac{1}{4} + (|0|-1)^2 \cdot \frac{1}{2} + (|2|-1)^2 \cdot \frac{1}{4} = 1, \\[12pt]
\end{align}$$
so you have:
$$\mathbb{V}(|U_1 + U_2| | D_n) = 1 \neq 2 = \mathbb{V}(U_1 + U_2 | D_n).$$
In fact, as a general rule (stated here unconditionally but it also holds conditionally), you get:
$$\begin{align}
\mathbb{V} \Bigg( \bigg| \sum_{i=1}^n U_i \bigg| \Bigg)
&= \mathbb{E} \Bigg( \bigg| \sum_{i=1}^n U_i \bigg| ^2 \Bigg) - \mathbb{E} \Bigg( \bigg| \sum_{i=1}^n U_i \bigg| \Bigg)^2 \\[6pt]
&\leqslant \mathbb{E} \Bigg( \bigg| \sum_{i=1}^n U_i \bigg| ^2 \Bigg) - \mathbb{E} \Bigg( \sum_{i=1}^n U_i \Bigg)^2 \\[6pt]
&= \mathbb{E} \Bigg( \bigg( \sum_{i=1}^n U_i \bigg) ^2 \Bigg) - \mathbb{E} \Bigg( \sum_{i=1}^n U_i \Bigg)^2 \\[6pt]
&= \mathbb{V} \Bigg( \sum_{i=1}^n U_i \Bigg) \\[6pt]
&= \sum_{i=1}^n \mathbb{V}(U_i) + \sum_{i \neq j} \mathbb{C}(U_i,U_j). \\[6pt]
\end{align}$$
In regard to your updated information, the result still does not hold with the additional information. (And indeed, the additional information shows that the situation is just a scaled version of what I have described above, so the result is generally false in that case.) It appears to me that this is probably just an error in the proof, using an invalid step. Substituting the correct inequality in the erroneous step still gives a valid proof, so the proof itself still works once corrected.
$^\dagger$ You haven't given us any information on what $D_n$ is, so it is essentially irrelevant here. Since you haven't told us anything about $D_n$ you can just posit that the values for $U_i$ are marginally IID and that $D_n$ is some irrelevant variable that is independent of these. | Conditional variance of the absolute sum of zero-mean i.i.d. random variables | Your equation doesn't hold in general. As a simple counter-example, consider random variables that are conditionally IID with distribution $\mathbb{P}(U_i=-1|D_n) = \mathbb{P}(U_i=1|D_n)=\tfrac{1}{2} | Conditional variance of the absolute sum of zero-mean i.i.d. random variables
Your equation doesn't hold in general. As a simple counter-example, consider random variables that are conditionally IID with distribution $\mathbb{P}(U_i=-1|D_n) = \mathbb{P}(U_i=1|D_n)=\tfrac{1}{2}$.$^\dagger$ In this simple case you get:
$$\begin{align}
\mathbb{E}(U_1 + U_2 | D_n)
&= (-2) \cdot \frac{1}{4} + 0 \cdot \frac{1}{2} + 2 \cdot \frac{1}{4} = 0, \\[12pt]
\mathbb{V}(U_1 + U_2 | D_n)
&= (-2)^2 \cdot \frac{1}{4} + 0^2 \cdot \frac{1}{2} + 2^2 \cdot \frac{1}{4} = 2, \\[12pt]
\mathbb{E}(|U_1 + U_2| | D_n)
&= |-2| \cdot \frac{1}{4} + |0| \cdot \frac{1}{2} + |2| \cdot \frac{1}{4} = 1, \\[12pt]
\mathbb{V}(|U_1 + U_2| | D_n)
&= (|-2|-1)^2 \cdot \frac{1}{4} + (|0|-1)^2 \cdot \frac{1}{2} + (|2|-1)^2 \cdot \frac{1}{4} = 1, \\[12pt]
\end{align}$$
so you have:
$$\mathbb{V}(|U_1 + U_2| | D_n) = 1 \neq 2 = \mathbb{V}(U_1 + U_2 | D_n).$$
In fact, as a general rule (stated here unconditionally but it also holds conditionally), you get:
$$\begin{align}
\mathbb{V} \Bigg( \bigg| \sum_{i=1}^n U_i \bigg| \Bigg)
&= \mathbb{E} \Bigg( \bigg| \sum_{i=1}^n U_i \bigg| ^2 \Bigg) - \mathbb{E} \Bigg( \bigg| \sum_{i=1}^n U_i \bigg| \Bigg)^2 \\[6pt]
&\leqslant \mathbb{E} \Bigg( \bigg| \sum_{i=1}^n U_i \bigg| ^2 \Bigg) - \mathbb{E} \Bigg( \sum_{i=1}^n U_i \Bigg)^2 \\[6pt]
&= \mathbb{E} \Bigg( \bigg( \sum_{i=1}^n U_i \bigg) ^2 \Bigg) - \mathbb{E} \Bigg( \sum_{i=1}^n U_i \Bigg)^2 \\[6pt]
&= \mathbb{V} \Bigg( \sum_{i=1}^n U_i \Bigg) \\[6pt]
&= \sum_{i=1}^n \mathbb{V}(U_i) + \sum_{i \neq j} \mathbb{C}(U_i,U_j). \\[6pt]
\end{align}$$
In regard to your updated information, the result still does not hold with the additional information. (And indeed, the additional information shows that the situation is just a scaled version of what I have described above, so the result is generally false in that case.) It appears to me that this is probably just an error in the proof, using an invalid step. Substituting the correct inequality in the erroneous step still gives a valid proof, so the proof itself still works once corrected.
$^\dagger$ You haven't given us any information on what $D_n$ is, so it is essentially irrelevant here. Since you haven't told us anything about $D_n$ you can just posit that the values for $U_i$ are marginally IID and that $D_n$ is some irrelevant variable that is independent of these. | Conditional variance of the absolute sum of zero-mean i.i.d. random variables
Your equation doesn't hold in general. As a simple counter-example, consider random variables that are conditionally IID with distribution $\mathbb{P}(U_i=-1|D_n) = \mathbb{P}(U_i=1|D_n)=\tfrac{1}{2} |
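The two-variable counterexample above can also be verified by exact enumeration; a short R sketch (nothing here beyond the distribution stated in the answer).
u <- c(-1, 1)
grid <- expand.grid(u1 = u, u2 = u)                  # four equally likely outcomes
s <- grid$u1 + grid$u2
p <- rep(1/4, 4)
var_s     <- sum(p * s^2)      - sum(p * s)^2        # Var(U1 + U2)   = 2
var_abs_s <- sum(p * abs(s)^2) - sum(p * abs(s))^2   # Var(|U1 + U2|) = 1
c(var_s = var_s, var_abs_s = var_abs_s)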
55,566 | Conditional variance of the absolute sum of zero-mean i.i.d. random variables | This is a new answer looking at the later question supplemented by your additional information. This new answer should be seen as augmenting my previous general observations in the other answer.
With the new information added to the question, you now have a specific distribution for the values under consideration. Suppose we define the parameter $\theta = \mathbb{P}(\tilde{f}(X_i') \neq Y_i'|D_n)$ and note that this quantity is fixed for all $i$.$^\dagger$ From the information specified in the excerpt, we have the conditional distribution:
$$U_1,...,U_n | D_n \sim \text{IID Bern}(\theta) - \theta,$$
which gives $\sum_{i=1}^n U_i | D_n \sim \text{Bin}(n, \theta) - n \theta$.
Consequently, we get:
$$\begin{align}
\mathbb{E} \Bigg( \bigg| \sum_{i=1}^n U_i \bigg| \Bigg| D_n \Bigg)
&= \sum_{k=0}^n |k-n\theta| \cdot \text{Bin}(k|n, \theta) \\[6pt]
&= \sum_{k=0}^{\lfloor n \theta \rfloor} (n\theta-k) \cdot \text{Bin}(k|n, \theta) + \sum_{k=\lfloor n \theta \rfloor+1}^n (k-n\theta) \cdot \text{Bin}(k|n, \theta), \\[6pt]
\end{align}$$
and:
$$\begin{align}
\mathbb{V} \Bigg( \bigg| \sum_{i=1}^n U_i \bigg| \Bigg| D_n \Bigg)
&= \mathbb{E} \Bigg( \bigg( \sum_{i=1}^n U_i \bigg)^2 \Bigg| D_n \Bigg) - \mathbb{E} \Bigg( \bigg| \sum_{i=1}^n U_i \bigg| \Bigg| D_n \Bigg) ^2\\[6pt]
&= n \theta (1-\theta + n\theta) \\[6pt]
&\quad + \Bigg( \sum_{k=0}^{\lfloor n \theta \rfloor} (n\theta-k) \cdot \text{Bin}(k|n, \theta) + \sum_{k=\lfloor n \theta \rfloor+1}^n (k-n\theta) \cdot \text{Bin}(k|n, \theta) \Bigg)^2. \\[6pt]
\end{align}$$
The latter quantity is not generally equivalent to:
$$\mathbb{V} \Bigg( \sum_{i=1}^n U_i \Bigg| D_n \Bigg)
= n \mathbb{V}(U_i|D_n)
= n \theta (1-\theta),$$
so this step in the paper appears to be wrong to me. Most likely, the author meant to assert the inequality $\mathbb{V}(| \sum U_i | |D_n) \geqslant n \mathbb{V}(U_i|D_n)$ here rather than asserting equivalence (which is wrong). The overall proof still works if you use the inequality relation instead, so this is just one of those cases where a step in the proof is written incorrectly, but the proof itself still works.
$^\dagger$ The information specifies that the values $U_1,...,U_n | D_n$ are IID, which can only be the case if the quantity $\theta$ is fixed over all $i=1,...,n$. | Conditional variance of the absolute sum of zero-mean i.i.d. random variables | This is a new answer looking at the later question supplemented by your additional information. This new answer should be seen as augmenting my previous general observations in the other answer.
With | Conditional variance of the absolute sum of zero-mean i.i.d. random variables
This is a new answer looking at the later question supplemented by your additional information. This new answer should be seen as augmenting my previous general observations in the other answer.
With the new information added to the question, you now have a specific distribution for the values under consideration. Suppose we define the parameter $\theta = \mathbb{P}(\tilde{f}(X_i') \neq Y_i'|D_n)$ and note that this quantity is fixed for all $i$.$^\dagger$ From the information specified in the excerpt, we have the conditional distribution:
$$U_1,...,U_n | D_n \sim \text{IID Bern}(\theta) - \theta,$$
which gives $\sum_{i=1}^n U_i | D_n \sim \text{Bin}(n, \theta) - n \theta$.
Consequently, we get:
$$\begin{align}
\mathbb{E} \Bigg( \bigg| \sum_{i=1}^n U_i \bigg| \Bigg| D_n \Bigg)
&= \sum_{k=0}^n |k-n\theta| \cdot \text{Bin}(k|n, \theta) \\[6pt]
&= \sum_{k=0}^{\lfloor n \theta \rfloor} (n\theta-k) \cdot \text{Bin}(k|n, \theta) + \sum_{k=\lfloor n \theta \rfloor+1}^n (k-n\theta) \cdot \text{Bin}(k|n, \theta), \\[6pt]
\end{align}$$
and:
$$\begin{align}
\mathbb{V} \Bigg( \bigg| \sum_{i=1}^n U_i \bigg| \Bigg| D_n \Bigg)
&= \mathbb{E} \Bigg( \bigg( \sum_{i=1}^n U_i \bigg)^2 \Bigg| D_n \Bigg) - \mathbb{E} \Bigg( \bigg| \sum_{i=1}^n U_i \bigg| \Bigg| D_n \Bigg) ^2\\[6pt]
&= n \theta (1-\theta + n\theta) \\[6pt]
&\quad + \Bigg( \sum_{k=0}^{\lfloor n \theta \rfloor} (n\theta-k) \cdot \text{Bin}(k|n, \theta) + \sum_{k=\lfloor n \theta \rfloor+1}^n (k-n\theta) \cdot \text{Bin}(k|n, \theta) \Bigg)^2. \\[6pt]
\end{align}$$
The latter quantity is not generally equivalent to:
$$\mathbb{V} \Bigg( \sum_{i=1}^n U_i \Bigg| D_n \Bigg)
= n \mathbb{V}(U_i|D_n)
= n \theta (1-\theta),$$
so this step in the paper appears to be wrong to me. Most likely, the author meant to assert the inequality $\mathbb{V}(| \sum U_i | |D_n) \geqslant n \mathbb{V}(U_i|D_n)$ here rather than asserting equivalence (which is wrong). The overall proof still works if you use the inequality relation instead, so this is just one of those cases where a step in the proof is written incorrectly, but the proof itself still works.
$^\dagger$ The information specifies that the values $U_1,...,U_n | D_n$ are IID, which can only be the case if the quantity $\theta$ is fixed over all $i=1,...,n$. | Conditional variance of the absolute sum of zero-mean i.i.d. random variables
This is a new answer looking at the later question supplemented by your additional information. This new answer should be seen as augmenting my previous general observations in the other answer.
With |
55,567 | When does complexity in machine learning algorithms actually become an issue? | I use least-squares support vector machines a fair bit, which are $\mathcal{O}(n^3)$ for the most obvious implementation, and problems with up to ten thousand or so examples are just about practical on my machine, but memory usage is not insignificant either, and an 8192*8192 matrix in double precision format is 0.5 GB of memory.
Gaussian processes are not dissimilar to LS-SVMs (bit more expensive).
However, as well as fitting the model, you will need to perform model selection (tuning the hyper-parameters), and that is where things get expensive, as you have to refit the model many times (not so bad if you can perform the calculations in parallel).
If you have more data than that, you will probably want to look into sparse approximation. | When does complexity in machine learning algorithms actually become an issue? | I use least-squares support vector machines a fair bit, which are $\mathcal{O}(n^3)$ for the most obvious implementation, and problems with up to a ten thousand or so examples are just about practical | When does complexity in machine learning algorithms actually become an issue?
I use least-squares support vector machines a fair bit, which are $\mathcal{O}(n^3)$ for the most obvious implementation, and problems with up to ten thousand or so examples are just about practical on my machine, but memory usage is not insignificant either, and an 8192*8192 matrix in double precision format is 0.5 GB of memory.
Gaussian processes are not dissimilar to LS-SVMs (bit more expensive).
However, as well as fitting the model, you will need to perform model selection (tuning the hyper-parameters), and that is where things get expensive, as you have to refit the model many times (not so bad if you can perform the calculations in parallel).
If you have more data than that, you will probably want to look into sparse approximation. | When does complexity in machine learning algorithms actually become an issue?
I use least-squares support vector machines a fair bit, which are $\mathcal{O}(n^3)$ for the most obvious implementation, and problems with up to a ten thousand or so examples are just about practical |
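A rough R sketch of how the O(n^3) cost shows up in practice: timing a dense solve for a few sizes (the sizes are arbitrary and exact timings depend on the machine and BLAS).
set.seed(4)
for (n in c(500, 1000, 2000, 4000)) {
  A <- matrix(rnorm(n * n), n); diag(A) <- diag(A) + n    # diagonally dominant, well conditioned
  b <- rnorm(n)
  cat(n, ":", system.time(solve(A, b))["elapsed"], "s\n") # roughly 8x per doubling of n
}
8192^2 * 8 / 2^30   # memory for one 8192 x 8192 double matrix, about 0.5 GiB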
55,568 | Are interactions with quadratic terms in MARS possible? | Yes with some modification.
On its own, MARS won't try to take higher-order functions of the predictors. MARS can only include three types of basis functions:
A constant
A hinge function
Interactions of hinge functions
You can "trick" MARS into including a quadratic term by making a new variable xnew = x1*x1 then fitting MARS. MARS will not derive this xnew variable on its own, though. | Are interactions with quadratic terms in MARS possible? | Yes with some modification.
On its own, MARS won't try to take higher-order functions of the predictors. MARS can only include three types of basis functions:
A constant
A hinge function
Interactions | Are interactions with quadratic terms in MARS possible?
Yes with some modification.
On its own, MARS won't try to take higher-order functions of the predictors. MARS can only include three types of basis functions:
A constant
A hinge function
Interactions of hinge functions
You can "trick" MARS into including a quadratic term by making a new variable xnew = x1*x1 then fitting MARS. MARS will not derive this xnew variable on its own, though. | Are interactions with quadratic terms in MARS possible?
Yes with some modification.
On its own, MARS won't try to take higher-order functions of the predictors. MARS can only include three types of basis functions:
A constant
A hinge function
Interactions |
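A hedged R sketch of the xnew trick described above, using the earth package as the MARS implementation (the data, names, and settings are invented for illustration).
library(earth)
set.seed(5)
n  <- 300
x1 <- runif(n, -2, 2)
x2 <- runif(n, -2, 2)
y  <- 1 + x1^2 + 0.5 * x2 + rnorm(n, sd = 0.2)
d  <- data.frame(y, x1, x2)
d$x1sq <- d$x1^2                                   # hand-made quadratic term ("xnew")
fit_hinges <- earth(y ~ x1 + x2,        data = d)  # only hinge functions of x1, x2
fit_quad   <- earth(y ~ x1 + x2 + x1sq, data = d)  # can now pick up x1^2 directly
summary(fit_hinges)
summary(fit_quad)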
55,569 | Is there any reason to use LIME now that shap is available? | I wouldn't say that LIME is a flawed half-solution and that SHAP is a perfect full solution.
If anything, I would say both solutions are inherently flawed but perhaps are the best we have. If you are going to use a locally correct linear approximation of your machine learning model for the purpose of explaining predictions, then I would choose whichever software tool has the least bugs and most features that you like. Perhaps SHAP offers some theoretical properties which LIME doesn't, but it's not clear that these imply correctness of explanation. Perhaps they disallow some kinds of fishy explanations.
Most people are looking for a quick-fix for understanding their models, and LIME and SHAP do that. Sometimes regulators even require it. Does that mean you truly understand your models? I don't think so.
I don't see any reason to use LIME over SHAP unless the idea of locally approximating a function with a linear function and creating augmented examples for the purpose of training appeals to you.
Besides that, I would recommend not using SHAP or LIME if your data is not at least locally linear (I can think of some examples, such as when you're using categorical features with integer encoding).
I think a fair approach to model explanations is one which is very broad and approaches the question from many different angles. Are you looking to see whether there are hidden confounders? unfair biases? There are a lot of sources and usually it is recommended to use a broad range of solutions in order to understand your models and it is not as simple as choosing between lime and shap. Here is an example of IBM explaining their approach https://www.ibm.com/watson/explainable-ai | Is there any reason to use LIME now that shap is available? | I wouldn't say that LIME is a flawed half-solution and that SHAP is a perfect full solution.
If anything, I would say both solutions are inherently flawed but perhaps are the best we have. If you are | Is there any reason to use LIME now that shap is available?
I wouldn't say that LIME is a flawed half-solution and that SHAP is a perfect full solution.
If anything, I would say both solutions are inherently flawed but perhaps are the best we have. If you are going to use a locally correct linear approximation of your machine learning model for the purpose of explaining predictions, then I would choose whichever software tool has the least bugs and most features that you like. Perhaps SHAP offers some theoretical properties which LIME doesn't, but it's not clear that these imply correctness of explanation. Perhaps they disallow some kinds of fishy explanations.
Most people are looking for a quick-fix for understanding their models, and LIME and SHAP do that. Sometimes regulators even require it. Does that mean you truly understand your models? I don't think so.
I don't see any reason to use LIME over SHAP unless the idea of locally approximating a function with a linear function and creating augmented examples for the purpose of training appeals to you.
Besides that, I would recommend not using SHAP or LIME if your data is not at least locally linear (I can think of some examples, such as when you're using categorical features with integer encoding).
I think a fair approach to model explanations is one which is very broad and approaches the question from many different angles. Are you looking to see whether there are hidden confounders? unfair biases? There are a lot of sources and usually it is recommended to use a broad range of solutions in order to understand your models and it is not as simple as choosing between lime and shap. Here is an example of IBM explaining their approach https://www.ibm.com/watson/explainable-ai | Is there any reason to use LIME now that shap is available?
I wouldn't say that LIME is a flawed half-solution and that SHAP is a perfect full solution.
If anything, I would say both solutions are inherently flawed but perhaps are the best we have. If you are |
55,570 | Interpretation of the autocorrelation of a binary process | Your intuition is correct.
There are various definitions of the autocorrelation: see Understanding this acf output for a discussion of some of the pitfalls. But for a sufficiently long sequence all definitions will produce essentially the same value. One of these definitions is that the autocorrelation of the sequence at a small lag $h$ is the (usual) Pearson correlation coefficient when each term of the sequence is paired with the term occurring $h$ time steps later.
There are only four possible values of such pairs. Letting $q$ be the proportion of ones in the sequence,
Let $\alpha$ be the proportion of $(1,1)$ pairs. This is the fraction of the time the device was in state $1$ and, $h$ time steps later, it was also in state $1.$
The proportion of $(1,0)$ pairs therefore must be very close to $q-\alpha.$
Likewise, the proportion of $(0,1)$ pairs must also be very close to $q-\alpha.$
Thus, the proportion of $(0,0)$ pairs must be the amount needed to make the total equal to unity, $1+\alpha-2q.$
These proportions determine the correlation coefficient $\rho$: just apply the formula. Thinking of the pairs as a bivariate random variable $(X,Y),$ you will obtain
$$\rho = \frac{E[XY]- E[X]E[Y]}{\sqrt{\left(E[X^2]-E[X]^2\right)\left(E[Y^2]-E[Y]^2\right)}} = \frac{\alpha - q^2}{q(1-q)}. $$
The more directly interpretable statistic is $\alpha/q,$ the chance the device will output a $1$ given that it output a $1$ $h$ steps earlier. We can express this as
$$\Pr(1\mid 1) = \frac{\alpha}{q} = q + \rho(1-q).$$
If there were no correlation ($\rho=0$), the conditional probability (of $1$ following a $1$ with lag $h$) would be the same as the unconditional probability of $1:$ that's independence. Otherwise,
The correlation coefficient $\rho$ expresses how much the conditional probability $\Pr(1\mid 1)$ differs from the proportion of ones ($q$) as a multiple of the proportion of zeros, $1-q.$
Equivalently, we may formulate an interpretation in terms of the conditional chance of making a transition from $1$ to $0,$ since
$$\Pr(0 \mid 1) = 1 - \Pr(1\mid 1) = 1 - q - \rho(1-q) = (1-q)(1-\rho).$$
In this fashion $\rho$ (along with the proportion of ones) is directly related to the frequency of such transitions. In words: the proportion of zeros following a one is $1-\rho$ times the proportion of all zeros (to an excellent approximation).
There's nothing special about the roles of $0$ and $1$ in this interpretation: since interchanging $0$ and $1$ is a linear function $x \to 1-x,$ it doesn't change the correlation. It merely changes $q$ to $1-q.$ Thus, in the foregoing interpretation you may switch all occurrences of "$0$" and "$1$" provided you replace $q$ by $1-q$ everywhere and replace $\alpha$ by $1+\alpha-2q.$ Thus
$$\Pr(0\mid 0) = \frac{1+\alpha-2q}{1-q} = 1-q + \rho q$$
and
$$\Pr(1 \mid 0) = q(1-\rho).$$
Although I have sometimes used the language and symbolism of probabilities, these statements are, strictly speaking, only about proportions (and they are approximations that are most accurate for long sequences and short lags, because they ignore effects at the ends of the sequences). Thus, nothing is assumed or implied about the nature of the process that generated the sequence. In particular, although we have related transition rates at any lag to the autocorrelation coefficient at that lag, this does not imply that the sequence was generated by any kind of Markov process (of any order), or even that it is stationary. | Interpretation of the autocorrelation of a binary process | Your intuition is correct.
There are various definitions of the autocorrelation: see Understanding this acf output for a discussion of some of the pitfalls. But for a sufficiently long sequence all d | Interpretation of the autocorrelation of a binary process
Your intuition is correct.
There are various definitions of the autocorrelation: see Understanding this acf output for a discussion of some of the pitfalls. But for a sufficiently long sequence all definitions will produce essentially the same value. One of these definitions is that the autocorrelation of the sequence at a small lag $h$ is the (usual) Pearson correlation coefficient when each term of the sequence is paired with the term occurring $h$ time steps later.
There are only four possible values of such pairs. Letting $q$ be the proportion of ones in the sequence,
Let $\alpha$ be the proportion of $(1,1)$ pairs. This is the fraction of the time the device was in state $1$ and, $h$ time steps later, it was also in state $1.$
The proportion of $(1,0)$ pairs therefore must be very close to $q-\alpha.$
Likewise, the proportion of $(0,1)$ pairs must also be very close to $q-\alpha.$
Thus, the proportion of $(0,0)$ pairs must be the amount needed to make the total equal to unity, $1+\alpha-2q.$
These proportions determine the correlation coefficient $\rho$: just apply the formula. Thinking of the pairs as a bivariate random variable $(X,Y),$ you will obtain
$$\rho = \frac{E[XY]- E[X]E[Y]}{\sqrt{\left(E[X^2]-E[X]^2\right)\left(E[Y^2]-E[Y]^2\right)}} = \frac{\alpha - q^2}{q(1-q)}. $$
The more directly interpretable statistic is $\alpha/q,$ the chance the device will output a $1$ given that it output a $1$ $h$ steps earlier. We can express this as
$$\Pr(1\mid 1) = \frac{\alpha}{q} = q + \rho(1-q).$$
If there were no correlation ($\rho=0$), the conditional probability (of $1$ following a $1$ with lag $h$) would be the same as the unconditional probability of $1:$ that's independence. Otherwise,
The correlation coefficient $\rho$ expresses how much the conditional probability $\Pr(1\mid 1)$ differs from the proportion of ones ($q$) as a multiple of the proportion of zeros, $1-q.$
Equivalently, we may formulate an interpretation in terms of the conditional chance of making a transition from $1$ to $0,$ since
$$\Pr(0 \mid 1) = 1 - \Pr(1\mid 1) = 1 - q - \rho(1-q) = (1-q)(1-\rho).$$
In this fashion $\rho$ (along with the proportion of ones) is directly related to the frequency of such transitions. In words: the proportion of zeros following a one is $1-\rho$ times the proportion of all zeros (to an excellent approximation).
There's nothing special about the roles of $0$ and $1$ in this interpretation: since interchanging $0$ and $1$ is a linear function $x \to 1-x,$ it doesn't change the correlation. It merely changes $q$ to $1-q.$ Thus, in the foregoing interpretation you may switch all occurrences of "$0$" and "$1$" provided you replace $q$ by $1-q$ everywhere and replace $\alpha$ by $1+\alpha-2q.$ Thus
$$\Pr(0\mid 0) = \frac{1+\alpha-2q}{1-q} = 1-q + \rho q$$
and
$$\Pr(1 \mid 0) = q(1-\rho).$$
Although I have sometimes used the language and symbolism of probabilities, these statements are, strictly speaking, only about proportions (and they are approximations that are most accurate for long sequences and short lags, because they ignore effects at the ends of the sequences). Thus, nothing is assumed or implied about the nature of the process that generated the sequence. In particular, although we have related transition rates at any lag to the autocorrelation coefficient at that lag, this does not imply that the sequence was generated by any kind of Markov process (of any order), or even that it is stationary. | Interpretation of the autocorrelation of a binary process
Your intuition is correct.
There are various definitions of the autocorrelation: see Understanding this acf output for a discussion of some of the pitfalls. But for a sufficiently long sequence all d |
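A small R check of the relations above on a simulated two-state chain (the transition probabilities 0.8 and 0.3 are arbitrary choices).
set.seed(6)
n <- 1e5
x <- numeric(n); x[1] <- 1
for (t in 2:n) x[t] <- rbinom(1, 1, if (x[t - 1] == 1) 0.8 else 0.3)
q     <- mean(x)                                # proportion of ones
alpha <- mean(x[-n] == 1 & x[-1] == 1)          # proportion of (1,1) pairs at lag 1
rho   <- (alpha - q^2) / (q * (1 - q))          # formula above
c(rho_formula = rho,
  rho_acf     = acf(x, plot = FALSE)$acf[2],    # sample lag-1 autocorrelation
  p11_direct  = mean(x[-1][x[-n] == 1]),        # empirical Pr(1 | 1)
  p11_formula = q + rho * (1 - q))              # agrees up to end effects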
55,571 | How to implement a mixed-model with a beta distribution? | Please note that there is no requirement, condition, or assumption regarding the distribution of the variables in any regression model. When the data are strictly positive and bounded then the beta distribution is often a very good choice.
GLMMadaptive and glmmTMB both allow for the beta distribution. Since you seem to be familiar with glmer then glmmTMB would be the easiest choice for you since all you have to do is specify family = beta_family()
As for the residuals, since it's a beta model there is not expectation that the residuals would be normally distributed. The DHARMa package has some good functionality for assessing the residuals from a beta model. | How to implement a mixed-model with a beta distribution? | Please note that there is no requirement, condition, or assumption regarding the distribution of the variables in any regression model. When the data are strictly positive and bounded then the beta di | How to implement a mixed-model with a beta distribution?
Please note that there is no requirement, condition, or assumption regarding the distribution of the variables in any regression model. When the data are strictly positive and bounded then the beta distribution is often a very good choice.
GLMMadaptive and glmmTMB both allow for the beta distribution. Since you seem to be familiar with glmer then glmmTMB would be the easiest choice for you since all you have to do is specify family = beta_family()
As for the residuals, since it's a beta model there is not expectation that the residuals would be normally distributed. The DHARMa package has some good functionality for assessing the residuals from a beta model. | How to implement a mixed-model with a beta distribution?
Please note that there is no requirement, condition, or assumption regarding the distribution of the variables in any regression model. When the data are strictly positive and bounded then the beta di |
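A minimal R sketch of a beta mixed model with glmmTMB on simulated data; all names and numbers here are placeholders rather than anything from the original question.
library(glmmTMB)
set.seed(7)
n_grp <- 30; n_per <- 10
g   <- factor(rep(seq_len(n_grp), each = n_per))
x   <- rnorm(n_grp * n_per)
mu  <- plogis(0.5 + 0.8 * x + rep(rnorm(n_grp, sd = 0.5), each = n_per))
phi <- 20
y   <- rbeta(length(mu), mu * phi, (1 - mu) * phi)   # strictly inside (0, 1)
d   <- data.frame(y, x, g)
fit <- glmmTMB(y ~ x + (1 | g), family = beta_family(), data = d)
summary(fit)
# DHARMa::simulateResiduals(fit, plot = TRUE)   # simulation-based residual checks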
55,572 | Likelihood function as number of observations increases | Here are three slides from my statistical modelling course illustrating why the average log-likelihood function concentrates with the number $n$ of iid observations:
This second picture represents $L_n(\theta;\mathbf x)^{1/n}$ as $n$ increases. This function stabilises around its (entropy) limiting function
$$\exp\int\log p(x|\theta) \text dF(x)$$ | Likelihood function as number of observations increases | Here are three slides from my statistical modelling course illustrating why the average log-likelihood function concentrates with the number $n$ of iid observations:
This second picture represents | Likelihood function as number of observations increases
Here are three slides from my statistical modelling course illustrating why the average log-likelihood function concentrates with the number $n$ of iid observations:
This second picture represents $L_n(\theta;\mathbf x)^{1/n}$ as $n$ increases. This function stabilises around its (entropy) limiting function
$$\exp\int\log p(x|\theta) \text dF(x)$$ | Likelihood function as number of observations increases
Here are three slides from my statistical modelling course illustrating why the average log-likelihood function concentrates with the number $n$ of iid observations:
This second picture represents |
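Since the slides themselves are not reproduced here, a small R sketch of the same phenomenon: the average log-likelihood for N(theta, 1) data flattens onto its entropy limit as n grows (the normal model and the value theta = 1 are my own choices).
set.seed(8)
theta_true <- 1
x_all <- rnorm(1e4, mean = theta_true, sd = 1)
theta <- seq(-1, 3, length.out = 200)
avg_ll <- function(x) sapply(theta, function(t) mean(dnorm(x, t, 1, log = TRUE)))
plot(theta, avg_ll(x_all[1:10]), type = "l", col = "grey70",
     xlab = expression(theta), ylab = "average log-likelihood")
lines(theta, avg_ll(x_all[1:100]),   col = "grey40")
lines(theta, avg_ll(x_all[1:10000]), col = "black", lwd = 2)
# the darker (larger-n) curves stabilise around the limit, maximised at theta = 1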
55,573 | mutual information and maximal information coefficient | Mutual information is well known, sklearn has a good implementation here and in R package entropy.
Regarding MIC, MICtools and minerva are Python/R good implementations. See references given in MICtools repo description for further papers.
Edit: This is a dummy example how to compute MIC score for $x$ and $y$ conditionally given that $z=1$.
from minepy import MINE
import numpy as np
mine = MINE(alpha=0.6, c=15)
np.random.seed(42)
x = np.random.random(100)
y = np.random.random(100)
z = np.random.binomial(1, 0.5, 100)
condition_z_is_one = np.where(z > 0)[0]
mine.compute_score(x[condition_z_is_one],
y[condition_z_is_one])
mic_score = mine.mic() # 0.21421355042023246 | mutual information and maximal information coefficient | Mutual information is well known, sklearn has a good implementation here and in R package entropy.
Regarding MIC, MICtools and minerva are Python/R good implementations. See references given in MICtoo | mutual information and maximal information coefficient
Mutual information is well known, sklearn has a good implementation here and in R package entropy.
Regarding MIC, MICtools and minerva are Python/R good implementations. See references given in MICtools repo description for further papers.
Edit: This is a dummy example how to compute MIC score for $x$ and $y$ conditionally given that $z=1$.
from minepy import MINE
import numpy as np
mine = MINE(alpha=0.6, c=15)
np.random.seed(42)
x = np.random.random(100)
y = np.random.random(100)
z = np.random.binomial(1, 0.5, 100)
condition_z_is_one = np.where(z > 0)[0]
mine.compute_score(x[condition_z_is_one],
y[condition_z_is_one])
mic_score = mine.mic() # 0.21421355042023246 | mutual information and maximal information coefficient
Mutual information is well known, sklearn has a good implementation here and in R package entropy.
Regarding MIC, MICtools and minerva are Python/R good implementations. See references given in MICtoo |
55,574 | Why am I observing non-uniformly distributed (negatively skewed) p-values for two-sample tests of mixture distributions when the null is true? | You aren't generating two samples of independent observations from a Gaussian mixture, because you are fixing the number taken from each component rather than making it random.
If $X$ is a 50/50 mixture of $N(.5,0.05^2)$ and $N(0.75,0.05^2)$, and you sample $n$ observations, the number of observations from the first component is Binomial$(n, 0.5)$, not $n/2$. When you fix the number of observations from each component at $n/2$, the observations within each sample are not independent, so the two samples are more similar than you'd expect from independent observations and the p-values are larger than you would expect under $U[0,1]$
I modified your code to sample the two components randomly
library(kSamples)  # assumed import: ad.test() below comes from the kSamples package, as in the question's original code
numsims <- 1000
repNums <- 1:numsims
pvalDF <- data.frame(ii = rep(0, length(repNums)),
pvals = rep(0, length(repNums)),
seed = rep(0, length(repNums)))
for (ii in 1:numsims) {
set.seed(123 + ii)
m1<-rbinom(1,50,.5)
sample1.1 <- rnorm(m1, mean = 0.5, sd = 0.05) # Here are the new two samples each
sample1.2 <- rnorm(50-m1, mean = 0.75, sd = 0.05) # generated as a mixture from two
sample1 <- c(sample1.1, sample1.2) # identical Gaussian distributions.
m2<-rbinom(1,50,.5)
sample2.1 <- rnorm(m2, mean = 0.5, sd = 0.05) #
sample2.2 <- rnorm(50-m2, mean = 0.75, sd = 0.05) #
sample2 <- c(sample2.1, sample2.2) #
n1 = length(sample1)
n2 = length(sample2)
samples <- scale(c(sample1, sample2))
sample1 <- samples[1:n1]
sample2 <- samples[n1+1:n2]
ad_out <- ad.test(sample1, sample2)
pvalDF$ii[ii] <- ii
pvalDF$pvals[ii] <- ad_out$ad["version 1:", 3]
pvalDF$seed[ii] <- 321 + ii
}
hist(pvalDF$pvals, xlim = c(0, 1))
and the p-value distribution is boring again | Why am I observing non-uniformly distributed (negatively skewed) p-values for two-sample tests of mi | You aren't generating two samples of independent observations from a Gaussian mixture, because you are fixing the number taken from each component rather than making it random.
If $X$ is a 50/50 mixtu | Why am I observing non-uniformly distributed (negatively skewed) p-values for two-sample tests of mixture distributions when the null is true?
You aren't generating two samples of independent observations from a Gaussian mixture, because you are fixing the number taken from each component rather than making it random.
If $X$ is a 50/50 mixture of $N(.5,0.05^2)$ and $N(0.75,0.05^2)$, and you sample $n$ observations, the number of observations from the first component is Binomial$(n, 0.5)$, not $n/2$. When you fix the number of observations from each component at $n/2$, the observations within each sample are not independent, so the two samples are more similar than you'd expect from independent observations and the p-values are larger than you would expect under $U[0,1]$
I modified your code to sample the two components randomly
library(kSamples)  # assumed import: ad.test() below comes from the kSamples package, as in the question's original code
numsims <- 1000
repNums <- 1:numsims
pvalDF <- data.frame(ii = rep(0, length(repNums)),
pvals = rep(0, length(repNums)),
seed = rep(0, length(repNums)))
for (ii in 1:numsims) {
set.seed(123 + ii)
m1<-rbinom(1,50,.5)
sample1.1 <- rnorm(m1, mean = 0.5, sd = 0.05) # Here are the new two samples each
sample1.2 <- rnorm(50-m1, mean = 0.75, sd = 0.05) # generated as a mixture from two
sample1 <- c(sample1.1, sample1.2) # identical Gaussian distributions.
m2<-rbinom(1,50,.5)
sample2.1 <- rnorm(m2, mean = 0.5, sd = 0.05) #
sample2.2 <- rnorm(50-m2, mean = 0.75, sd = 0.05) #
sample2 <- c(sample2.1, sample2.2) #
n1 = length(sample1)
n2 = length(sample2)
samples <- scale(c(sample1, sample2))
sample1 <- samples[1:n1]
sample2 <- samples[n1+1:n2]
ad_out <- ad.test(sample1, sample2)
pvalDF$ii[ii] <- ii
pvalDF$pvals[ii] <- ad_out$ad["version 1:", 3]
pvalDF$seed[ii] <- 321 + ii
}
hist(pvalDF$pvals, xlim = c(0, 1))
and the p-value distribution is boring again | Why am I observing non-uniformly distributed (negatively skewed) p-values for two-sample tests of mi
You aren't generating two samples of independent observations from a Gaussian mixture, because you are fixing the number taken from each component rather than making it random.
If $X$ is a 50/50 mixtu |
55,575 | Asymptotic distribution of OLS standard errors | Let me start by giving you a hint, and if that is enough, then no need to read through the rest of the solution.
Hint: $\beta=(X'X)^{-1}(X'Y)=n^{-1}\sum_i^nY_i=\bar Y$. So the sample variance is given by,
$$\hat\sigma^2=\frac{1}{n-1}\sum_{i=1}^n(Y_i - X_i \beta)^2=\frac{1}{n-1}\sum_{i=1}^n(Y_i - \bar Y)^2$$
Solution:
Okay, if that hint is not enough, here is a solution. I will omit some algebra, so you might have some verification to do on your own. Note that there are a couple of ways to accomplish this, but here is my preferred solution.
Note that,
$$\hat\sigma^2 = \frac{1}{n-1}\sum_{i=1}^n(Y_i - \bar Y)^2 = \frac{1}{n-1}\left(\sum_i Y_i^2 - 2\bar Y\sum_i Y_i + n\bar Y^2\right) = \frac{n}{n-1}\left(\frac{1}{n}\sum_i Y_i^2 - \bar Y^2\right)=\frac{n}{n-1}\left(\bar{Y^2}-\bar Y^2\right)$$
Note that $\bar{Y^2}:=n^{-1}\sum_i (Y_i^2)$.
Now we will use the delta-method with $h(a,b)=b-a^2$. So that we have,
$$h(\bar Y, \bar{Y^2} ) = \bar{Y^2} - (\bar Y)^2=\frac{n}{n-1}\hat\sigma^2$$
and
$$h(E[Y], E[Y^2] ) = E[Y^2] - E[Y]^2=\sigma^2$$
Then we start with,
$$\sqrt{n}(h(\bar Y, \bar{Y^2}) - h(E[Y], E[Y^2]))\overset{d}{\to} \nabla h(E[Y], E[Y^2])\cdot N(0,V) = N(0, (\nabla h) V (\nabla h)^T)$$
(Note that we should first check the joint normality with the multivariate CLT; this follows from the fact that the $Y_i$ are iid.)
So all that is left is to identify this covariance matrix. Recall,
$$V = \begin{bmatrix}Var[Y]& Cov[Y^2, Y]\\Cov[Y^2, Y]&Var[Y^2]\end{bmatrix}$$
So
$$(\nabla h )V (\nabla h)^T=\begin{bmatrix}-2E[Y] & 1\end{bmatrix} V \begin{bmatrix}-2E[Y] \\ 1\end{bmatrix} \\= \begin{bmatrix}-2E[Y]Var[Y] + Cov[Y^2, Y] & -2E[Y]Cov[Y^2 Y]+Var[Y^2]\end{bmatrix}\begin{bmatrix}-2E[Y] \\ 1\end{bmatrix}
\\= -2E[Y](-2E[Y]Var[Y] + Cov[Y^2, Y])-2E[Y]Cov[Y, Y^2] + Var[Y^2]$$
Before we proceed we need to compute some variances,
$$E[Y^2]=Var[Y]+E[Y]^2$$
$$Cov(Y, Y^2)=E[(Y-E[Y])(Y^2-E[Y^2])]=E[Y^3]-E[Y]E[Y^2]=E[Y^3]-\mu(\sigma^2+\mu^2)$$
$$Var[Y^2]=E[Y^4]-E[Y^2]^2 = E[Y^4]-(\sigma^2+\mu^2)^2$$
So plugging in and simplifying we eventually should get,
$$\nabla h V \nabla h^T= E[(Y-\mu)^4]-\sigma^4$$
So putting this together (using Slutsky's theorem and noticing $\frac{n}{n-1} \overset{p}{\to} 1$) we get,
$$\sqrt{n}(\hat\sigma^2-\sigma^2)\overset{d}{\to} N(0,E[(Y-\mu)^4]-\sigma^4)$$
Or setting $\mu_4=E[(Y-\mu)^4]$ (the centered fourth moment) we get,
$$\sqrt{n}(\hat\sigma^2-\sigma^2)\overset{d}{\to} N(0,\mu_4-\sigma^4)$$
Which is what you have (I think the extra power of 4 is a typo?).
Also, a nice way to motivate this formula is to realize that it is exactly equal to $Var[(Y-\mu)^2]=E[(Y-\mu)^4]-E[(Y-\mu)^2]^2=\mu_4-\sigma^4$. Which is great because if we knew $\mu$ our variance estimator would be $n^{-1}\sum_i (Y_i-\mu)^2$. | Asymptotic distribution of OLS standard errors | Let me start by giving you a hint, and if that is enough, then no need to read through the rest of the solution.
Hint: $\beta=(X'X)^{-1}(X'Y)=n^{-1}\sum_i^nY_i=\bar Y$. So the sample variance is given | Asymptotic distribution of OLS standard errors
Let me start by giving you a hint, and if that is enough, then no need to read through the rest of the solution.
Hint: $\beta=(X'X)^{-1}(X'Y)=n^{-1}\sum_i^nY_i=\bar Y$. So the sample variance is given by,
$$\hat\sigma^2=\frac{1}{n-1}\sum_{i=1}^n(Y_i - X_i \beta)^2=\frac{1}{n-1}\sum_{i=1}^n(Y_i - \bar Y)^2$$
Solution:
Okay, if that hint is not enough, here is a solution. I will omit some algebra, so you might have some verification to do on your own. Note that there are a couple of ways to accomplish this, but here is my preferred solution.
Note that,
$$\hat\sigma^2 = \frac{1}{n-1}\sum_{i=1}^n(Y_i - \bar Y)^2 = \frac{1}{n-1}\left(\sum_i Y_i^2 - 2\bar Y\sum_i Y_i + n\bar Y^2\right) = \frac{n}{n-1}\left(\frac{1}{n}\sum_i Y_i^2 - \bar Y^2\right)=\frac{n}{n-1}\left(\bar{Y^2}-\bar Y^2\right)$$
Note that $\bar{Y^2}:=n^{-1}\sum_i (Y_i^2)$.
Now we will use the delta-method with $h(a,b)=b-a^2$. So that we have,
$$h(\bar Y, \bar{Y^2} ) = \bar{Y^2} - (\bar Y)^2=\frac{n}{n-1}\hat\sigma^2$$
and
$$h(E[Y], E[Y^2] ) = E[Y^2] - E[Y]^2=\sigma^2$$
Then we start with,
$$\sqrt{n}(h(\bar Y, \bar{Y^2}) - h(E[Y], E[Y^2]))\overset{d}{\to} \nabla h(E[Y], E[Y^2])\cdot N(0,V) = N(0, (\nabla h) V (\nabla h)^T)$$
(Note that we should first check the joint normality with the multivariate CLT; this follows from the fact that the $Y_i$ are iid.)
So all that is left is to identify this covariance matrix. Recall,
$$V = \begin{bmatrix}Var[Y]& Cov[Y^2, Y]\\Cov[Y^2, Y]&Var[Y^2]\end{bmatrix}$$
So
$$(\nabla h )V (\nabla h)^T=\begin{bmatrix}-2E[Y] & 1\end{bmatrix} V \begin{bmatrix}-2E[Y] \\ 1\end{bmatrix} \\= \begin{bmatrix}-2E[Y]Var[Y] + Cov[Y^2, Y] & -2E[Y]Cov[Y^2 Y]+Var[Y^2]\end{bmatrix}\begin{bmatrix}-2E[Y] \\ 1\end{bmatrix}
\\= -2E[Y](-2E[Y]Var[Y] + Cov[Y^2, Y])-2E[Y]Cov[Y, Y^2] + Var[Y^2]$$
Before we proceed we need to compute some variances,
$$E[Y^2]=Var[Y]+E[Y]^2$$
$$Cov(Y, Y^2)=E[(Y-E[Y])(Y^2-E[Y^2])]=E[Y^3]-E[Y]E[Y^2]=E[Y^3]-\mu(\sigma^2+\mu^2)$$
$$Var[Y^2]=E[Y^4]-E[Y^2]^2 = E[Y^4]-(\sigma^2+\mu^2)^2$$
So plugging in and simplifying we eventually should get,
$$\nabla h V \nabla h^T= E[(Y-\mu)^4]-\sigma^4$$
So putting this together (using Slutsky's theorem and noticing $\frac{n}{n-1} \overset{p}{\to} 1$) we get,
$$\sqrt{n}(\hat\sigma^2-\sigma^2)\overset{d}{\to} N(0,E[(Y-\mu)^4]-\sigma^4)$$
Or setting $\mu_4=E[(Y-\mu)^4]$ (the centered fourth moment) we get,
$$\sqrt{n}(\hat\sigma^2-\sigma^2)\overset{d}{\to} N(0,\mu_4-\sigma^4)$$
Which is what you have (I think the extra power of 4 is a typo?).
Also, a nice way to motivate this formula is to realize that it is exactly equal to $Var[(Y-\mu)^2]=E[(Y-\mu)^4]-E[(Y-\mu)^2]^2=\mu_4-\sigma^4$. Which is great because if we knew $\mu$ our variance estimator would be $n^{-1}\sum_i (Y_i-\mu)^2$. | Asymptotic distribution of OLS standard errors
Let me start by giving you a hint, and if that is enough, then no need to read through the rest of the solution.
Hint: $\beta=(X'X)^{-1}(X'Y)=n^{-1}\sum_i^nY_i=\bar Y$. So the sample variance is given |
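A Monte Carlo check of the limiting variance derived above, using an exponential(1) population as an example of my own choosing (there sigma^2 = 1 and mu_4 = 9, so mu_4 - sigma^4 = 8).
set.seed(9)
n <- 2000; reps <- 5000
stat <- replicate(reps, { y <- rexp(n); sqrt(n) * (var(y) - 1) })
c(simulated_var = var(stat), theory = 9 - 1)   # both should be close to 8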
55,576 | Analytical expression of the log-likelihood of the Binomial model with unknown $n$ and known $y$ and $p$ and its conjugate prior | I completely concur with Sycorax's comment that Adrian Raftery's 1988 Biometrika paper is the canon on this topic.
How to derive analytically the negative log-likelihood (and its
first-order conditions)?
The likelihood is the same whether or not $n$ is unknown:
$$L(n|y_1,\ldots,y_I)=\prod_{i=1}^I {n \choose y_i}p^{y_i}(1-p)^{n-y_i}
\propto \dfrac{(n!)^I(1-p)^{nI}}{\prod_{i=1}^I(n-y_i)!}$$
and the log-likelihood is the logarithm of the above
$$\ell(n|y_1,\ldots,y_I)=C+I\log n!-\sum_{i=1}^I \log (n-y_i)!+nI\log(1-p) $$
Maximum likelihood estimation of $n$ is covered in this earlier answer of mine and by Ben.
What is an uninformative prior for $n$ in this case (e.g., for $p$ one
can use a Uniform$(0,1)$)?
Note that the default prior on $p$ is Jeffreys' $\pi(p)\propto 1/\sqrt{p(1-p)}$ rather than the Uniform distribution. In their answer in the Bernoulli case, kjetil b halvorsen explains why using a Uniform improper prior on $n$ leads to the posterior decreasing quite slowly (while being proper) and why another improper prior like $\pi(n)=1/n$ or $\pi(n)=1/(n+1)$ has a more appropriate behaviour in the tails. This is connected to the fact that $n$, while being an integer, is a scale parameter in the binomial distribution, in the sense that the random variable $Y\sim\mathcal B(n,p)$ is of order $\mathrm O(n)$. Scale parameters are usually modeled by priors like $\pi(n)=1/n$ (even though I refer you to my earlier answer as to why there is no such thing as a noninformative prior).
Is there a conjugate prior for $n$?
Since the collection of $\mathcal B(n,p)$ distributions is not an exponential family when $n$ varies (its support depends on $n$), there is no conjugate prior family.
What if the prior on $n$ is improper, i.e. discrete prior on
$\{y_\max,\mathbb N\}⊂\mathbb N$? Is there a proper solution?
It depends on the improper prior. The answer by kjetil b halvorsen in the Bernoulli case shows there exist improper priors leading to well-defined posterior distributions. And there also exist improper priors leading to non-defined posterior distributions for all sample sizes $I$. For instance, $\pi(n)\propto\exp\{\exp(n)\}$ should lead to an infinite mass posterior.
55,577 | Why is the formula for the density of a transformed random variable expressed in terms of the derivative of the inverse? | It seems that the heuristic described by @whuber in their answer to the linked problem can be modified slightly to yield the change of variables formula for the density in its more familiar form. Consider a finite sum approximation to the probability elements; the "conservation of mass" requirement stipulates that $$h_X(x_j) \Delta_X(x_j) = h_Y(y_j) \Delta_Y(y_j).$$ Here $h_X(x_j)$ is the height and $\Delta_X(x_j)$ is the width of the interval on which $x_j$ is the center.
Suppose that $h_X(x)$ is known and $y = g(x)$ for a monotone continuous function $g(\cdot)$. The goal is to solve for $h_Y(y)$ in terms of $g(\cdot)$ and $h_X(\cdot)$. To do so, we will fix either $\Delta_X(x_j)$ or $\Delta_Y(y_j)$ to be some constant $\Delta$ for all values of its argument. Then we will solve for $h_Y(y)$ and take a limit as $\Delta \rightarrow 0$. Which of $\Delta_X(x_j)$ or $\Delta_Y(y_j)$ is set to the constant determines which of the two forms of the formula is arrived at.
Setting $\Delta_Y(y_j) = \Delta$ gives the more common form.
$$\begin{aligned}
h_Y(y) \Delta &= h_X(x)\left [g^{-1} \left(y + \dfrac{\Delta}{2} \right) - g^{-1} \left(y - \dfrac{\Delta}{2} \right) \right ],\\
h_Y(y) &= h_X(g^{-1}(y))\frac{\left [g^{-1} \left(y + \dfrac{\Delta}{2} \right) - g^{-1} \left(y - \dfrac{\Delta}{2} \right) \right ]}{\Delta},\\
h_Y(y) &\rightarrow h_X(g^{-1}(y)) (g^{-1})'(y).
\end{aligned}
$$
Setting $\Delta_X(x_j) = \Delta$ gives the other (equivalent) expression.
$$\begin{aligned}
h_X(x) \Delta &= h_Y(y) \left [g \left(x + \dfrac{\Delta}{2} \right) - g \left(x - \dfrac{\Delta}{2} \right) \right ],\\
h_Y(y) &= h_X(g^{-1}(y)) \frac{ \Delta}{g \left(x + \dfrac{\Delta}{2} \right) - g \left(x - \dfrac{\Delta}{2} \right) },\\
h_Y(y) &\rightarrow \frac{h_X(g^{-1}(y))}{g'(g^{-1}(y))}.
\end{aligned}
$$
Presumably this argument fails when Riemann sums fail and more measure theory is called for, but this line of reasoning satisfies my curiosity well enough. Specifically, the first approach, setting $\Delta_Y(y) = \Delta$ at the outset, inherits the same intuition as explained in @whuber's answer to the other question, but arrives at an expression that will match most other texts (which is desirable to me for pragmatic reasons). Of course, intuition is very personal, so YMMV.
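A quick numerical check of the first form (my own example, not part of the original argument): take $X \sim N(0,1)$ and $Y = g(X) = e^X$, so $g^{-1}(y) = \log y$ and $(g^{-1})'(y) = 1/y$, and compare a kernel density estimate of simulated $Y$ values with $h_X(g^{-1}(y))\,(g^{-1})'(y)$:
set.seed(1)
xs <- rnorm(1e5)
ys <- exp(xs)                                    # Y = g(X) = exp(X)
plot(density(ys, from = 0.05, to = 5), main = "simulated Y vs. formula")
curve(dnorm(log(x)) / x, from = 0.05, to = 5, add = TRUE, col = "red", lty = 2)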
55,578 | Why is the formula for the density of a transformed random variable expressed in terms of the derivative of the inverse? | One heuristic way to look at this is to consider the probability density as a scaled probability by considering an "infinitesimally small" region encompassing a point. For any infinitesimally small distances $\Delta_X > 0$ and $\Delta_Y > 0$ you have:
$$\begin{align}
\Delta_X \times f_X(x) &= \mathbb{P}(x \leqslant X \leqslant x + \Delta_X)
\quad \quad \quad \quad (1) \\[12pt]
\Delta_Y \times f_Y(y) &= \mathbb{P}(y \leqslant Y \leqslant y + \Delta_Y)
\quad \quad \quad \quad \ (2) \\[12pt]
\end{align}$$
Now, suppose we consider a point $y$ where $g^{-1}$ is differentiable. To facilitate our analysis, we will define the infinitesimal quantity $\Delta_X \equiv g^{-1}(y + \Delta_Y) - g^{-1}(y)$. We then have:
$$\begin{align}
f_Y(y)
&= \frac{\mathbb{P}(y \leqslant Y \leqslant y + \Delta_Y)}{\Delta_Y}
\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \text{from } (2) \\[6pt]
&= \frac{\mathbb{P}(y \leqslant g(X) \leqslant y + \Delta_Y)}{\Delta_Y} \\[6pt]
&= \frac{\mathbb{P}(g^{-1}(y) \leqslant X \leqslant g^{-1}(y + \Delta_Y))}{\Delta_Y} \\[6pt]
&= f_X(g^{-1}(y)) \times \frac{g^{-1}(y + \Delta_Y) - g^{-1}(y)}{\Delta_Y}
\quad \quad \quad \quad \text{from } (1) \\[8pt]
&= f_X(g^{-1}(y)) \times \frac{\Delta_X}{\Delta_Y} \\[12pt]
&= f_X(g^{-1}(y)) \times (g^{-1})'(y) \\[12pt]
\end{align}$$
(The step from the third to the fourth line follows from taking $x = g^{-1}(y)$ and $\Delta_X = g^{-1}(y + \Delta_Y) - g^{-1}(y)$, then applying equation $(1)$ to express the probability as a scaled density.)
Alternatively, letting $\Delta_X$ be the free infinitesimal and defining $\Delta_Y \equiv g(x+\Delta_X) - g(x)$ then we have:
$$\begin{align}
f_X(x)
&= \frac{\mathbb{P}(x \leqslant X \leqslant x + \Delta_X)}{\Delta_X}
\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \text{from } (1) \\[6pt]
&= \frac{\mathbb{P}(g(x) \leqslant g(X) \leqslant g(x + \Delta_X))}{\Delta_X} \\[6pt]
&= \frac{\mathbb{P}(g(x) \leqslant Y \leqslant g(x + \Delta_X))}{\Delta_X} \\[6pt]
&= \frac{\mathbb{P}(g(x) \leqslant Y \leqslant g(x) + \Delta_Y)}{\Delta_Y} \times \frac{\Delta_Y}{\Delta_X} \\[6pt]
&= f_Y(g(x)) \times \frac{\Delta_Y}{\Delta_X}
\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \ \ \text{from } (2) \\[8pt]
&= f_Y(g(x)) \times g'(x) \\[12pt]
\end{align}$$
Now, this argument can be tightened to give a formal demonstration of the result, but the heuristic version shows how the derivative term arises. It arises from the fact that the region $[y, y+\Delta_Y]$ for the original random variable $Y$ corresponds to the region $[g^{-1}(y), g^{-1}(y + \Delta_Y)]$ for the random variable $X$. The derivative term is just the ratio of the lengths of the latter region over the length of the former region, when $\Delta_Y$ is small.
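As a concrete check of the two final identities above (my own example, not from the original answer): take $X \sim \mathcal{U}(0,1)$ and $Y = g(X) = X^2$, so that $g^{-1}(y) = \sqrt{y}$ and $g'(x) = 2x$. Then
$$f_Y(y) = f_X(g^{-1}(y)) \times (g^{-1})'(y) = \frac{1}{2\sqrt{y}}, \qquad f_X(x) = f_Y(g(x)) \times g'(x) = \frac{1}{2x} \times 2x = 1,$$
so the two relations are consistent with $X$ being uniform and $Y$ having density $1/(2\sqrt{y})$ on $(0,1)$.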
55,579 | Convergence of uniformly distributed random variables on a sphere | In outline: one approach is to think of generating $U_n$ by generating $n$ iid standard Normals $Z_{n,1},\ldots,Z_{n,n}$ and defining
$$U_{n,i}=\frac{Z_{n,i}}{\sqrt{\sum_j Z_{n,j}^2}}$$
As $n\to\infty$, the denominator converges to its expected value (eg, by Chebyshev's inequality) and can be treated as a constant. The expected value is a multiple of $\sqrt{n}$, so rescaling any finite set of $U_{n,i}$ by $\sqrt{n}$ will asymptotically give independent Gaussians that are just multiples of the corresponding $Z_{n,i}$.
Update: the result is fairly straightforward but the implications are non-intuitive. $U_{n,1}= O_p(n^{-1/2})$, for $U_n$ uniformly distributed on $S^n$, so nearly all of the area of $S^n$ is within $O(n^{-1/2})$ of the equator for large $n$(!!).
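A minimal simulation sketch of this (my own check, with arbitrarily chosen n and number of replicates): draw points uniformly on the sphere by normalising iid normals and verify that $\sqrt{n}\,U_{n,1}$ looks standard normal:
set.seed(1)
n <- 500
m <- 10000                                # number of simulated points on the sphere
Z <- matrix(rnorm(m * n), nrow = m)       # each row: n iid N(0,1) draws
U1 <- Z[, 1] / sqrt(rowSums(Z^2))         # first coordinate of each uniform point
qqnorm(sqrt(n) * U1); abline(0, 1)        # points hug the 45-degree line
c(mean(sqrt(n) * U1), var(sqrt(n) * U1))  # roughly 0 and 1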
55,580 | Convergence of uniformly distributed random variables on a sphere | This answer is essentially similar to @Thomas Lumley's, but hopefully adds more clarity by explicitly justifying some key steps.
Let $X_n = (X_{n, 1}, \ldots, X_{n, n}) \sim N_n(0, I_{(n)})$ (i.e., $X_{n, 1}, \ldots, X_{n, n} \text{ i.i.d.} \sim N(0, 1)$), then it follows by a property of spherical distribution (see, e.g., Theorem 1.5.6 in Aspects of Multivariate Statistical Theory by Robb J. Muirhead) that $X_n/\|X_n\| \sim \text{Uniform}(S_{n - 1})$, hence
\begin{align}
\sqrt{n}(U_{n, 1}, U_{n, 2}) \overset{d}{=}
\frac{\sqrt{n}}{\|X_n\|}(X_{n, 1}, X_{n, 2})
= \frac{1}{\sqrt{\frac{X_{n, 1}^2 + \cdots + X_{n, n}^2}{n}}}(X_{n, 1}, X_{n, 2}).
\tag{1}
\end{align}
By the weak law of large numbers, $\frac{X_{n, 1}^2 + \cdots + X_{n, n}^2}{n}$ converges to $E[Z^2] = 1$ in probability, where $Z \sim N(0, 1)$, whence
$\frac{1}{\sqrt{\frac{X_{n, 1}^2 + \cdots + X_{n, n}^2}{n}}}$ converges to $1$ in probability by the continuous mapping theorem. On the other hand, $(X_{n, 1}, X_{n, 2}) \sim N_2(0, I_{(2)})$ for all $n$. It thus follows by Slutsky's theorem that
\begin{align}
\frac{1}{\sqrt{\frac{X_{n, 1}^2 + \cdots + X_{n, n}^2}{n}}}(X_{n, 1}, X_{n, 2})
\to_d N_2(0, I_{(2)}). \tag{2}
\end{align}
Combining $(1)$ and $(2)$ gives $\sqrt{n}(U_{n, 1}, U_{n, 2}) \to_d N_2(0, I_{(2)})$.
55,581 | Weights from inverse probability treatment weighting in regression model in R? | You need to be clear about the quantity you want to estimate. Most causal inference applications are concerned with the average marginal effect of the treatment on the outcome. This does not correspond to the coefficient on treatment in a logistic regression model. The way to use regression to estimate causal effects is to use g-computation, which involves fitting a model for the outcome given the treatment and covariates (and their interaction), then using this model to predict the potential outcomes under treatment and under control for all units, then taking the difference in means between those predicted potential outcomes. G-computation is consistent if the outcome model is consistent.
Below is how you would do g-computation in R:
#Fit the outcome model with an interaction between the treatment and covariates
fit <- glm(y ~ t * (x1 + x2), data = data, family = quasibinomial)
#Estimate potential outcomes under treatment
data$t <- 1
pred1 <- predict(fit, newdata = data, type = "response")
#Estimate potential outcomes under control
data$t <- 0
pred0 <- predict(fit, newdata = data, type = "response")
#Compute risk difference
mean(pred1) - mean(pred0)
The coefficient on treatment in a logistic regression of the outcome on the treatment and covariates is an estimate of the conditional effect of the treatment assuming no effect heterogeneity on the odds ratio scale. This is an extremely strong and unnecessary assumption to make. The reason the effect is the conditional rather than the marginal effect is because of the noncollapsibility of the odds ratio. This is not the case for linear regression; it is possible (by correctly parameterizing the linear model) to have the coefficient on treatment be a valid estimate of the marginal treatment effect. There must be an interaction between treatment and the mean-centered covariates for this to work.
I will assume that you are now using g-computation and want to know how you can incorporate weights into it. You can include the propensity score weights into an outcome model and then perform g-computation using that model. This method is doubly robust. Kang and Schafer (2007) call this method "Regression Estimation with Inverse-Propensity Weighted Coefficients" (regression estimation is another name for g-computation). Simply add weights to the glm() model exactly as you specified, and insert the code below into its appropriate place in the g-computation code above.
#G-computation with a weighted outcome model
fit <- glm(y ~ t * (x1 + x2), data = data, family = quasibinomial,
weights = weights)
There are many ways to produce doubly-robust estimators that are consistent if either the propensity score model or outcome model are correctly specified. The way you proposed is one, but you also have to get the conditional relationship between the outcome and propensity score correct, so simply adding it in as a covariate will not do. You need to allow it to be flexibly modeled. Using rcs() from rms is one way to do that.
library(rms)
#G-computation with PS as an additional covariate, modeled with a
#restricted cubic spline (rcs)
fit <- glm(y ~ t * (x1 + x2 + rcs(ps)), data = data, family = quasibinomial)
This model has a chance of being doubly robust.
Estimating confidence intervals is a little challenging with g-computation; in some cases, you can use the delta method/estimating equations, but the most straightforward and recommended way is to bootstrap. You need to include the propensity score estimation in the bootstrap. I demonstrate this below using the boot package.
library(boot)
boot_fun <- function(d, i) {
d_boot <- d[i,]
#Estimate the PS
ps_fit <- glm(t ~ x1 + x2, data = d_boot, family = binomial)
d_boot$ps <- fitted(ps_fit)
d_boot$ps_w <- with(d_boot, t/ps + (1-t)/(1-ps))
#Fit the weighted outcome model with an interaction between the treatment and covariates
fit <- glm(y ~ t * (x1 + x2), data = d_boot, family = quasibinomial,
weights = ps_w)
#Estimate potential outcomes under treatment
d_boot$t <- 1
pred1 <- predict(fit, newdata = d_boot, type = "response")
#Estimate potential outcomes under control
d_boot$t <- 0
pred0 <- predict(fit, newdata = d_boot, type = "response")
#Compute risk difference
mean(pred1) - mean(pred0)
}
#Effect estimate ("original" column)
(boot_fit <- boot(data, boot_fun, R = 999))
#Confidence interval (percentile)
boot.ci(boot_fit, type = "perc")
This is for the average treatment effect in the population. Things are a little different for the average treatment effect in the treated. If you're using matching, see the guide on estimating effects after matching with MatchIt here. I also recommend you check out the WeightIt package, which makes estimating weights very straightforward. Both packages have extensive documentation explaining how to estimate effects.
55,582 | Beginner Bayesian question - which statement is false? | The problem isn't that you have a false statement. It's that you're measuring the wrong quantity. It’s helpful to remember that there are two events here: birth 1 and birth 2. We care about what happens in birth 2, given what we know about birth 1.
You're finding $p(\textrm{Birth2}=\textrm{twins} \mid \textrm{speciesA})$ or $p(\textrm{Birth2}=\textrm{twins} \mid \textrm{speciesB})$. These are the posterior distributions over $\textrm{Birth2}$ given the latent (meaning unobserved) variable $\textrm{Species}$. What you really ought to do is use the posterior predictive distribution, which marginalizes out the nuisance variable $\textrm{Species}$.
$$
\begin{align}
p(\textrm{Birth2}=\textrm{twins} \mid \textrm{Birth1}=\textrm{twins}) &= \sum_{s \in \textrm{Species}} \underbrace{p(\textrm{Birth2}=\textrm{twins} \mid s)}_{\text{next birth given species}} \times \underbrace{p(s \mid \textrm{Birth1}=\textrm{twins})}_{\text{species given previous birth}} \\
&= p(\textrm{Birth2}=\textrm{twins} \mid a) p(a \mid \textrm{Birth1}=\textrm{twins}) + p(\textrm{Birth2}=\textrm{twins} \mid b) p(b \mid \textrm{Birth1}=\textrm{twins}) \\
&= 0.1 \times \frac{1}{3} + 0.2 \times \frac{2}{3} \\
&=\frac{1}{6} \\
&\approx 0.167
\end{align}
$$
This expression measures your belief about the second birth, given the first. It handles both cases of what $\textrm{Species}$ could be.
Note that you already have computed the relevant quantities properly by Bayes's rule; well done!
55,583 | Beginner Bayesian question - which statement is false? | You are very close to the answer, but there is a key word in the quantity that is being asked of you to calculate.
You have a new female panda of unknown species, and she has just given birth to twins. What is the probability that her next birth will also be twins?
That is, you are not interested in the evidence, $Pr(\text{twins})$ but the posterior predictive probability $Pr(X_{2} = \text{twins} \ | X_{1} = \text{twins}),$ where $X_{1}$ is the "first birth", i.e. your only observation for this panda bear, and $X_{2}$ is the second birth in the future.
In other words, calculate:
$$\sum_{\text{all species i}} Pr(X_{2} = \text{twins} \ | \ \text{species i}) \times Pr(\text{species i} \ | \ X_{1} =\text{twins}) .$$
Note that the last term in the summation is the posterior probability that you have already calculated.
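For completeness, plugging in the numbers used in the other answer to this question (twin rates of 0.1 and 0.2 for the two species and posterior probabilities of 1/3 and 2/3, taken as given here):
sum(c(0.1, 0.2) * c(1/3, 2/3))   # posterior predictive Pr(next birth is twins) = 1/6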
55,584 | Minimum sample size required in paired t-tests and statistic significance | What is the minimum sample size depends on the question: "Minimum sample size to accomplish what?".
A paired-t test can be done on as few as 2 pairs if the only goal is to be able to do some computations and get an answer (and you do not care about the quality of the answer).
If the question is how big a sample size do you need for the Central Limit Theorem to allow you to use normal based tests like the paired-t when the population is not normal? then this depends on how non-normal your population of differences is. Intro stats classes and text books use a rule of thumb with numbers like 30, but those are not really justified other than keeping things simple in an introductory class. In some cases 6 is big enough, in other cases 10,000 is not big enough. An important thing to remember is that for the paired test it is the amount of skewness/outliers in the differences, not the original values that will be important. This is one of the reasons for using paired tests.
One question that I don't see asked or answered in your description is how much time before and after are you going to measure water consumption for? I would expect much more normality and lower variability in the average daily usage for 3 months worth of data before and 3 months after compared to a single day before and after.
If your question is the minimum sample size to have a certain power to detect a given effect size, then this really depends on the effect size that you want to see and how much variation you expect (the standard deviation of the differences). If you have no idea what these may be then you need to either do some more research, talk with an expert, or do a pilot study of some sort (or better, all 3). Think about what effect size will be meaningful, a large enough study could show a reduction in water consumption of 1 table spoon, but I doubt many people would care about that small of a change.
If you can get some information on current water consumption, then one approach to explore some of these issues is to simulate some data based on the data you can get and some assumptions about what may change (try different effect sizes, etc.), then analyze your simulated data to see if it gives you meaningful results (confidence intervals are precise enough to be useful, power, etc.).
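For example, a minimal simulation sketch along those lines (every number below is a made-up assumption, to be replaced by whatever pilot data or expert judgement suggest):
set.seed(1)
n_households <- 50
sim_once <- function(saving = 10, sd_before = 60, sd_change = 25) {
  before <- rnorm(n_households, mean = 300, sd = sd_before)             # assumed litres/day
  after  <- before - rnorm(n_households, mean = saving, sd = sd_change) # assumed saving
  t.test(before, after, paired = TRUE)$p.value
}
mean(replicate(2000, sim_once()) < 0.05)   # estimated power under this scenario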
Another issue to think about is seasonality of when you collect your data. Do the households in the area of interest consume different amounts of water during different seasons (if water used to water the lawn/garden is included in your measurements then this is probably a strong yes). If your before time points may differ significantly from your after time points in weather/temperature/etc. then you should make an effort to address this in your experimental design and analysis. One option would be to include another 50 "control" households that do not receive any device but are measured before and after to give an estimate of natural differences between before and after periods.
For the analysis, you can do paired-t tests, but it would probably be better to do a randomized block ANOVA design, or mixed-effects model (or a Bayesian Hierarchical model) with households as the blocks/random effects to still give you the pairing but also allow you to compare between the different machines (and control) and look at other factors.
You also ask why paired tests require smaller sample sizes than unpaired. Simply put, the sample size depends a lot on the amount of residual variation (variability after accounting for other factors); if pairing is natural, it will also reduce that residual variation. In your case you are going to have household-to-household variation (a family of 4 will likely consume much more water than a single person). In a non-paired study that household-to-household variation will be included in the residual standard deviation, but proper pairing will remove/adjust for most of it, so the paired sample size calculation will be based on a much smaller standard deviation than the unpaired equivalent.
55,585 | Minimum sample size required in paired t-tests and statistic significance | According to what I have learned there is no minimum sample size for a t-test. In fact the t-test is suitable for cases where the sample size $n$ is 3 or more.
Even $n=2$ would work.
A paired t-test on observations $\{X_{1i}\}_{i=1}^n$ and $\{X_{2i}\}_{i=1}^n$ is the same as a one-sample t-test on the differences.
You chose the t-test well, since you don't know the $\sigma$ of the population; i.e., a z-test would not work for your case unless you could somehow learn $\sigma$ exactly.
In other words, you should do a t-test as you do, where the t-distribution is the sampling distribution behind the test.
This distribution further assumes you rely on the sample SD $S$ (the standard deviation of the sample).
Since $S$ brings uncertainty, unless $n$ is big (where we usually assume $\sigma \approx S$), we decrease the degrees of freedom.
$$
\frac{\bar{x}-\mu}{\sigma / \sqrt{n}} \sim z \longrightarrow \frac{\bar{x}-\mu}{s / \sqrt{n}} \sim t_{\mathrm{n}-1}
$$
Once we calculate the sample mean $\bar X$ we can estimate the confidence interval.
$$
\bar X \pm t \frac{S}{\sqrt{n}}
$$
You can get $t$ in R via the 95% confidence interval rule:
t <- qt(0.975, df = n - 1)
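For example, with made-up numbers for the paired differences:
n <- 30; xbar <- 8.4; s <- 12.1                        # hypothetical mean and SD of the differences
xbar + c(-1, 1) * qt(0.975, df = n - 1) * s / sqrt(n)  # 95% confidence interval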
If the normal distribution assumption, in the end, turns out to be false, can I turn to Wilcoxon signed-rank in the end?
You need the data to be normally distributed, or something close to normal, if your number of samples is relatively small, say smaller than 30.
Someone said I should not be intimidated by the number 30, but for now I assume that with at least 30 samples the sample mean is close to normally distributed, based on the Central Limit Theorem.
I plan to work out soon why 30 is treated as an important number in statistics, but I don't have the power to ask more questions at the moment :).
This will be possible based on KL divergence, but for now let us say that with $n=30$ the t-distribution is close to normal.
To calculate the power I found this R code:
power.t.test(n = 20, delta = 1)
power.t.test(power = .90, delta = 1)
The first one answers what power you have with 20 observations per group, and the second how many observations you need to reach a power of 0.9.
Here delta is the true difference in means that you want to be able to detect; together with sd (which defaults to 1) it determines the standardized effect size, so the calls above assume an effect size of 1.
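A hypothetical call for the paired design in this question (the assumed saving and SD of the differences are illustrative, not known values) asks power.t.test to solve for the number of pairs:
power.t.test(delta = 5, sd = 12, sig.level = 0.05, power = 0.80,
             type = "paired", alternative = "two.sided")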
So with 30+ samples you will have the normal distribution assumption, no need for rank tests.
why are sample sizes in paired t-tests much lower when comparing to tests like two-way ANOVA (for example)? I see paired t-tests of size 30 while two-way ANOVA (with a control group) is around >200
Simplified, ANOVA is the same idea as the t-test but for comparing 3 or more groups. So if you see some strange results, you may share the R or Python code to replicate them.
Ref
55,586 | Minimum sample size required in paired t-tests and statistic significance | Note: The question was subjected to multiple rounds of edits, when other answers were made in between. This answer was made after Edit 2 was posted, and refrains from dealing with the part on Wilcoxon and ANOVA as it is unlikely to add to what existing answers have.
In the world of experiment design involving $t$-tests, one will need to have a rough idea on what the following five things may be:
The desired significance level ($\alpha$)
The desired test power ($\pi_{\min}$)
The effect size (in real terms, $\theta = \textrm{consumption}_{\textrm{after}} - \textrm{consumption}_{\textrm{before}}$)
The spread of the responses ($\sigma^2$ - it can be the pooled variance); and
The sample size ($n$)
In practice, given the (rough) formula for determining minimum sample size assuming normality assumptions and/or CLT practically apply [1]:
$$n_{\min} = \left(\frac{z_{1-\alpha} - z_{1-\pi_{\min}}}{\theta}\right)^2 \sigma^2,$$
where $z_{q}$ is the $q$th quantile of a standard normal, if you specify four of the five quantities above, you are basically constrained on the one left. Usually, $\alpha$ and $\pi_{\min}$ are assumed to take certain values (0.05 and 0.8 in my field), and you mentioned the sample size is more or less fixed. This leaves the effect size and the spread as unknowns.
You then ask:
Since I have no idea how much will the water consumption mean will be in the end how can I do this? Maybe there are some pilot studies where I can get a standard deviation estimation, but is that enough?
which suggests to me that it is easier for you to estimate the variance / standard deviation than the effect size. Furthermore, I (as a layman in water technologies) would imagine it is easier to influence how much water a device can save on average than how spread out the water savings are.
Thus, if you can get a standard deviation estimate, the formula will be able to tell you what effect size you will need to obtain a statistically significant result. (A side note that here my effect size is in real terms, i.e. average number of litres the water device can save a day, instead of Cohen's d, which is quoted in your question.) I personally will try and vary the estimate in both directions a bit and see how that affects the effect size.
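For instance, a rough sketch of that rearrangement in R (the SD of the paired differences below is a placeholder to be replaced by a pilot estimate; note the formula above is one-sided):
alpha <- 0.05; power <- 0.80; n <- 50
sigma <- 12                                              # guessed SD of the paired differences
(qnorm(1 - alpha) - qnorm(1 - power)) * sigma / sqrt(n)  # smallest effect size detectable
power.t.test(n = n, sd = sigma, sig.level = alpha, power = power,
             type = "paired", alternative = "one.sided")$delta  # comparable answer via the t distribution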
This leads back to your key question:
Is a sample size of 50 enough for the paired t-tests?
Look at the effect size that comes out from above - is that a realistic amount of water your machine can save on average? If so, yes.
If not, i.e. you are expecting a smaller effect size, you might need to consider:
Having more samples (which you said is pretty much constrained);
Settling for a lower test power (i.e. have a lower chance to see a significant result if there is indeed a saving);
Choosing a higher significance level (i.e. reject H_0 when p<0.1 instead of 0.05, risking more false positives); or
Praying and hoping the test subjects' water usage behaviour (and hence the water savings) are more consistent, reducing the spread of the responses.
All of the above are just ways to balance the system / equation showing the relationship between the five quantities. The key takeaway is that sample size is not the only consideration when it comes to designing experiments, though it is often the most easily manipulatable parameter.
[1] From background material of one of my previous work (Section 3) - unfortunately I was unable to get it out quick enough and hence it remains as a pre-print: https://arxiv.org/pdf/1803.06258.pdf
Note: The question was subjected to multiple round of edits, when other answers were made in between. This answer is made after Edit 2 was posted, and refrained from dealing with the part on Wilcoxon and ANOVA as it is unlikely to add on what existing answers have.
In the world of experiment design involving $t$-tests, one will need to have a rough idea on what the following five things may be:
The desired significance level ($\alpha$)
The desired test power ($\pi_{\min}$)
The effect size (in real terms, $\theta = \textrm{consumption}_{\textrm{after}} - \textrm{consumption}_{\textrm{before}}$)
The spread of the responses ($\sigma^2$ - it can be the pooled variance); and
The sample size ($n$)
In practice, given the (rough) formula for determining minimum sample size assuming normality assumptions and/or CLT practically apply [1]:
$$n_{\min} = \left(\frac{z_{1-\alpha} - z_{1-\pi_{\min}}}{\theta}\right)^2 \sigma^2,$$
where $z_{q}$ is the $q$th quantile of a standard normal, if you specify four of the five quantities above, you are basically constrained on the one left. Usually, $\alpha$ and $\pi_{\min}$ is assumed to be of certain value (0.05 and 0.8 in my field), and you mentioned the sample size is more or less fixed. This leaves the effect size and the spread as unknowns.
You then ask:
Since I have no idea how much will the water consumption mean will be in the end how can I do this? Maybe there are some pilot studies where I can get a standard deviation estimation, but is that enough?
which suggests to me that it is easier for you to estimate the variance / standard deviation than the effect size. Furthermore, I (as a layman in water technologies) would imagine it is easier to influence how much water a device can save on average than how spread out the water savings are.
Thus, if you can get a standard deviation estimate, the formula will tell you what effect size you will need to obtain a statistically significant result. (As a side note, the effect size here is in real terms, i.e. the average number of litres the water device can save per day, rather than Cohen's d, which is quoted in your question.) I would personally vary the estimate in both directions a bit and see how that affects the required effect size.
This leads back to your key question:
Is a sample size of 50 enough for the paired t-tests?
Look at the effect size that comes out from above - is that a realistic amount of water your machine can save on average? If so, yes.
If not, i.e. you are expecting a smaller effect size, you might need to consider:
Having more samples (which you said is pretty much constrained);
Settling for a lower test power (i.e. have a lower chance to see a significant result if there is indeed a saving);
Choosing a higher significance level (i.e. reject H_0 when p<0.1 instead of 0.05, risking more false positives); or
Praying and hoping the test subjects' water usage behaviour (and hence the water savings) are more consistent, reducing the spread of the responses.
All of the above are just ways to balance the system / equation showing the relationship between the five quantities. The key takeaway is that sample size is not the only consideration when it comes to designing experiments, though it is often the most easily manipulatable parameter.
[1] From background material of one of my previous work (Section 3) - unfortunately I was unable to get it out quick enough and hence it remains as a pre-print: https://arxiv.org/pdf/1803.06258.pdf | Minimum sample size required in paired t-tests and statistic significance
Note: The question was subjected to multiple round of edits, when other answers were made in between. This answer is made after Edit 2 was posted, and refrained from dealing with the part on Wilcoxon |
55,587 | Paired and repeated measures! Now what? ANOVA or mixed model? | This is basically an analysis of change model.
2 measurements on each subject were taken at baseline, and 2 more at follow-up. I will refrain from calling this "control" and "intervention" as that could be somewhat misleading.
We have repeated measures within patients. So we could consider a model that fits random intercepts for patients, to control for this. There are also repeated measures within each kidney of each patient. I would suggest the following model:
measure ~ time + LR + (1 | PatientID)
In order to fit this model, it is necessary to "reshape" the data as follows:
PatientID time LR measure
1 -0.5 L 19
1 -0.5 R 29
1 0.5 L 27
1 0.5 R 20
2 -0.5 L 14
2 -0.5 R 13
2 0.5 L 13
2 0.5 R 11
The estimate for time will answer the research question: What is the change in measure associated with the intervention, while controlling for the repeated measures within patients, and within each kidney of each patient.
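A minimal sketch of how this model could be fitted with lme4 in R, assuming dat is the reshaped data frame shown above:
library(lme4)
m1 <- lmer(measure ~ time + LR + (1 | PatientID), data = dat)
summary(m1)  # the coefficient on `time` estimates the change associated with the intervention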
Another approach is to fit nested random effects, and treat LR as a random factor:
measure ~ time + (1 | PatientID/LR) | Paired and repeated measures! Now what? ANOVA or mixed model? | This is basically an analysis of change model.
2 measurements on each subject were taken at baseline, and 2 more at follow-up. I will refrain from calling this "control" and "intervention" as that cou | Paired and repeated measures! Now what? ANOVA or mixed model?
This is basically an analysis of change model.
2 measurements on each subject were taken at baseline, and 2 more at follow-up. I will refrain from calling this "control" and "intervention" as that could be somewhat misleading.
We have repeated measures within patients. So we could consider a model that fits random intercepts for patients, to control for this. There are also repeated measures within each kidney of each patient. I would suggest the following model:
measure ~ time + LR + (1 | PatientID)
In order to fit this model, it is necessary to "reshape" the data as follows:
PatientID time LR measure
1 -0.5 L 19
1 -0.5 R 29
1 0.5 L 27
1 0.5 R 20
2 -0.5 L 14
2 -0.5 R 13
2 0.5 L 13
2 0.5 R 11
The estimate for time will answer the research question: What is the change in measure associated with the intervention, while controlling for the repeated measures within patients, and within each kidney of each patient.
Another approach is to fit nested random effects, and treat LR as a random factor:
measure ~ time + (1 | PatientID/LR) | Paired and repeated measures! Now what? ANOVA or mixed model?
This is basically an analysis of change model.
2 measurements on each subject were taken at baseline, and 2 more at follow-up. I will refrain from calling this "control" and "intervention" as that cou |
55,588 | Trimmed, weighted mean | This is even more complicated than you think.
Let's start with sampling weights: the data are sampled from a larger population and $w_i$ is the reciprocal of the sampling probability for observation $i$.
Now, it could be that you have a 'gross error contamination' model: units are sampled from the population and measured, and something goes wrong with the measurement process sometimes. In that case, the error contamination happens at measurement time, after sampling, and your trimming shouldn't depend on the weights. You'll just need to take account of the discarded weights to rescale. If you sort by measured $x$, then
$$\bar X = \frac{\sum_{2}^{n-1} w_ix_i}{\sum_2^{n-1} w_i}$$
On the other hand, you might just have a model that says $x$ is long-tailed, so that you want to reduce the impact of (correctly measured) outliers. In that case, the trimming should happen to the population variable and should depend on the weights. You still want to sort by $x_i$, but
$$\bar X = \frac{\sum_{2}^{n-1} w^*_ix_i}{\sum_2^{n-1} w^*_i}$$
where $w^*_i$ are defined to remove the first 1/nth and last 1/nth from the weights. If $w_1$ (after ordering by $x$) is greater than $1/n$, $w^*_1=w_1-1/n$, otherwise $w^*_1=0$ and $w^*_2$ gets reduced as well. You're estimating the population $\alpha$-trimmed mean functional $\int_\alpha^{1-\alpha} x\, d\mathbb{F}(x)$, with $\alpha=1/n$.
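As an illustration only (not code from any package), here is one way to implement this second estimator in R, assuming the weights can be normalised to sum to 1 and trimming a fraction $\alpha = 1/n$ of the weight mass from each tail after sorting by $x$:
weighted_trimmed_mean <- function(x, w, alpha = 1 / length(x)) {
  o  <- order(x)                   # sort by the measured x
  x  <- x[o]
  w  <- w[o] / sum(w)              # normalise the weights
  hi <- cumsum(w)                  # upper end of each observation's weight interval
  lo <- c(0, head(hi, -1))         # lower end
  # keep only the part of each weight interval lying inside [alpha, 1 - alpha]
  w_star <- pmax(0, pmin(hi, 1 - alpha) - pmax(lo, alpha))
  sum(w_star * x) / sum(w_star)
}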
With frequency weights (as in @gung's comment), the 'gross error' idea doesn't really make sense. If you have $w_i$ identical observations it's unlikely that they are contaminated by gross errors. Errors would be more likely to give you single observations. Even the model of long-tailed $x$ is a bit strange with frequency weights, since long tails will tend to give you unique observations in the tails. If you did have frequency weights, you'd probably want to treat them like the second case of sampling weights, but you'd also probably want to look carefully at what was going on.
Precision weights make more sense, but there you might not need to trim, since you're already giving less weight to observations that deserve less weight. If you did want to trim, you would probably want the trimming fraction to depend on the weight, so that a high-weight observation needed to be more extreme to get trimmed. | Trimmed, weighted mean | This is even more complicated than you think.
Let's start with sampling weights: the data are sampled from a larger population and $w_i$ is the reciprocal of the sampling probability for observation $ | Trimmed, weighted mean
This is even more complicated than you think.
Let's start with sampling weights: the data are sampled from a larger population and $w_i$ is the reciprocal of the sampling probability for observation $i$.
Now, it could be that you have a 'gross error contamination' model: units are sampled from the population and measured, and something goes wrong with the measurement process sometimes. In that case, the error contamination happens at measurement time, after sampling, and your trimming shouldn't depend on the weights. You'll just need to take account of the discarded weights to rescale. If you sort by measured $x$, then
$$\bar X = \frac{\sum_{2}^{n-1} w_ix_i}{\sum_2^{n-1} w_i}$$
On the other hand, you might just have a model that says $x$ is long-tailed, so that you want to reduce the impact of (correctly measured) outliers. In that case, the trimming should happen to the population variable and should depend on the weights. You still want to sort by $x_i$, but
$$\bar X = \frac{\sum_{2}^{n-1} w^*_ix_i}{\sum_2^{n-1} w^*_i}$$
where $w^*_i$ are defined to remove the first 1/nth and last 1/nth from the weights. If $w_1$ (after ordering by $x$) is greater than $1/n$, $w^*_1=w_1-1/n$, otherwise $w^*_1=0$ and $w^*_2$ gets reduced as well. You're estimating the population $\alpha$-trimmed mean functional $\int_\alpha^{1-\alpha} x\, d\mathbb{F}(x)$, with $\alpha=1/n$.
With frequency weights (as in @gung's comment), the 'gross error' idea doesn't really make sense. If you have $w_i$ identical observations it's unlikely that they are contaminated by gross errors. Errors would be more likely to give you single observations. Even the model of long-tailed $x$ is a bit strange with frequency weights, since long tails will tend to give you unique observations in the tails. If you did have frequency weights, you'd probably want to treat them like the second case of sampling weights, but you'd also probably want to look carefully at what was going on.
Precision weights make more sense, but there you might not need to trim, since you're already giving less weight to observations that deserve less weight. If you did want to trim, you would probably want the trimming fraction to depend on the weight, so that a high-weight observation needed to be more extreme to get trimmed. | Trimmed, weighted mean
This is even more complicated than you think.
Let's start with sampling weights: the data are sampled from a larger population and $w_i$ is the reciprocal of the sampling probability for observation $ |
55,589 | theoretical confidence interval depending on sample size [closed] | Nice experiment. The blue lines will be at $\mu \pm z_{\alpha/2} \sigma/\sqrt{n}$ where $\alpha = 0.05$ and $\alpha \mapsto z_\alpha$ is the upper quantile function of the standard normal and $n$ is sample_size_. In this case, this is exact since you are generating from the normal distribution and the sample mean will have the distribution $N(\mu, \sigma^2/n)$. In general, it holds approximately based on the CLT for large $n$.
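For reference, the blue lines can be computed directly like this (mu, sigma and ns below are illustrative stand-ins for the mean, sd and sample sizes used in your script):
mu <- 0; sigma <- 1
ns <- seq(10, 1000, by = 10)
upper <- mu + qnorm(0.975) * sigma / sqrt(ns)
lower <- mu - qnorm(0.975) * sigma / sqrt(ns)
# e.g. lines(ns, upper, col = "blue"); lines(ns, lower, col = "blue")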
If you are trying to interpret the CI, that would be a bit different. Putting a CI around each of the points vertically, for each sample_size_, roughly 95% of those intervals will cross the dotted red line (you cover the true mean $\approx$ 95% of the time, hence the coverage probability is 0.95). Since your replication size is small (10), this is not that accurate. Try increasing sample_number_ to say 1000 to see it better. What you will observe is that although the length of the CIs will be smaller as you increase $n$, still $\approx$ 95% of them keep covering the red line no matter what $n$ is.
PS. I am assuming that instead of these two lines:
population <- rnorm(10000000, mean=mean, sd=sd)
sample_ <- sample(population, sample_size_)
you would do something like this:
sample_ <- rnorm(sample_size_, mean=mean, sd=sd)
That is, sample from the normal distribution, not a finite population drawn from the normal distribution as you are doing here, although the results will be close in your case (unless your sample size starts approaching 10000000). | theoretical confidence interval depending on sample size [closed] | Nice experiment. The blue lines will be at $\mu \pm z_{\alpha/2} \sigma/\sqrt{n}$ where $\alpha = 0.05$ and $\alpha \mapsto z_\alpha$ is the upper quantile function of the standard normal and $n$ is | theoretical confidence interval depending on sample size [closed]
Nice experiment. The blue lines will be at $\mu \pm z_{\alpha/2} \sigma/\sqrt{n}$ where $\alpha = 0.05$ and $\alpha \mapsto z_\alpha$ is the upper quantile function of the standard normal and $n$ is sample_size_. In this case, this is exact since you are generating from the normal distribution and the sample mean will have the distribution $N(\mu, \sigma^2/n)$. In general, it holds approximately based on the CLT for large $n$.
If you are trying to interpret the CI, that would be a bit different. Putting a CI around each of the points vertically, for each sample_size_, roughly 95% of those intervals will cross the dotted red line (you cover the true mean $\approx$ 95% of the time, hence the coverage probability is 0.95). Since your replication size is small (10), this is not that accurate. Try increasing sample_number_ to say 1000 to see it better. What you will observe is that although the length of the CIs will be smaller as you increase $n$, still $\approx$ 95% of them keep covering the red line no matter what $n$ is.
PS. I am assuming that instead of these two lines:
population <- rnorm(10000000, mean=mean, sd=sd)
sample_ <- sample(population, sample_size_)
you would do something like this:
sample_ <- rnorm(sample_size_, mean=mean, sd=sd)
That is, sample from the normal distribution, not a finite population drawn from the normal distribution as you are doing here, although the results will be close in your case (unless your sample size starts approaching 10000000). | theoretical confidence interval depending on sample size [closed]
Nice experiment. The blue lines will be at $\mu \pm z_{\alpha/2} \sigma/\sqrt{n}$ where $\alpha = 0.05$ and $\alpha \mapsto z_\alpha$ is the upper quantile function of the standard normal and $n$ is |
55,590 | ANOVA three group test is significant but the difference is small | The hypothesis test is doing exactly what it claims to be able to do: it is flagging to the investigator that there is an unusually high F-stat, too high for the null hypothesis to be believable.
Armed with that information, the investigator is allowed to conclude, "That is not enough of a difference to be interesting. There is no clinical significance."
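As a toy illustration of why this can happen (simulated numbers, not your data): with enough observations per group, a difference far too small to matter clinically still yields a very small p-value.
set.seed(42)
g <- factor(rep(c("A", "B", "C"), each = 50000))
y <- rnorm(150000, mean = rep(c(100, 100.5, 100.2), each = 50000), sd = 10)
summary(aov(y ~ g))   # tiny p-value despite a clinically negligible 0.5-unit spread of means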
"Practical significance" of the “effect size” is a good general term for this that you will find in the literature and here on Cross Validated. In your field of medicine, clinical significance would be a fine specific term. | ANOVA three group test is significant but the difference is small | The hypothesis test is doing exactly what it claims to be able to do: it is flagging to the investigator that there is an unusually high F-stat, too high for the null hypothesis to be believable.
Arme | ANOVA three group test is significant but the difference is small
The hypothesis test is doing exactly what it claims to be able to do: it is flagging to the investigator that there is an unusually high F-stat, too high for the null hypothesis to be believable.
Armed with that information, the investigator is allowed to conclude, "That is not enough of a difference to be interesting. There is no clinical significance."
"Practical significance" of the “effect size” is a good general term for this that you will find in the literature and here on Cross Validated. In your field of medicine, clinical significance would be a fine specific term. | ANOVA three group test is significant but the difference is small
The hypothesis test is doing exactly what it claims to be able to do: it is flagging to the investigator that there is an unusually high F-stat, too high for the null hypothesis to be believable.
Arme |
55,591 | $X^n$ where $X$ is normally distributed? | Let $Z_n := Z := g(X) := n X^n$. The relation between $Z$ and $Y$ is quite simple (one is a scaled version of the other). Let's figure out the distribution of $Z$. Assume for simplicity that $n$ is odd. Then, $g^{-1}(z) = (z/n)^{1/n}$. Hence, $|{g^{-1}}'(z)| = \frac1n (|z|/n)^{1/n-1}$.
It follows that the density of $Z$ is
$$
f_{Z_n}(z) = f_X(g^{-1}(z)) |{g^{-1}}'(z)| = \frac{n^{1/n}}{\sqrt{2\pi}} |z|^{\frac1n-1} e^{-(z^{1/n} n^{-1/n} - \mu)^2/(2\sigma^2)}
$$
Since $n^{1/n} \to 1$ and $|z|^{1/n} \to 1$ for all $z \neq 0$, the density pointwise converges to
$$
f_{Z_n}(z) \to \frac{1}{2\pi} e^{-(1-\mu)^2/2(\sigma)^2} |z|^{-1}, \quad z \neq 0
$$
which might be a good approximation for large $n$ (EDIT: it turns out it is not in general! See the edit.). Note, however, that this limit is not a density since it does not integrate to something finite. (Formally, one might be able to show that the distribution of $Z_n$ converges to a point mass at zero, in "some" sense. However, there is some mass that escapes to infinity, anything above 1 basically. So needs some care and the result might not be true. Informally, the limiting distribution is a mixture of a point mass at 0 and two point masses at $\pm \infty$ for $n$ odd.)
EDIT: Assume $n$ is odd. To clarify the situation, Let $X_1 := X\cdot 1_{\{|X| \le 1\}}$ and $X_2 := X\cdot 1_{\{|X| > 1\}}$ where $1_{\{|X| \le 1\}}$ is 1 if $|X| \le 1$ and zero otherwise and similarly for the other indicator function. We have $X = X_1 + X_2$ with $|X_1| \le 1$ almost surely and $|X_2| > 1$ almost surely. It is not hard to see that
$$
Y_n := X^n = (X_1 + X_2)^n = X_1^n + X_2^n.
$$
with $|X_1^n| \le 1$ and $|X_2^n| > 1$ a.s.
Assuming that $X$ has a continuous distribution (so that $\mathbb P(X=1) = 0$), it is not hard to see that $X_1^n$ converges in distribution to a point mass at 0. However, $X_2^n$ does not converge in distribution. In fact, it should be straightforward to show that
$$ P(X_2^n \in (0,t)) \to 0, \quad \text{as}\; n\to \infty$$ for any finite $t > 0$. This is what can be informally described as "the mass in the distribution of $X_2^n$ is escaping to infinity".
Since we also have $\mathbb P( X_1^n \in (s,\infty)) \to 0$ for any $s > 0$, it follows that $\mathbb P(Y_n \in (s,t)) \to 0$ for any $t > s > 0$. The only intervals that will have positive mass in the limit are those that contain 0. That is, if $I$ is an interval with $0 \in I$, then
$$
P(Y_n \in I) \to P(|X| \le 1), \quad \text{as}\; n \to \infty.
$$
Otherwise (that is, if $0 \notin I$), we have $\mathbb P(Y_n \in I) \to 0$. | $X^n$ where $X$ is normally distributed? | Let $Z_n := Z := g(X) := n X^n$. The relation between $Z$ and $Y$ is quite simple (one is a scaled version of the other). Let's figure out the distribution of $Z$. Assume for simplicity that $n$ is od | $X^n$ where $X$ is normally distributed?
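A quick simulation sketch of this limit (mu, sigma, the exponent and the interval below are arbitrary choices for illustration):
set.seed(1)
mu <- 0.3; sigma <- 1; n_exp <- 201               # a large odd exponent
x <- rnorm(1e6, mean = mu, sd = sigma)
y <- x^n_exp                                      # Y_n = X^n (overflow to +/-Inf is harmless here)
c(P_Y_in_minus2_2 = mean(y > -2 & y < 2),         # interval containing 0
  P_absX_le_1     = mean(abs(x) <= 1))            # these two should be close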
Let $Z_n := Z := g(X) := n X^n$. The relation between $Z$ and $Y$ is quite simple (one is a scaled version of the other). Let's figure out the distribution of $Z$. Assume for simplicity that $n$ is odd. Then, $g^{-1}(z) = (z/n)^{1/n}$. Hence, $|{g^{-1}}'(z)| = \frac1n (|z|/n)^{1/n-1}$.
It follows that the density of $Z$ is
$$
f_{Z_n}(z) = f_X(g^{-1}(z)) |{g^{-1}}'(z)| = \frac{n^{1/n}}{\sqrt{2\pi}} |z|^{\frac1n-1} e^{-(z^{1/n} n^{-1/n} - \mu)^2/(2\sigma^2)}
$$
Since $n^{1/n} \to 1$ and $|z|^{1/n} \to 1$ for all $z \neq 0$, the density pointwise converges to
$$
f_{Z_n}(z) \to \frac{1}{2\pi} e^{-(1-\mu)^2/2(\sigma)^2} |z|^{-1}, \quad z \neq 0
$$
which might be a good approximation for large $n$ (EDIT: it turns out it is not in general! See the edit.). Note, however, that this limit is not a density since it does not integrate to something finite. (Formally, one might be able to show that the distribution of $Z_n$ converges to a point mass at zero, in "some" sense. However, there is some mass that escapes to infinity, anything above 1 basically. So needs some care and the result might not be true. Informally, the limiting distribution is a mixture of a point mass at 0 and two point masses at $\pm \infty$ for $n$ odd.)
EDIT: Assume $n$ is odd. To clarify the situation, Let $X_1 := X\cdot 1_{\{|X| \le 1\}}$ and $X_2 := X\cdot 1_{\{|X| > 1\}}$ where $1_{\{|X| \le 1\}}$ is 1 if $|X| \le 1$ and zero otherwise and similarly for the other indicator function. We have $X = X_1 + X_2$ with $|X_1| \le 1$ almost surely and $|X_2| > 1$ almost surely. It is not hard to see that
$$
Y_n := X^n = (X_1 + X_2)^n = X_1^n + X_2^n.
$$
with $|X_1^n| \le 1$ and $|X_2^n| > 1$ a.s.
Assuming that $X$ has a continuous distribution (so that $\mathbb P(X=1) = 0$), it is not hard to see that $X_1^n$ converges in distribution to a point mass at 0. However, $X_2^n$ does not converge in distribution. In fact, it should be straightforward to show that
$$ P(X_2^n \in (0,t)) \to 0, \quad \text{as}\; n\to \infty$$ for any finite $t > 0$. This is what can be informally described as "the mass in the distribution of $X_2^n$ is escaping to infinity".
Since we also have $\mathbb P( X_1^n \in (s,\infty)) \to 0$ for any $s > 0$, it follows that $\mathbb P(Y_n \in (s,t)) \to 0$ for any $t > s > 0$. The only intervals that will have positive mass in the limit are those that contain 0. That is, if $I$ is an interval with $0 \in I$, then
$$
P(Y_n \in I) \to P(|X| \le 1), \quad \text{as}\; n \to \infty.
$$
Otherwise (that is, if $0 \notin I$), we have $\mathbb P(Y_n \in I) \to 0$. | $X^n$ where $X$ is normally distributed?
Let $Z_n := Z := g(X) := n X^n$. The relation between $Z$ and $Y$ is quite simple (one is a scaled version of the other). Let's figure out the distribution of $Z$. Assume for simplicity that $n$ is od |
55,592 | $X^n$ where $X$ is normally distributed? | Far from an answer in general, but there is a formula for $\text{E}[X^n]$ if $\mu=0$. Then we have
$$
\text{E}[\text{X}^n] = \begin{cases}0\,, & n \text{ odd }\\ \sigma^n(n-1)(n-3)\cdot\ldots\cdot 1\,, & n \text{ even }\end{cases}
$$ | $X^n$ where $X$ is normally distributed? | Far from an answer in general, but there is a formula for $\text{E}[X^n]$ if $\mu=0$. Then we have
$$
\text{E}[\text{X}^n] = \begin{cases}0\,, & n \text{ odd }\\ \sigma^n(n-1)(n-3)\cdot\ldots\cdot 1 | $X^n$ where $X$ is normally distributed?
Far from an answer in general, but there is a formula for $\text{E}[X^n]$ if $\mu=0$. Then we have
$$
\text{E}[\text{X}^n] = \begin{cases}0\,, & n \text{ odd }\\ \sigma^n(n-1)(n-3)\cdot\ldots\cdot 1\,, & n \text{ even }\end{cases}
$$ | $X^n$ where $X$ is normally distributed?
Far from an answer in general, but there is a formula for $\text{E}[X^n]$ if $\mu=0$. Then we have
$$
\text{E}[\text{X}^n] = \begin{cases}0\,, & n \text{ odd }\\ \sigma^n(n-1)(n-3)\cdot\ldots\cdot 1 |
55,593 | Does the distribution $f(x) \propto (1-x^2)^{n/2}$ have a name? | It is known as a Power semi-circle distribution with pdf $f(x)$:
$$f(x) = \frac{1}{\sqrt{\pi }}\frac{\Gamma (\theta +2) }{ \Gamma \left(\theta +\frac{3}{2}\right)} \sqrt{1-x^2}^{2 \theta +1} \quad \quad \text{for } -1 < x < 1$$
... where shape parameter $\theta > -\frac{3}{2}$, and where your parameter $n = 2 \theta + 1$.
It nests a number of known distributions including:
ArcSine(-1,1) $\quad$ if $\theta = -1$
Uniform(-1,1) $\quad$ if $\theta = -\frac12$
Semicircle(-1,1) $\quad$ if $\theta = 0$
Epanechnikov kernel $\quad$ if $\theta = \frac12$
Bi-weight kernel $\quad$ if $\theta = \frac32$
Tri-weight kernel $\quad$ if $\theta = \frac52$
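As a quick numerical sketch (the density below is coded straight from the formula above, not taken from any package), one can check that it integrates to 1 for several values of $\theta$:
dpowersemi <- function(x, theta) {
  ifelse(abs(x) < 1,
         gamma(theta + 2) / (sqrt(pi) * gamma(theta + 1.5)) * (1 - x^2)^(theta + 0.5),
         0)
}
# each of these should return (approximately) 1
sapply(c(-0.5, 0, 0.5, 1.5, 2.5),
       function(th) integrate(dpowersemi, -1, 1, theta = th)$value)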
A reference is:
Kingman, J. F. C. (1963), Random walks with spherical symmetry, Acta Mathematica, 109(1), 11-53. | Does the distribution $f(x) \propto (1-x^2)^{n/2}$ have a name? | It is known as a Power semi-circle distribution with pdf $f(x)$:
$$f(x) = \frac{1}{\sqrt{\pi }}\frac{\Gamma (\theta +2) }{ \Gamma \left(\theta +\frac{3}{2}\right)} \sqrt{1-x^2}^{2 \theta +1} \quad \qu | Does the distribution $f(x) \propto (1-x^2)^{n/2}$ have a name?
It is known as a Power semi-circle distribution with pdf $f(x)$:
$$f(x) = \frac{1}{\sqrt{\pi }}\frac{\Gamma (\theta +2) }{ \Gamma \left(\theta +\frac{3}{2}\right)} \sqrt{1-x^2}^{2 \theta +1} \quad \quad \text{for } -1 < x < 1$$
... where shape parameter $\theta > -\frac{3}{2}$, and where your parameter $n = 2 \theta + 1$.
It nests a number of known distributions including:
ArcSine(-1,1) $\quad$ if $\theta = -1$
Uniform(-1,1) $\quad$ if $\theta = -\frac12$
Semicircle(-1,1) $\quad$ if $\theta = 0$
Epanechnikov kernel $\quad$ if $\theta = \frac12$
Bi-weight kernel $\quad$ if $\theta = \frac32$
Tri-weight kernel $\quad$ if $\theta = \frac52$
A reference is:
Kingman, J. F. C. (1963), Random walks with spherical symmetry, Acta Mathematica, 109(1), 11-53. | Does the distribution $f(x) \propto (1-x^2)^{n/2}$ have a name?
It is known as a Power semi-circle distribution with pdf $f(x)$:
$$f(x) = \frac{1}{\sqrt{\pi }}\frac{\Gamma (\theta +2) }{ \Gamma \left(\theta +\frac{3}{2}\right)} \sqrt{1-x^2}^{2 \theta +1} \quad \qu |
55,594 | Does the distribution $f(x) \propto (1-x^2)^{n/2}$ have a name? | This distribution is a scaled and shifted beta distribution. This can be seen by rewriting $t=0.5+0.5x$ or $x = 2t-1$ such that $1-x^2 = 4 t(1-t)$ | Does the distribution $f(x) \propto (1-x^2)^{n/2}$ have a name? | This distribution is a scaled and shifted beta distribution. This can be seen by rewriting $t=0.5+0.5x$ or $x = 2t-1$ such that $1-x^2 = 4 t(1-t)$ | Does the distribution $f(x) \propto (1-x^2)^{n/2}$ have a name?
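A quick numerical check of this claim in R (n = 5 is just an example): after normalising, $(1-x^2)^{n/2}$ on $(-1,1)$ coincides with the density of $2T-1$ where $T\sim\text{Beta}(n/2+1,\,n/2+1)$.
n <- 5
f_unnorm <- function(x) (1 - x^2)^(n / 2)
const    <- integrate(f_unnorm, -1, 1)$value
x        <- seq(-0.99, 0.99, by = 0.01)
# difference between the normalised density and the rescaled Beta density
max(abs(f_unnorm(x) / const - dbeta((x + 1) / 2, n / 2 + 1, n / 2 + 1) / 2))
# effectively zero, up to numerical integration error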
This distribution is a scaled and shifted beta distribution. This can be seen by rewriting $t=0.5+0.5x$ or $x = 2t-1$ such that $1-x^2 = 4 t(1-t)$ | Does the distribution $f(x) \propto (1-x^2)^{n/2}$ have a name?
This distribution is a scaled and shifted beta distribution. This can be seen by rewriting $t=0.5+0.5x$ or $x = 2t-1$ such that $1-x^2 = 4 t(1-t)$ |
55,595 | Intuition - Uncountable sum of zeros | The problem here is that there isn't really any concept of an "uncountable sum" for you to have an intuition about! Summation is initially defined as a binary operation, then extended to finite sums by induction, and then extended to countable sums by taking limits. That is as far as it goes. The closest analogy we have to a "sum" over an uncountable set is the integral. We can make some valid probability statements involving the integral that are similar to the quoted claim, and involve integration as a kind of "uncountable sum". Below I will show what you can and can't validly say.
What we can say
Using integration as our version of the "uncountable sum", there is a rough analogy that holds here in probability theory that mimics the intuitive properties put forward in the quote section. Suppose we have a continuous random variable $X$ with quantile function $Q_X$ and density function $f_X$. Then the norming property of probability gives:
$$\int \limits_\mathbb{R} f_X(x) \ dx = 1.$$
For any $x \in \mathbb{R}$ we have $\mathbb{P}(X=x) = \int_x^x f_X(x) \ dx = 0$ and yet we can get any quantile value $0 \leqslant p \leqslant 1$ by taking:
$$\int \limits_{-\infty}^{Q_X(p)} f_X(x) \ dx = p.$$
This means that the "uncountable sum" (actually an integral) over a bunch of outcomes with zero probability can give us any probability value we want between zero and one. This occurs because our "sum" is really an integral that is "summing" an infinite number of infinitesimally small values. That is essentially what this kind of "intuitive" description is pointing to.
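For a concrete illustration of these two facts with a standard normal:
p <- 0.3
integrate(dnorm, lower = 1, upper = 1)$value            # P(X = 1) = 0
integrate(dnorm, lower = -Inf, upper = qnorm(p))$value  # = 0.3, up to numerical error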
What we can't say
Unfortunately, it is not possible to extend this idea to get an exact analogy to the quoted section. Even taking the integral as our concept for an "uncountable sum", an exact analogy to the quoted section would be something like the following invalid result:
$$\ \ \int \limits_{-\infty}^{Q_X(p)} \mathbb{P}(X=x) \ dx = p.
\quad \quad \quad \text{(Invalid equation)}$$
The reason this does not work is that $\mathbb{P}(X=x) = \int_x^x f_X(r) \ dr \neq f_X(x)$. When we extend from the countable domain to the uncountable domain, and start dealing with "uncountable sums" as integrals, we start using infinitesimals. These infinitesimals are small enough that they give zero probability when we integrate over a single value, but they are actually larger than zero, so when we integrate over a larger (uncountable) set of them, they can give a positive value.
Consequently, we can see that the result really comes down to the fact that, once we start using an "uncountable sum" we also need to start using infinitesimals, which are infinitesimally small non-zero values. If we were to translate the quote into a strictly valid observation on mathematics, it would say that even though infinitesimal values look like zero from one perspective (e.g., integrating them over a single value), an integral of infinitesimal values can be any non-zero value. | Intuition - Uncountable sum of zeros | The problem here is that there isn't really any concept of an "uncountable sum" for you to have an intuition about! Summation is initially defined as a binary operation, then extended to finite sums | Intuition - Uncountable sum of zeros
The problem here is that there isn't really any concept of an "uncountable sum" for you to have an intuition about! Summation is initially defined as a binary operation, then extended to finite sums by induction, and then extended to countable sums by taking limits. That is as far as it goes. The closest analogy we have to a "sum" over an uncountable set is the integral. We can make some valid probability statements involving the integral that are similar to the quoted claim, and involve integration as a kind of "uncountable sum". Below I will show what you can and can't validly say.
What we can say
Using integration as our version of the "uncountable sum", there is a rough analogy that holds here in probability theory that mimics the intuitive properties put forward in the quote section. Suppose we have a continuous random variable $X$ with quantile function $Q_X$ and density function $f_X$. Then the norming property of probability gives:
$$\int \limits_\mathbb{R} f_X(x) \ dx = 1.$$
For any $x \in \mathbb{R}$ we have $\mathbb{P}(X=x) = \int_x^x f_X(x) \ dx = 0$ and yet we can get any quantile value $0 \leqslant p \leqslant 1$ by taking:
$$\int \limits_{-\infty}^{Q_X(p)} f_X(x) \ dx = p.$$
This means that the "uncountable sum" (actually an integral) over a bunch of outcomes with zero probability can give us any probability value we want between zero and one. This occurs because our "sum" is really an integral that is "summing" an infinite number of infinitesimally small values. That is essentially what this kind of "intuitive" description is pointing to.
What we can't say
Unfortunately, it is not possible to extend this idea to get an exact analogy to the quoted section. Even taking the integral as our concept for an "uncountable sum", an exact analogy to the quoted section would be something like the following invalid result:
$$\ \ \int \limits_{-\infty}^{Q_X(p)} \mathbb{P}(X=x) \ dx = p.
\quad \quad \quad \text{(Invalid equation)}$$
The reason this does not work is that $\mathbb{P}(X=x) = \int_x^x f_X(r) \ dr \neq f_X(x)$. When we extend from the countable domain to the uncountable domain, and start dealing with "uncountable sums" as integrals, we start using infinitesimals. These infinitesimals are small enough that they give zero probability when we integrate over a single value, but they are actually larger than zero, so when we integrate over a larger (uncountable) set of them, they can give a positive value.
Consequently, we can see that the result really comes down to the fact that, once we start using an "uncountable sum" we also need to start using infinitesimals, which are infinitesimally small non-zero values. If we were to translate the quote into a strictly valid observation on mathematics, it would say that even though infinitesimal values look like zero from one perspective (e.g., integrating them over a single value), an integral of infinitesimal values can be any non-zero value. | Intuition - Uncountable sum of zeros
The problem here is that there isn't really any concept of an "uncountable sum" for you to have an intuition about! Summation is initially defined as a binary operation, then extended to finite sums |
55,596 | Intuition - Uncountable sum of zeros | It's important to realize that probability, by its definition/construction as a mathematical concept, is only countably additive and not "uncountably additive." Generally in mathematics there is no such concept as uncountably additive (there are a few contexts where uncountable sums have been defined, but they aren't applicable to interpreting probability as an uncountable sum, and in those contexts an "uncountable sum of zeros" is always zero). Really, addition always only involves a finite number of terms and any infinite sum is defined in more complicated ways (e.g. as a limit of finite sums).
It could be problematic that in the shared text the author says first "we cannot assert the probability of an uncountable union of disjoint events is the sum of their probabilities" (which is true) and then says "an uncountable sum of zeros can be any number between 0 and 1." The last statement is a reasonable informal/intuitive way to think about it but an "uncountable sum of zeros" isn't generally a valid mathematical object and definitely isn't valid in this context. Maybe later in the text that is clarified. I would worry that the average reader would see this and just be perplexed, thinking that uncountable sums of zeros are a coherent mathematical thing in other contexts and can be equal to whatever we want (in a mathematically coherent way). The concept of an "uncountable sum of zeros" is simply not defined generally speaking (it is defined in some contexts, but always gives a sum of zero then).
It's important to understand that any time we are talking about "uncountable sums of zeros" in this context of probability, we are talking informally and intuitively and not about formally existing mathematical objects. Generally, it is important to distinguish between informal ideas and formal mathematical objects with the latter being described according to established axioms, definitions, etc.
In the context of probability (and measure in general), if we are to think of adding up uncountably many zeros to get the probability of an event, then that would result in uncountable sums of zeros giving any number between 0 and 1. So that is an informal intuitive reason why uncountable sums of zeros don't make sense here. There is no formal mathematical object, an "uncountable sum," that we are referring to here (i.e. it's just not defined in this context).
Consider the Cantor set $C$ and the set of irrationals $A=[0,1]\cap\mathbb Q^c$. If we are considering $X$ to be a uniformly distributed random variable in $[0,1]$, then $P(X\in C)=0$ and $P(X\in A)=1$ (the Lebesgue measure of these sets is firmly established, and that is identical to their probability under the uniform distribution). So these probabilities are not well-defined as uncountable sums of zeros. Similarly we can create an uncountable set of any probability between 0 and 1 that will similarly be thought of as giving an uncountable sum of zeros equal to its probability. E.g. the fat Cantor set $C_{1/4}$ with middle bits of length $1/4^n$ removed at each stage has measure $1/2$ gives $P(X\in C_{1/4})=1/2$. We can construct fat Cantor sets of any measure between 0 and 1, and each is uncountable. | Intuition - Uncountable sum of zeros | It's important to realize that probability, by its definition/construction as a mathematical concept, is only countably additive and not "uncountably additive." Generally in mathematics there is no su | Intuition - Uncountable sum of zeros
It's important to realize that probability, by its definition/construction as a mathematical concept, is only countably additive and not "uncountably additive." Generally in mathematics there is no such concept as uncountably additive (there are a few contexts where uncountable sums have been defined, but they aren't applicable to interpreting probability as an uncountable sum, and in those contexts an "uncountable sum of zeros" is always zero). Really, addition always only involves a finite number of terms and any infinite sum is defined in more complicated ways (e.g. as a limit of finite sums).
It could be problematic that in the shared text the author says first "we cannot assert the probability of an uncountable union of disjoint events is the sum of their probabilities" (which is true) and then says "an uncountable sum of zeros can be any number between 0 and 1." The last statement is a reasonable informal/intuitive way to think about it but an "uncountable sum of zeros" isn't generally a valid mathematical object and definitely isn't valid in this context. Maybe later in the text that is clarified. I would worry that the average reader would see this and just be perplexed, thinking that uncountable sums of zeros are a coherent mathematical thing in other contexts and can be equal to whatever we want (in a mathematically coherent way). The concept of an "uncountable sum of zeros" is simply not defined generally speaking (it is defined in some contexts, but always gives a sum of zero then).
It's important to understand that any time we are talking about "uncountable sums of zeros" in this context of probability, we are talking informally and intuitively and not about formally existing mathematical objects. Generally, it is important to distinguish between informal ideas and formal mathematical objects with the latter being described according to established axioms, definitions, etc.
In the context of probability (and measure in general), if we are to think of adding up uncountably many zeros to get the probability of an event, then that would result in uncountable sums of zeros giving any number between 0 and 1. So that is an informal intuitive reason why uncountable sums of zeros don't make sense here. There is no formal mathematical object, an "uncountable sum," that we are referring to here (i.e. it's just not defined in this context).
Consider the Cantor set $C$ and the set of irrationals $A=[0,1]\cap\mathbb Q^c$. If we are considering $X$ to be a uniformly distributed random variable in $[0,1]$, then $P(X\in C)=0$ and $P(X\in A)=1$ (the Lebesgue measure of these sets is firmly established, and that is identical to their probability under the uniform distribution). So these probabilities are not well-defined as uncountable sums of zeros. Similarly we can create an uncountable set of any probability between 0 and 1 that will similarly be thought of as giving an uncountable sum of zeros equal to its probability. E.g. the fat Cantor set $C_{1/4}$ with middle bits of length $1/4^n$ removed at each stage has measure $1/2$ gives $P(X\in C_{1/4})=1/2$. We can construct fat Cantor sets of any measure between 0 and 1, and each is uncountable. | Intuition - Uncountable sum of zeros
It's important to realize that probability, by its definition/construction as a mathematical concept, is only countably additive and not "uncountably additive." Generally in mathematics there is no su |
55,597 | Intuition - Uncountable sum of zeros | An example for which $\text{Prob}(X\in[a,b])$ should be able to take any value between 0 and 1 is a uniform distributed variable. In that case if $X \sim \mathcal{U}(0,1)$ then for $0\leq a\leq b\leq1$ you have $$\text{Prob}(X\in[a,b]) =b-a$$
This is a contradiction with
$$\text{Prob}(X\in[a,b]) =\sum_{\lbrace x \in \mathbb{Q} | 0 \leq x \leq 1 \rbrace} 0 = 0$$
A similar question on maths.stackexchange is the following:
Why is $\infty \cdot 0$ not clearly equal to $0$?
The answer there argues in a similar way, if you would treat multiplication of zero and infinite as if you would treat multiplication of finite numbers, then you could argue not just that $\infty \cdot 0 = 0$ but just as well that $\infty \cdot 0 = \infty$, $\infty \cdot 0 = 1$ or anything else.
The solution to the paradox is in considering whether the terms of the operation are correctly used. What is $0$ and what is $\infty$? Are they correctly used, or are they misinterpreted due to some ambiguities?
Steven Miller writes
"Our argument crucially used that the probability of a sum of disjoint events is a sum of the sum of probabilities of the events. This is true if it is a finite sum, or even a countably infinite sum, but not necessarily true if it's an uncountable sum."
So the problem isn't actually in the infinite sum. The issue with the infinite sum of singleton events is not in the uncountable property. Note: we could have a countable infinite sum of all events $X=x$ when we consider rational numbers $\mathbb{Q}$ (which are countable) as the singleton events.
The argument about the infinite sum is not wrong.
Instead, the error is in the definition of a probability space composed of only singleton events with each a zero probability. The problem is that by introducing an event space of singleton events, each with zero probability, we are creating a measure space that does not have total measure 1.
There is nothing wrong with defining a measure space where each rational number $x\in \mathbb{Q}$ between $0$ and $1$ has some constant measure $m\geq 0$. It is just not a probability measure. This is because for $m=0$ the total measure is zero, and for $m>0$ the total measure is infinite. | Intuition - Uncountable sum of zeros | An example for which $\text{Prob}(X\in[a,b])$ should be able to take any value between 0 and 1 is a uniform distributed variable. In that case if $X \sim \mathcal{U}(0,1)$ then for $0\leq a\leq b\leq | Intuition - Uncountable sum of zeros
An example for which $\text{Prob}(X\in[a,b])$ should be able to take any value between 0 and 1 is a uniform distributed variable. In that case if $X \sim \mathcal{U}(0,1)$ then for $0\leq a\leq b\leq1$ you have $$\text{Prob}(X\in[a,b]) =b-a$$
This is a contradiction with
$$\text{Prob}(X\in[a,b]) =\sum_{\lbrace x \in \mathbb{Q} | 0 \leq x \leq 1 \rbrace} 0 = 0$$
A similar question on maths.stackexchange is the following:
Why is $\infty \cdot 0$ not clearly equal to $0$?
The answer there argues in a similar way, if you would treat multiplication of zero and infinite as if you would treat multiplication of finite numbers, then you could argue not just that $\infty \cdot 0 = 0$ but just as well that $\infty \cdot 0 = \infty$, $\infty \cdot 0 = 1$ or anything else.
The solution to the paradox is in considering whether the terms of the operation are correctly used. What is $0$ and what is $\infty$? Are they correctly used, or are they misinterpreted due to some ambiguities?
Steven Miller writes
"Our argument crucially used that the probability of a sum of disjoint events is a sum of the sum of probabilities of the events. This is true if it is a finite sum, or even a countably infinite sum, but not necessarily true if it's an uncountable sum."
So the problem isn't actually in the infinite sum. The issue with the infinite sum of singleton events is not in the uncountable property. Note: we could have a countable infinite sum of all events $X=x$ when we consider rational numbers $\mathbb{Q}$ (which are countable) as the singleton events.
The argument about the infinite sum is not wrong.
Instead, the error is in the definition of a probability space composed of only singleton events with each a zero probability. The problem is that by introducing an event space of singleton events, each with zero probability, we are creating a measure space that does not have total measure 1.
There is nothing wrong with defining a measure space where each rational number $x\in \mathbb{Q}$ between $0$ and $1$ has some constant measure $m\geq 0$. It is just not a probability measure. This is because for $m=0$ the total measure is zero, and for $m>0$ the total measure is infinite. | Intuition - Uncountable sum of zeros
An example for which $\text{Prob}(X\in[a,b])$ should be able to take any value between 0 and 1 is a uniform distributed variable. In that case if $X \sim \mathcal{U}(0,1)$ then for $0\leq a\leq b\leq |
55,598 | Intuition - Uncountable sum of zeros | Let f: X -> R, f(X)>=0 and X is a subset of R (real numbers), if we define
Sum_{x in X}f(x) := sup{sum_{x in F}f(x), F is a finite subset of X}
Sum_{x inX}0 = 0 | Intuition - Uncountable sum of zeros | Let f: X -> R, f(X)>=0 and X is a subset of R (real numbers), if we define
Sum_{x in X}f(x) := sup{sum_{x in F}f(x), F is a finite subset of X}
Sum_{x inX}0 = 0 | Intuition - Uncountable sum of zeros
Let f: X -> R, f(X)>=0 and X is a subset of R (real numbers), if we define
Sum_{x in X}f(x) := sup{sum_{x in F}f(x), F is a finite subset of X}
Sum_{x inX}0 = 0 | Intuition - Uncountable sum of zeros
Let f: X -> R, f(X)>=0 and X is a subset of R (real numbers), if we define
Sum_{x in X}f(x) := sup{sum_{x in F}f(x), F is a finite subset of X}
Sum_{x inX}0 = 0 |
55,599 | Is overfitting an issue if all I care about is training error | With this type of propensity-score evaluation you can have less fear of overfitting, but you can take it too far. This paper, for example, concluded from simulation studies:
Overfitting of propensity score models should be avoided to obtain reliable estimates of treatment or exposure effects in individual studies.
If you are conducting a survey, you presumably want to apply the survey results to new cases, not just to describe the training set. Insofar as overfitting of the propensity-score model might make the survey results less applicable outside the training set, you need to take that into account. | Is overfitting an issue if all I care about is training error | With this type of propensity-score evaluation you can have less fear of overfitting, but you can take it too far. This paper, for example, concluded from simulation studies:
Overfitting of propensity | Is overfitting an issue if all I care about is training error
With this type of propensity-score evaluation you can have less fear of overfitting, but you can take it too far. This paper, for example, concluded from simulation studies:
Overfitting of propensity score models should be avoided to obtain reliable estimates of treatment or exposure effects in individual studies.
If you are conducting a survey, you presumably want to apply the survey results to new cases, not just to describe the training set. Insofar as overfitting of the propensity-score model might make the survey results less applicable outside the training set, you need to take that into account. | Is overfitting an issue if all I care about is training error
With this type of propensity-score evaluation you can have less fear of overfitting, but you can take it too far. This paper, for example, concluded from simulation studies:
Overfitting of propensity |
55,600 | Does an explicit expression exist for the moments of the residuals in least squares regression? | Let's take a classical linear regression model:
$$y_i = \boldsymbol{x}_i^T\beta + \varepsilon_i$$
where $\varepsilon_1, ..., \varepsilon_n \overset{IID}{\sim}\mathcal{N}(0, \sigma^2)$ and $\boldsymbol{x}_i^T = (1, x_{i1}, ...x_{ip})$.
This model can be written in matrix form as:
$$Y = X\beta + \boldsymbol{\varepsilon}$$
where $Y\in\mathbb{R}^n$ is the vector of the responses, $X\in\mathbb{R}^{n \times p}$ is the design matrix and $\boldsymbol{\varepsilon} \sim\mathcal{N}(0, \sigma^2 I_n)$ is a multivariate normal vector.
The least square estimator is given by $\hat\beta = (X^T X)^{-1}X^TY$ and the residual $\hat{e}_i$, as you defined it, is given by
$$\begin{array}{ccl}
\hat{e_i} & = & \boldsymbol{x}_i^T\hat\beta - y_i\\
& = & \boldsymbol{x}_i^T(X^T X)^{-1}X^TY - y_i\\
& = & \boldsymbol{x}_i^T(X^T X)^{-1}X^T(X\beta + \boldsymbol{\varepsilon}) - y_i\\
& = & \boldsymbol{x}_i^T(X^T X)^{-1}X^TX\beta + \boldsymbol{x}_i^T(X^T X)^{-1}X^T\boldsymbol{\varepsilon} - y_i\\
& = & \boldsymbol{x}_i^T\beta - y_i +\boldsymbol{x}_i^T(X^T X)^{-1}X^T\boldsymbol{\varepsilon}\\
& = & -\varepsilon_i + \boldsymbol{x}_i^T(X^T X)^{-1}X^T\boldsymbol{\varepsilon}\\
& = & (-b_i^T + \boldsymbol{x}_i^T(X^TX)^{-1}X^T)\boldsymbol\varepsilon
\end{array}$$
where $b_i$ is the vector of $\mathbb{R}^n$ made of zeros and a 1 at the $i-th$ position.
Now, as you know that $\varepsilon \sim\mathcal{N}(0, \sigma^2 I_n)$, using the property that for any full rank matrix $M$, if $Z \sim\mathcal{N}(\boldsymbol{\mu}, \Sigma)$, then $MZ\sim\mathcal{N}(M\boldsymbol{\mu}, M\Sigma M^T)$,
you get that $\hat{e}_i \sim\mathcal{N}(0, s^2)$ where
$$\begin{array}{ccl}
s^2 & = & \sigma^2(-b_i^T + \boldsymbol{x}_i^T(X^TX)^{-1}X^T)(-b_i^T + \boldsymbol{x}_i^T(X^TX)^{-1}X^T)^T\\
& = & \sigma^2 (1 - h_{ii})
\end{array}$$
where $h_{ii} = \boldsymbol{x}_i^T(X^TX)^{-1}\boldsymbol{x}_i$ is the leverage of $\boldsymbol{x}_i$, between 0 and 1.
From that, you can get the moments of the residuals using the moments of the normal distribution.
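As a simulated sanity check of the $\sigma^2(1-h_{ii})$ result (the design matrix and parameters below are my own arbitrary choices; note that R's residuals are $y_i - \hat y_i$, the negative of $\hat e_i$ above, which leaves the variance unchanged):
set.seed(1)
n <- 30; sigma <- 2
X <- cbind(1, runif(n), runif(n))
beta <- c(1, 2, -1)
h <- diag(X %*% solve(crossprod(X)) %*% t(X))      # leverages h_ii
res1 <- replicate(2e4, {
  y <- X %*% beta + rnorm(n, sd = sigma)
  lm.fit(X, y)$residuals[1]                        # residual of observation 1
})
c(empirical = var(res1), theoretical = sigma^2 * (1 - h[1]))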
Getting the joint distribution of the vector of residuals $\hat{\boldsymbol{e}}$ is also possible since $\hat{\boldsymbol{e}} = (I - H)\boldsymbol{\varepsilon}$ where $H = X(X^TX)^{-1}X^T$ is the hat matrix: $\hat{\boldsymbol{e}}$ follows a singular multivariate normal distribution (singular since its variance matrix $(I - H)$ is singular). | Does an explicit expression exist for the moments of the residuals in least squares regression? | Let's take a classical linear regression model:
$$y_i = \boldsymbol{x}_i^T\beta + \varepsilon$$
where $\varepsilon_1, ..., \varepsilon_n \overset{IID}{\sim}\mathcal{N}(0, \sigma^2)$ and $\boldsymbol{x | Does an explicit expression exist for the moments of the residuals in least squares regression?
Let's take a classical linear regression model:
$$y_i = \boldsymbol{x}_i^T\beta + \varepsilon_i$$
where $\varepsilon_1, ..., \varepsilon_n \overset{IID}{\sim}\mathcal{N}(0, \sigma^2)$ and $\boldsymbol{x}_i^T = (1, x_{i1}, ...x_{ip})$.
This model can be written in matrix form as:
$$Y = X\beta + \boldsymbol{\varepsilon}$$
where $Y\in\mathbb{R}^n$ is the vector of the responses, $X\in\mathbb{R}^{n \times p}$ is the design matrix and $\boldsymbol{\varepsilon} \sim\mathcal{N}(0, \sigma^2 I_n)$ is a multivariate normal vector.
The least square estimator is given by $\hat\beta = (X^T X)^{-1}X^TY$ and the residual $\hat{e}_i$, as you defined it, is given by
$$\begin{array}{ccl}
\hat{e_i} & = & \boldsymbol{x}_i^T\hat\beta - y_i\\
& = & \boldsymbol{x}_i^T(X^T X)^{-1}X^TY - y_i\\
& = & \boldsymbol{x}_i^T(X^T X)^{-1}X^T(X\beta + \boldsymbol{\varepsilon}) - y_i\\
& = & \boldsymbol{x}_i^T(X^T X)^{-1}X^TX\beta + \boldsymbol{x}_i^T(X^T X)^{-1}X^T\boldsymbol{\varepsilon} - y_i\\
& = & \boldsymbol{x}_i^T\beta - y_i +\boldsymbol{x}_i^T(X^T X)^{-1}X^T\boldsymbol{\varepsilon}\\
& = & -\varepsilon_i + \boldsymbol{x}_i^T(X^T X)^{-1}X^T\boldsymbol{\varepsilon}\\
& = & (-b_i^T + \boldsymbol{x}_i^T(X^TX)^{-1}X^T)\boldsymbol\varepsilon
\end{array}$$
where $b_i$ is the vector of $\mathbb{R}^n$ made of zeros and a 1 at the $i-th$ position.
Now, as you know that $\varepsilon \sim\mathcal{N}(0, \sigma^2 I_n)$, using the property that for any full rank matrix $M$, if $Z \sim\mathcal{N}(\boldsymbol{\mu}, \Sigma)$, then $MZ\sim\mathcal{N}(M\boldsymbol{\mu}, M\Sigma M^T)$,
you get that $\hat{e}_i \sim\mathcal{N}(0, s^2)$ where
$$\begin{array}{ccl}
s^2 & = & \sigma^2(-b_i^T + \boldsymbol{x}_i^T(X^TX)^{-1}X^T)(-b_i^T + \boldsymbol{x}_i^T(X^TX)^{-1}X^T)^T\\
& = & \sigma^2 (1 - h_{ii})
\end{array}$$
where $h_{ii} = \boldsymbol{x}_i^T(X^TX)^{-1}\boldsymbol{x}_i$ is the leverage of $\boldsymbol{x}_i$, between 0 and 1.
From that, you can get the moments of the residuals using the moments of the normal distribution.
Getting the joint distribution of the vector of residuals $\hat{\boldsymbol{e}}$ is also possible since $\hat{\boldsymbol{e}} = (I - H)\boldsymbol{\varepsilon}$ where $H = X(X^TX)^{-1}X^T$ is the hat matrix: $\hat{\boldsymbol{e}}$ follows a singular multivariate normal distribution (singular since its variance matrix $(I - H)$ is singular). | Does an explicit expression exist for the moments of the residuals in least squares regression?
Let's take a classical linear regression model:
$$y_i = \boldsymbol{x}_i^T\beta + \varepsilon$$
where $\varepsilon_1, ..., \varepsilon_n \overset{IID}{\sim}\mathcal{N}(0, \sigma^2)$ and $\boldsymbol{x |